Everybody likes it when their work is recognized, especially when the recognition comes from leaders in the field. Over the past week, your humble correspondent has had work noted in two very different realms. One of my posts here on Gravitropic was linked by several people, most visibly by Dave Winer, the developer of the software I was discussing, resulting in a big (for this site) spike in traffic. At the same time, an article was published in Current Biology that cited our recent paper on lateral root patterning. Both events represent the same principle and illustrate the power of the citation. Yet there seem to be significant differences between online links and scholarly citations that are worth considering, and I wonder whether scholarly writing could take some lessons from online linking.
When I link to an article or blog post on the web, or when I cite an article as a building block in an argument, I am assigning credibility to that source. I am usually saying I agree with the point being made, and in the case of a scientific article, I am likely proposing to build on top of that finding. Sure, sometimes we link to outlandish articles online just to point and mock, or we cite findings that are refuted by the results at hand, but those are the exception. By and large, to cite or link is to endorse.
It follows from this that I judge the work I am citing to be of high quality or in some way noteworthy, and the act of citing it helps it grow in status. In the case of online articles, more links from quality sources lead to greater status and higher ranking in search results. But for scientific articles, the surfacing of high-impact papers is not an automatic process. It seems to rely more on a researcher noticing a particular work cited by multiple sources than on an algorithm returning that work closer to the top of the search results. I would posit that the process of identifying important work and incorporating it is part of the art of practicing science. Of course you can set a database like Web of Science to sort by number of times cited, but that tends not to be all that useful. I wonder whether any scholarly database identifies important papers in a field algorithmically, in a way similar to PageRank?
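To make the contrast concrete, here is a minimal sketch of PageRank-style scoring applied to a toy citation graph. The paper names and the graph are entirely hypothetical; the point is only that a paper's score grows when it is cited by papers that are themselves well cited, which is the algorithmic analogue of a researcher noticing a work cited by multiple good sources.

```python
def pagerank(citations, damping=0.85, iterations=50):
    """Score papers by citation structure.

    `citations` maps each paper to the list of papers it cites.
    """
    papers = list(citations)
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in papers}
        for citing, cited_list in citations.items():
            if cited_list:
                # A paper passes a share of its own rank to each work it cites.
                share = damping * rank[citing] / len(cited_list)
                for cited in cited_list:
                    new_rank[cited] += share
            else:
                # A paper citing nothing spreads its rank evenly (dangling node).
                for p in papers:
                    new_rank[p] += damping * rank[citing] / n
        rank = new_rank
    return rank

# Hypothetical citation graph: B, C, and D all cite A; A cites nothing.
graph = {
    "A": [],
    "B": ["A"],
    "C": ["A", "B"],
    "D": ["A", "C"],
}
ranks = pagerank(graph)
top = max(ranks, key=ranks.get)  # "A": most cited, and cited by well-cited papers
```

Unlike a raw times-cited sort, this kind of scoring weights each citation by the standing of the citing paper, which is exactly what the Web of Science sort described above does not do.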
Links and citations also differ when it comes to which side of the link holds the most value. In the case of research and scholarship, articles that become highly cited earn their authors an increasing level of influence within a field. While this is true up to a point with online links, much of the value on the web seems to lie with those entities — individuals or companies — that do the linking. One example of this is Google itself, which created value by “organizing the world’s information”. It drives much of the traffic on the web by acting as an index and arbiter of quality for a given keyword or topic. In a similar way, sites like Daring Fireball that link to important articles in a particular field have become extremely valuable, in part for their original writing, but also for the web traffic they drive.
I wonder why there are no such drivers of traffic in specific, narrow fields of research — experts who both express an opinion and steer readers to particular articles worth reading. In a certain sense this is what review articles do, but on a timescale of years. Is this ‘middleman’ missing because of the time and caution required to puzzle together a research mystery? Is it missing because nobody has the time? Maybe the missing element in scholarly work is the ‘pageview’ metric? Will the adoption of page-view counts by more progressive online publishers like the PLoS journals change any of this?