I’ve been reading up on plant water sensing to get some better background for projects we’re starting in the lab this summer. I came across the photo below in a paper describing the identification of a gene involved in sensing water gradients, called miz1, short for MIZU-KUSSEI1, from the Japanese words for “water” (mizu) and “tropism” (kussei).
The photo shows an elegant experiment the researchers designed to pick out mutants defective in water sensing. They allowed the roots to grow in a Petri dish along a block of agar (seen in the upper left part of each panel) and into an opening. Normally, an open space in a closed Petri dish would have very high humidity, but they added a solution that soaks up water vapor, so the air was very dry.
The two photos across the top (D1 and D2) show the response of a wild-type root when it grows into the dry chamber — it immediately turns back toward the agar surface, where the water is. The two photos across the bottom (E1 and E2) show the mutant failing to curve back toward the agar. They found this mutant like a needle in a haystack, screening 20,000 mutant lines for ones that fail to respond to the water vapor gradient.
The researchers have gone on to study this gene in great detail, and have made a number of exciting discoveries about how plants sense water.
Citation: Kobayashi, A., A. Takahashi, Y. Kakimoto, Y. Miyazawa, N. Fujii, A. Higashitani, and H. Takahashi. 2007. A gene essential for hydrotropism in roots. Proceedings of the National Academy of Sciences of the United States of America 104: 4724–4729.
Everybody likes it when their work is recognized, especially when the recognition is coming from leaders in the field. Over the course of the past week, your humble correspondent has had work noted in two very different realms. One of my posts here on Gravitropic was linked by several people, most visibly by Dave Winer, the developer of the software I was discussing, resulting in a big (for this site) spike in traffic. At the same time, an article was published in Current Biology that cited our recent paper on lateral root patterning. Both events represent the same principle and illustrate the power of the citation. At the same time, there seem to be significant differences between online links and scholarly citations that may be worth considering. I wonder whether scholarly writing could take some lessons from online linking.
When I link to an article or blog post on the web, or when I cite an article as a building block in an argument, I am assigning credibility to that source. I am usually saying I agree with the point being made, and in the case of a scientific article, I am likely proposing to build on top of that finding. Sure, sometimes we link to outlandish articles online just to point and mock, or we cite findings that are refuted by the results at hand, but those are the exception. By and large, to cite or link is to endorse.
It follows from this that I judge the work I am citing to be of high quality or in some way noteworthy, and the act of citing it helps it grow in status. In the case of online articles, more links from quality sources lead to greater status and higher ranking in search results. But for scientific articles, the surfacing of high impact papers is not an automatic process. It seems to rely more on a researcher noticing a particular work cited by multiple sources than on an algorithm returning a work closer to the top of the search results. I would posit that the process of identifying important work and incorporating it is part of the art of practicing science. Of course you can set a database like Web of Science to sort by number of times cited, but that tends not to be all that useful. I wonder whether any scholarly database identifies the important papers in a field algorithmically, in a way similar to PageRank.
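To make the PageRank comparison concrete, here is a toy sketch of how such an algorithm could score papers in a citation network. Everything here is hypothetical — the paper names and the graph are made up — but the core idea is real: a paper's score depends not just on how many citations it receives, but on the scores of the papers citing it.

```python
# Toy PageRank-style ranking over a small, made-up citation graph.
# Each key cites the papers in its list; all names are hypothetical.
citations = {
    "paper_a": ["paper_c"],
    "paper_b": ["paper_a", "paper_c"],
    "paper_c": [],
    "paper_d": ["paper_a", "paper_b", "paper_c"],
}

def pagerank(graph, damping=0.85, iterations=50):
    """Iteratively redistribute each paper's score along its citations."""
    papers = list(graph)
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}  # start with equal scores
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in papers}
        for citer, cited in graph.items():
            if cited:
                # split the citing paper's score among the papers it cites
                share = rank[citer] / len(cited)
                for p in cited:
                    new_rank[p] += damping * share
            else:
                # a paper citing nothing spreads its score over everyone
                for p in papers:
                    new_rank[p] += damping * rank[citer] / n
        rank = new_rank
    return rank

scores = pagerank(citations)
best = max(scores, key=scores.get)
```

In this toy network, paper_c is cited by three well-connected papers and so accumulates the most weight. The same principle is why a citation from a highly cited review counts for more, in this scheme, than one from an obscure paper.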
Links and citations also differ when it comes to which side of the link has the most value. In the case of research and scholarship, articles that become highly cited earn their authors an increasing level of influence within a field. While this is true up to a point with online links, much of the value in this realm seems to lie with those entities — individuals or companies — that do the linking. One example of this is Google itself, which created value by “organizing the world’s information”. It drives so much of the traffic on the web by acting as an index and arbiter of quality for a given keyword or topic. In a similar way, sites like Daring Fireball that link to important articles in a particular field have become extremely valuable, in part for their original writing, but also due to the web traffic they drive.
I wonder why there are not such drivers of traffic in specific, narrow fields of research, experts that both express an opinion and drive viewers to particular articles worth reading. In a certain sense this is what review articles do, but on a timescale of years. Is this ‘middleman’ missing because of the time and caution required to puzzle together a research mystery? Is it missing because nobody has the time? Maybe the missing element in scholarly work is the ‘pageview’ metric? Will the incorporation of page views by more progressive online publishers like the PLoS journals change any of this?
Yesterday marked the first day of the summer research season. One of the things I really like about my job is the cycle of the academic year: the excitement and anticipation of the new school year every fall, the sense of exhaustion just before the break, autumn on campus (you can almost picture the tweed, I know), the intermission between semesters, and so on. Summer research with students is one of my favorite times.
I was at the dentist yesterday morning, and he was asking what projects I was working on in the lab for the summer. I told him a few of the new directions we were heading and he commented that he hoped everything went well and that we had a successful summer. That exchange started me thinking about what defines a successful summer for me, and it may not be exactly what you would think.
Of course the highest form of success for summer research is to generate publishable data, and I make this the clear goal for the students. In an ideal world, they would work on an important question, carry out carefully controlled experiments in a systematic way, and find a clear difference between their control and experimental treatments. Although the first three of these factors are under their control, there is no way to know the outcome of an experiment and its significance in advance, so I try not to think of success in terms of the outcomes of experiments and whether or not they represent publishable results. If I were at a research university, I’m sure I would have a different perspective, but I’m not, and the nature of working with undergraduates doesn’t permit this definition of success.
If the publishability of the results doesn’t determine the success of a summer research experience, what does? For me, I think summer research has been successful when a student has done real research. That means they grasped a question (see below for more on this), conceived of an experiment to test a hypothesis, performed the experiment, analyzed the data, and evaluated the results in light of their original hypothesis. Sometimes (hopefully) their work forms a unit that, along with others built on or around it, can become a paper.
‘Grasping a question’ is not to say they get free rein to choose any topic they want. In my lab, students have to focus on an area that supports the direction of the lab as a whole. I think it’s important that they own the project to some degree, but the only way to ensure the importance of their project is to limit it to something in my area of expertise.
That tube-like thing that characterizes daffodil flowers is, in fact, as funky as it looks:
“We found that the corona develops from the hypanthium, and is not simply an extension of the petals or stamens,” says Dr Scotland. “The corona is an independent organ, sharing more genetic identity with stamens, which develops after the other organs are fully established.”
Quick, what’s the first thing you think of when you think about plants? A tree? Leaves? A flower? Chances are slim that you thought first of a root, yet roots make up nearly half of the typical plant’s body. They are the hidden side of the plant, feeling their way in the dark, around stones and through soil, in search of the water and minerals needed for survival. They sense things like moisture gradients, solid objects like rocks and pebbles, and can tell up from down, using these cues in ways that remain largely unknown to guide their growth. Considering that we humans are completely dependent upon our photosynthetic, green cousins for the food we eat and the air we breathe, and considering the vulnerability of plants to drought, we would do well to learn more about how roots do what they do.
Despite making up the vast majority of the root system, lateral roots remain uncharted territory when it comes to how they choose their path. For example, lateral roots are content to grow sideways for long periods, a situation that is anathema to primary roots, which react swiftly with a course correction when they find themselves growing sideways. It isn’t that lateral roots are unable to mount such a course correction, however. When displaced from their route, they will return to it, whatever it was. But when the course was not-quite-vertical, how do they know where to go? Are they using the same cellular tools as the primary root to detect gravity? Are the same circuits that activate curvature in the primary root activated in lateral roots? Given their role in nutrient uptake, do lateral roots change their course when nutrient conditions change? In our most recent paper, we set out to address some of these questions about lateral root growth. Over the coming weeks, I’ll be posting more on how we carried out our experiments and what we found out.
Elementary school students often learn that plants grow toward the light. This seems straightforward, but in reality, the genes and pathways that allow plants to grow and move in response to their environment are not fully understood. Leading plant scientists explore one of the most fundamental processes in plant biology—plant movement in response to light, water, and gravity—in a January Special Issue of the American Journal of Botany.
Lateral root orientation and gravitropism are affected by Pi status and may provide an important additional parameter for describing root responses to low Pi. The data also support the conclusion that gravitropic setpoint angle reacts to nutrient status and is under dynamic regulation.
I’ll post again on the work that went into our paper, including a breakdown of the inputs of time and talent that made this work possible. In short though, three awesome students worked many hours in the lab over the course of four years to produce these insights.
A few days ago Kevin Folta, a colleague whose main research focuses on strawberry genetics and crop improvement, tweeted a link to an interview he did with HuffPost Science. The video sums up a lot of the same ideas I try to communicate in my classes about genetically modified foods, both their risks and their benefits. The post on HuffPost Science has received almost 2000 comments as of this writing, so it clearly struck a nerve.
One of the points he makes is that humans have been doing genetic modification for tens of thousands of years. All of our crop plants are the result of mutation, selection, natural hybridization, and in some cases, deliberate hybridization. There is no such thing as ‘natural corn’ — it is the product of human civilization and could not survive without us. And when genetic modification happens naturally or through traditional plant breeding, whole genomes are scrambled. Modern genetic engineering allows targeted access to a single gene at a time, either by inserting a new, well-studied gene into a plant, or by regulating the expression of an existing gene. But for some reason, the backlash against the modern, targeted approach far exceeds that against other techniques.
Sometimes the backlash is motivated by a disdain for the large companies that control so much of our food supply (and our politicians). But there is also a genuine fear that scientists are messing around with things they don’t understand and it will kill us all, or at least seriously mess up our lives and environments. I am all in favor of testing new crops for human and environmental safety. I believe crop biotechnology deserves neither a free pass nor impossible regulations. To hold transgenic crops to a standard that they be proven to do no harm to an ecosystem (an effectively impossible claim to uphold) when no other crop has ever been held to such a standard is hypocritical.
Furthermore, we found that until 1990, the proportion of top (i.e., most cited) papers published in the top (i.e., highest IF) journals had been increasing, so the top journals were becoming the exclusive depositories of the most cited research. Since 1991, however, the pattern has been the exact opposite: among top papers, the proportion NOT published in top journals, which had been decreasing, is now increasing. Hence, the best (i.e., most cited) work now comes from increasingly diverse sources, irrespective of the journals’ IFs.
To me, this is an indicator of the power of, first scholarly databases, then the internet, to make important work more discoverable. When I began graduate school, I remember doing literature work in the library and watching all the faculty come for their weekly journal check-in to “stay current” (or name-check themselves and their pals). How much more efficient and effective it is now to rely on things like saved database searches to keep us informed of important advances in our field. And database searches democratize by returning all related citations, not just those from so-called top-tier journals. This is a great step forward, I think.
While it’s great to see such widespread coverage of a plant science discovery, as I read through each report I couldn’t help but notice the disconnect between the bold titles and the substance of each article.
Here is the science: the researchers found the molecular identity of a historical mutation in fruit development, long selected for by plant breeders, that makes the fruits more uniform in color and a lighter green. The gene encodes a transcription factor that controls chloroplast development. When mutated, as in almost all cultivated tomatoes, it leads to fruits with fewer chloroplasts, which explains the lighter, more uniform coloration. It also leads to lower carbohydrate and pigment concentrations, which the researchers suggest could impact flavor.
The problem with the bold article titles is, the flavor of a tomato is much, much more complex than its sugar content. Tomatoes contain over 400 volatile compounds, each of which interacts with the others and nonvolatile compounds to produce the overall flavor profile. Understanding how each of those hundreds of molecules is formed and processed in the fruit throughout ripening is likely to yield better tasting tomatoes, and maybe having more total carbohydrates will be a part of that process. But the original article didn’t even begin to explore flavor, so why is that the take-home message of all the news pieces?
To me, the Science paper is extremely interesting, but not for the reasons highlighted in these articles. This is a case of classical breeding carrying out selection on a trait that seemed to improve the crop, at least from the standpoint of the grower, making it more consistent and easier to market. But now that we know what (in the molecular sense) they were selecting, we can see it was probably a poor tradeoff. This is yet another in a long line of links between classical breeding choices and molecular genetics, and this represents an excellent way to educate the public that all of our food is genetically modified! It all has DNA! Genes, even! I continue to be fascinated as we uncover the ancient — and recent — mutations that produced the foods we know, and I think it provides a great chance to inform and begin a dialog over the nature of farming, breeding, and genetics.