by Jonathan Latham, PhD
Professor Pamela Ronald is probably the scientist most widely known for publicly defending genetically engineered (GE or GMO) crops. Her media persona, familiar to readers of the Boston Globe, the Wall Street Journal, the Economist, NPR, and many other global media outlets, is that of a scientist who takes no prisoners.
After New York Times chief food writer Mark Bittman advocated GMO labelling, she called him “a scourge on science” who “couches his nutty views in reasonable-sounding verbiage”. His opinions were “almost fact- and science-free” continued Ronald. In 2011 she claimed in an interview with the US Ambassador to New Zealand: “After 14 years of cultivation and a cumulative total of two billion acres planted, GE crops have not caused a single instance of harm to human health or the environment.”
This second career of Pamela Ronald’s, as advocate of GMOs (which also includes being a book author, and a contributor to and board member of the blog Biofortified), is founded on her first: at the University of California, Davis she is Professor in the Department of Plant Pathology, Director of the Laboratory for Crop Genetics Innovation, and Director of Grass Genetics at the Joint BioEnergy Institute, among other positions.
This background is relevant because Pamela Ronald is now also fighting on her home front. Her scientific research has become the central question in a controversy that may destroy both careers. In the last year Ronald’s laboratory at UC Davis has retracted two scientific papers (Lee et al. 2009 and Han et al 2011) and other researchers have raised questions about a third (Danna et al 2011). The two retracted papers form the core of her research programme into how rice plants detect specific bacterial pathogens (1).
When the mighty fall, others try to catch them
The first paper was retracted on January 29th 2013, from the journal PLoS ONE (Han et al 2011). News of the retraction was (belatedly) published on the 11th of September 2013 by the blog Retraction Watch under the headline: Doing the right thing: Researchers retract quorum sensing paper after public process (2). [CORRECTION: Jan 29th was the date the Ronald group notified PLoS ONE of probable errors. Retraction formally occurred on Sept 9th. Apologies to Retraction Watch as there was no delay to explain. Footnote 2 is therefore superfluous.]
The second retraction, from Science, was officially announced a month later, on October 11th 2013 (Lee et al 2009). This time, retraction was accompanied by a lengthy explanation (Anatomy of a Retraction, by Pamela Ronald) in the official blog of Scientific American. In this article, Ronald blamed the work of unnamed former lab members from Korea and Thailand. Retraction Watch reported the retraction as: Pamela Ronald does the right thing again. The same day, The Scientist magazine quoted Pamela Ronald saying it was “just a mix-up” and repeating her claim that “Former lab members who had begun new positions as professors in Korea and Thailand were devastated to learn that [we] could not repeat their work.”
Scientifically, the two retractions mean that the molecule (Ax21) identified by Pamela Ronald’s group (in Lee et al 2009) is not after all what rice plants use to detect the rice blight pathogen (Xanthomonas oryzae), and neither is it a ‘quorum sensing’ molecule, as described in Han et al 2011.
The media coverage of the retractions didn’t query Ronald’s mea non culpa. Instead, reports echoed UC Berkeley professor Jonathan Eisen: ‘Kudos to Pam’ for stepping forward.
Did Pamela Ronald jump, or was she pushed?
In fact, scientific doubts had been raised about Ronald-authored publications at least as far back as August 2012. In that month Ronald and co-authors responded in the scientific journal The Plant Cell to a critique from a German group. The German researchers had been unable to repeat Ronald’s discoveries in a third Ax21 paper (Danna et al 2011) and they suggested as a likely reason that her samples were contaminated (Mueller et al 2012).
Furthermore, the German paper also asserted that, for a theoretical reason (3), her group’s claims were inherently unlikely.
In conclusion, the German group wrote:
“While inadvertent contamination is a possible explanation, we cannot finally explain the obvious discrepancies to the results in … Danna et al. (2011)”
Pamela Ronald, however, did not concede any of the points raised by the German researchers and did not retract the Danna et al 2011 paper. Instead, she published a rebuttal (Danna et al 2012) (4).
The subsequent retractions of Lee et al 2009 and Han et al 2011, set in motion in January 2013, confirm that very sizable scientific errors were indeed being made in the Ronald laboratory. But more importantly for the ‘Kudos to Pam’ story, it was not Pamela Ronald who initiated public discussion of the credibility of her research.
Was it “just a mix-up”?
Reporting of the retractions also accepted Pamela Ronald’s assertion that simple errors by two foreign and now-departed laboratory members were to blame. But her more detailed description of events, given in the technical footnotes below her Scientific American blog post, contradicts that notion.
Ronald’s footnotes admit two mislabellings, along with failures to establish and use replicable experimental conditions, and at least two failed complementation tests. Each mistake appears to have been compounded by a systemic failure to use basic experimental controls (5). Leading up to the retractions, then, were an assortment of practical errors, specific departures from standard scientific best practice, and lapses of judgement in failing to adequately question her lab’s unusual (and therefore newsworthy) results.
Who is responsible?
The International Committee of Medical Journal Editors (ICMJE) published the first and most widely cited principles of authorial ethics in science. These recommendations are followed by thousands of medical and other scientific journals. The following is the first paragraph of the section regarding authorship:
“Authorship confers credit and has important academic, social, and financial implications. Authorship also implies responsibility and accountability for published work. The following recommendations are intended to ensure that contributors who have made substantive intellectual contributions to a paper are given credit as authors, but also that contributors credited as authors understand their role in taking responsibility and being accountable for what is published.” (italics added)
The ICMJE guidelines go on to state that authorship should not be conferred on those who do not agree to be accountable for all aspects of the accuracy and integrity of the work.
Some scientific journals have their own policies that provide more specifics. The journal Arteriosclerosis, Thrombosis, and Vascular Biology states:
“Principal investigators are ultimately responsible for the integrity of their research data and, thus, every effort should be made to examine and question primary data.”
“Each author should have participated sufficiently in the work to take public responsibility for appropriate portions of the content.”
Lastly, Science (publisher of Ronald’s retracted Lee et al 2009 paper) has this policy on authorship:
“The senior author from each group is required to have examined the raw data their group has produced.”
It is perhaps surprising, then, that a senior scientist should publicly disclaim responsibility for research carried out in their own laboratory.
Footnotes
(1) Pamela Ronald appeared to be a leader in understanding the mechanisms by which rice, and other plants, detect and resist important pathogens. She and others have (or in the case of Ronald, thought they had) identified specific molecules characteristic of each pathogen that are detected by dedicated receptors in plants. In this case, rice cultivars resistant to the bacterium Xanthomonas oryzae detect a small protein molecule called Ax21 that derives from the pathogen. The ability to detect Ax21 enables rapid activation of defences and thus confers resistance to the pathogen. This line of research, as it pertains to Pamela Ronald and Ax21, is now retracted.
(2) Retraction Watch does not explain the delay of over 8 months between the retraction and their report of it. Neither is the “after public process” part of the headline explained.
(3) The theoretical reason is that molecules that warn of incipient plant pathogen infection (as Ax21 was supposed to do) are typically detected by receptors at very low concentrations; otherwise they wouldn’t serve as useful warning molecules. Yet in the experiments from Pamela Ronald’s laboratory (Lee et al. 2009 and Danna et al. 2011), Ax21 had to be present at concentrations millions of times higher than other elicitors to achieve the same effects (Mueller et al 2012).
(4) The rebuttal argued, among other points, that: “experimental differences may explain the failure of Mueller et al. (2012) to observe FLS2-dependent defense-related responses.” (Danna et al 2012).
(5) The errors noted by Pamela Ronald in her Scientific American blog were: a) “By careful sleuthing, [lab members] found that two out of 12 of the strains … were mislabeled.” b) “In the more recent experiments we found that although the modified (sulfated) Ax21 peptide did induce resistance in Xa21 plants, it also induced resistance in plants lacking the Xa21 immune receptor, an important control.” c) “Furthermore, results of the pretreatment test were highly dependent on greenhouse conditions.” d) “They also made mistakes in their complementation tests of the Ax21 insertion mutant with the wild-type Ax21 gene.” (italics added). e) These errors were not caught prior to publication because experiments in the Ronald lab lacked controls. Apparently: “When laboratory members first established the pretreatment assay years ago, they included diverse controls to optimize the assays. However, in subsequent experiments, some of the controls were dropped to reduce the size of the experiments.”
A Monsanto/Cargill joint venture has quietly withdrawn its application for high-lysine transgenic corn after EU regulators on the European Food Safety Authority (EFSA) GMO panel raised questions about its safety for human consumption.
Made by Renessen LLC, LY038 would have been the only high-lysine corn available and had already been approved for food use in Japan, South Korea, Canada, Australia and New Zealand, and for cultivation in the US, although it has never been grown. LY038 is not intended for human consumption, but the likelihood of genetic cross-contamination means that EU food approval was necessary for commercial growing of the crop anywhere.
Withdrawal therefore means that transgenic high-lysine corn has been abandoned as a commercial proposition, at least for the foreseeable future. Withdrawal was not announced by any of the companies involved but is indicated on the GMO Compass website and confirmation was obtained by the campaigning group GM-free Cymru. In a letter obtained by GM-free Cymru, Renessen claims that withdrawal was “for commercial reasons”. These were not specified and none of the commercial swine experts we contacted could tell us what those reasons might be.
LY038 corn contains the enzyme DHDPS (dihydrodipicolinate synthase) from Corynebacterium glutamicum, which leads to the accumulation of approximately 50-fold higher levels of free lysine in the maize kernel. It is intended as an alternative to lysine supplementation, in particular for pigs fed a corn/soymeal-based diet. The market for lysine was estimated at 450,000 metric tons in 2000.
The specific safety questions raised by the regulators principally concerned the safety of LY038 when cooked. LY038 contains very high levels of free lysine, and lysine is known to react on heating with sugars to form chemical compounds called advanced glycoxidation end-products (AGEs), which are linked to numerous diseases, including diabetes, Alzheimer’s disease and cancer. Member states, whose comments must be considered by the EFSA GMO panel, decided that further experiments were required before approval could be given. As well as questions over these lysine conjugates, questions were also asked about unexplained chlorosis in experimental trials and the unexplained poor performance of chickens fed LY038.
A second category of questions concerned whether appropriate controls were used by the applicant. Some consider that this goes to the heart of the scientific nature of the approval process. The Codex Alimentarius guidelines indicate that an otherwise genetically identical cultivar, minus the transgene, is the appropriate control for a GMO safety experiment. According to Jack Heinemann, director of the Centre for Integrated Research in Biosafety (INBI) and one of the authors of a critique of LY038: “EFSA enforced the Codex comparator. I have not seen an application since 2002 that met the Codex comparator standard.” No matter what the experiment, “if you don’t have a proper control you can’t draw valid scientific conclusions,” concurs Doug Gurian-Sherman, senior scientist at the Union of Concerned Scientists.
Withdrawal of LY038 corn will disappoint the industry, not only because it is the first GMO to be withdrawn after safety questions were raised but also because withdrawal comes just as the agricultural biotechnology industry is attempting to demonstrate that it can deliver traits other than herbicide resistance and insect resistance. In particular, the industry would like to diversify its portfolio towards traits with value to end-users and away from traits with value only to industrial agriculture. The concern, however, is that these more complex traits may not only prove harder to come by but may also, as happened here, generate novel and complex safety concerns.
The fight over rbGH (recombinant bovine growth hormone) continues, even under new ownership.
After acquiring rbGH from Monsanto, Elanco (part of Eli Lilly) has stepped up efforts to convince milk processors and the wider food industry that milk from rbGH-injected cows is safe. Central to their new campaign is a paper, commissioned through PR company Porter-Novelli, from eight prominent experts and academics in medicine and dairy science (Recombinant bovine somatotropin (rbST): a safety assessment).
The authors are Richard Raymond, former undersecretary for Food Safety at USDA, Connie Bales of Duke University Medical Center, Dale Bauman of Cornell University, David Clemmons of the University of North Carolina, Ronald Kleinman of Harvard Medical School, Dante Lanna of the University of São Paulo, Stephen Nickerson of the University of Georgia, and Kristen Sejrsen of Aarhus University, Denmark. The new paper was not peer-reviewed, but it was presented at the July 2009 joint annual meeting of the American Dairy Science Association, the Canadian Society of Animal Science and the American Society of Animal Science in Montreal, Canada. It argues strongly for the benefits and safety of rbGH milk and has been widely distributed by Elanco. According to a rebuttal circulated by a number of consumer advocacy organisations, however, the paper misrepresents the position of various medical bodies (1).
The paper claims, for instance, that the safety of rbGH is endorsed by the American Medical Association (AMA). Through its Campaign for Safe Food, Oregon Physicians for Social Responsibility (Oregon PSR) has pointed out that the AMA has no policy on rbGH and offers no such endorsement. Instead, they note, the April 2008 AMA newsletter cites past president Ron Davis saying “Hospitals should … use milk produced without recombinant bovine growth hormone”.
The new paper also claims the same endorsement from the American Cancer Society (ACS). This claim, Oregon PSR points out, is contradicted on the ACS’s own website, and this was confirmed by the ACS in an email to the Bioscience Resource Project: “The American Cancer Society (ACS) has no formal position regarding rBGH,” stated the email. Another endorsement claimed by the paper is from the American Academy of Pediatrics (AAP), a claim also disputed by the coalition. “I can confirm that AAP does not endorse the safety of rbGH,” wrote an AAP spokesperson to the Bioscience Resource Project, also in an email.
The Bioscience Resource Project contacted several of the authors for clarification. One, Professor of Lactation Physiology Stephen Nickerson, was unaware of any errors. Second author and dietitian Connie Bales declined to answer questions via email or on the telephone. David Clemmons, however, accepted that the AMA, AAP and ACS endorsements were “technically untrue”. “We counted endorsement as failure to oppose rbGH”, he said. Lead author Richard Raymond, by contrast, said in a written statement to the Bioscience Resource Project that the authors stood by all the endorsements excepting that of the AAP. In the same statement he also clarified the paper’s assertion that 17 other “leading health organisations in the United States” also endorse “its safety for human consumption”. Asked to identify the organisations, his list included the American Council on Science and Health, the International Food Information Council and the “White House”.
According to Rick North of Oregon PSR “Elanco’s numerous false statements and misrepresentations on endorsing organizations are only the tip of the iceberg. The entire report is riddled with similar inaccurate, misleading claims about rBGH itself.”
Dr Raymond declined to say whether the authors planned to issue a public clarification. Author Kristen Sejrsen, on the other hand, remained unconcerned. “It’s only a scientific paper”, he said.
(1) The groups are: The Cancer Prevention Coalition, Consumers Union, Oregon PSR and the Institute for Agriculture and Trade Policy
by Jonathan Latham and Allison Wilson
Is it unrealistic to expect the scientific approval process for the world’s first commercial genetically engineered (GE) animal, the AquAdvantage salmon, to be rigorous and complete? Or for the applicant to present experiments that fully meet regulatory expectations? If you expect these things, it seems, you expect too much. Despite the biotech industry’s “dedication to rigorous science-based risk assessment”, the science of the AquAdvantage salmon is full of holes. Its maker, AquaBounty Technologies, has failed to provide key data on which the safety assessment process depends.
The US Food and Drug Administration (FDA) is currently considering whether to approve this salmon for sale to US consumers. If it becomes the world’s first commercial GE animal, the approval of the AquAdvantage salmon, which contains a modified growth hormone gene, will be a technological and cultural milestone. In perhaps as few as 18 months, if AquaBounty has its way, unlabeled GE salmon will be landing on the plates of consumers. So it is a fish that needs to be safe, for the public, as well as for the environment.
Congress has determined that GE animals will require FDA approval and that approval should be based solely on science. Science-based regulation is a narrow ground on which to base societal acceptability but its advantage is that, in principle, it allows the approval process to be orderly, data-based, and transparent, with requirements set out in advance (FDA’s industry guidance). There is, therefore, no good reason for an applicant to come to the table with shoddy science or missing data. However, that is what AquaBounty has done. This is a problem, in particular for the FDA, if it wishes to ensure that the approval process for the world’s first GE animal does not set an embarrassing precedent.
Key Publication Errors
The only peer-reviewed, publicly available data for assessing the science behind the AquAdvantage salmon is a single paper: Characterization and multigenerational stability of the growth hormone transgene (EO-1alpha) responsible for enhanced growth rates in Atlantic salmon (Yaskowiak et al. 2006). This article, researched and written by AquaBounty scientists, appeared in the scientific journal Transgenic Research in 2006. As it is AquaBounty’s sole publication on the AquAdvantage salmon, one might imagine, given its importance, that AquaBounty would have taken particular care to ensure its credibility and accuracy. It is surprising, therefore, to discover that the paper contains basic errors that prevent the reader from checking the authors’ conclusions.
These mistakes can be summarised as follows: the legend for figure 1 (a Southern blot) wrongly identifies two lanes, and the transgene construct itself is mislabeled. In figure 5, the data showing the DNA sequence of the inserted transgene are entirely mangled. Two separate errors omit stretches of sequence adding up to thousands of base pairs, and a third error results in a long stretch of sequence being copied multiple times. In addition, the figure legend includes a typo. These errors are described in more detail in footnote (1).
These mistakes mean that the data presented in the paper contradict its written conclusions regarding the nature of the integrated transgene (Yaskowiak et al. 2006). The errors in figure 5 were later corrected in an erratum (Yaskowiak et al. 2007), but readers are still left to decipher figure 1 for themselves.
Has AquaBounty Identified the Right Transgene?
The primary purpose of a scientific paper (assuming the data have been presented accurately) is to allow the reader to verify that the data support the conclusions that are drawn. Yaskowiak et al. claim to have reached two fundamental conclusions: 1) that AquaBounty has created a GE salmon containing a single growth hormone transgene, and 2) that this transgene is inherited stably through four generations. Of these two conclusions, the first, that there is a single insertion of the growth hormone gene, is never definitively established in the paper (nor anywhere else) (2), and the second depends on the first.
In the paper, Yaskowiak et al. provide reasonable evidence that at least the transgene promoter is present as a single copy. They further claim to have evidence (data not shown) that the downstream regulatory sequence is present only as a single copy. However, the authors never use as a molecular probe the all-important growth hormone sequence itself. Consequently, their conclusions that extra copies or fragments of the growth hormone transgene are not present, and further, that the transgene they do analyse (which they call EO-1alpha) is responsible for the fish’s growth phenotype, are both dependent on extrapolation from the detection of regulatory sequences rather than detection of the gene itself. AquaBounty’s experiments, therefore, leave open the possibility that there are additional undetected copies of the growth hormone gene linked to the insertion site (3).
AquaBounty Fails to Characterise the Transgene Insertion Site
AquaBounty also claims to have characterised the site of insertion of EO-1alpha. The basis for this claim is identification of repeated DNA sequences (that are similar to each other) flanking the EO-1alpha transgene. There are many weaknesses in this claim. For a start, the authors cannot say how much DNA has been lost during transgene insertion or whether the DNA sequences they identify as flanking the transgene were originally found at that genomic location, or even whether they originate from the salmon genome at all. A definitive description of the insertion site would show this, but this description can only be obtained by sequencing the wild-type (non-transgenic) copy of the genetic locus for comparison. AquaBounty does not have this information and so all of AquaBounty’s assertions regarding the insertion site are necessarily guesswork.
The possibility that large pieces of salmon genomic DNA may have been lost from the insertion site, or rearranged, is acknowledged by the FDA in its report to the Veterinary Medicine Advisory Committee (VMAC). In this report, however, the FDA assumes (i.e. guesses) that any sequences lost were “nonessential”. Considering that the entire purpose of transgene insertion-site analysis is to establish definitively, by the gathering of data, just this kind of fact, this is quite an assumption.
A consequence of incomplete scientific assessment is assumption-based reasoning
This analysis of the science of the AquAdvantage salmon raises a host of questions. For example, what happened to the peer review process at the journal Transgenic Research (4)? Why does AquaBounty stop short of establishing that there is only one growth hormone gene, and again fail to establish conclusively that there is limited genetic damage from the insertion? Is AquaBounty simply cutting corners, or do they have something to hide?
The most pertinent issues, however, are arguably for the FDA, since it is the federal agency charged with protecting the public. First, inadequate molecular characterisation means that there is no definitive description of the transgenic event contained in the AquAdvantage salmon. The FDA, ultimately, does not actually know what it is being asked to approve.
Secondly, without an accurate molecular characterization of the insertion site, the effectiveness of the approval process is compromised. For example, the phenotypic analysis of the AquAdvantage salmon is weak (FDA’s report to the VMAC). VMAC justifies this weakness in part by proposing (without presenting any supporting data) that a simple insertion site implies a low probability of unanticipated consequences (FDA’s report to the VMAC). Since the simplicity of the insertion site was never actually established, this is a hypothesis that rests entirely on assumptions and not data.
Thirdly, although the FDA has not reached a final decision, it is believed to consider that labeling of the AquAdvantage salmon is unnecessary because it is not “materially” different to a wild-type salmon. As a Biotechnology Industry Organization representative put it in the Washington Post “Extra labelling confuses the consumer because it differentiates products that are not different”. To be credible, this logic presupposes that someone qualified has actually looked for differences and not found them. Characterisation of the insertion site is the first and most basic step in this process. AquaBounty and the FDA have bypassed this scientific hurdle and settled for assumption-based reasoning.
Perhaps it was the same entrepreneurial spirit that motivated Congress to determine that ethics, morals and wider socioeconomic questions should be left out of the GE approval process, that also motivated the FDA to decide that they could leave out the science as well?
A Meaningless Standard?
This analysis has demonstrated basic weaknesses in the scientific support for any approval of AquaBounty’s AquAdvantage salmon. One could make yet more assumptions and argue that these lapses are unlikely to have serious consequences in the real world. For example, even if another growth hormone transgene is present, it is not probable that it would affect food safety or the environmental consequences of an AquAdvantage salmon escape. However, it is our opinion that, with so little data available about this salmon, any such conclusion is grossly premature. Moreover, as the Consumers Union comments to the FDA show, there are in fact good grounds to be concerned about the safety of this fish.
One conclusion that can be reached, however, is an important procedural one: the AquaBounty application clearly does not meet the scientific stipulations of the FDA’s guidance document. The guidance document requests “the number and characterisation of the insertion sites…[defined as] the genomic location in the GE animal”, goes on to say “We consider this component critical”, and later adds “You should fully characterize the final stabilized rDNA construct” (FDA’s guidance for industry). Given this wording, it is surprising that the FDA has interpreted AquaBounty’s data as being more than sufficient.
The FDA’s response to the AquAdvantage salmon, the very first application for a GE animal, sets a precedent. The agency must now decide whether it wishes to stand by its original science-based guidelines or approve the AquAdvantage salmon. Its response will be watched closely, because this salmon is attracting a lot of attention. This is not just because the AquAdvantage salmon is a GE animal, and not just because most other commercial GE organisms are animal fodder or ingredients for processed food. The probable explanation of why this salmon is a prominent topic of conversation is that salmon is the meat of choice of a significant and well-connected social grouping: well-educated consumers who consider themselves health-conscious.
In our complex world, where the political messages coming from national capitals are either mixed or manipulated, voters search for bellwethers, actions that give simple and clear clues to their leaders’ inclinations and intentions. Approval without labeling of the AquaBounty salmon would send a very clear message and might just turn out to be an unexpectedly big political mistake for the Obama administration.
Footnotes:
(1) The first data figure (Fig 1b) shows a Southern blot analysis designed to determine the number of copies of the transgene integrated into the salmon genome. There are seven lanes on the blot, including the marker lane. The second and third lanes are labeled incorrectly: lane 2 is mislabeled as lane 3, and lane 3 (the second data point) is mislabeled as lane 2. This error can be spotted by logic alone: DNA cut by two enzymes cannot possibly be longer than DNA cut by one, when one of the enzymes is the same. The labeling mistake should therefore have been easy to spot. Further errors are found in figure 5, which depicts the sequence data confirming the analysis of the transgene insertion site. We identified three separate mistakes in this figure: (a) except on the first page of figure 5 (page 471), the last two base pairs of every line of sequence are missing; over six pages this adds up to 476 base pairs, i.e. two out of every fifty base pairs; (b) the sequence extending from base pairs 2618 to 3267 (using the numbering of the transgene itself) is quadruplicated, so that the same 650 base pairs of data appear four times within figure 5, adding 1,950 spurious base pairs; and (c) towards the end of figure 5, 7,653 base pairs, starting just after the growth hormone coding sequence, are missing entirely. Both figures also contain “typographic errors”; the transgene construct, for example, is mislabeled once in figure 1. The errors in figure 5 (but not those in figure 1) are the subject of a nine-page erratum published subsequently (Yaskowiak et al. 2007).
(2) The possibility of extra copies is not idle speculation. Other authors have identified complex multiple transgene insertion events in salmon (Uh et al. 2006).
(3) There is unlikely to be an unlinked transgene since the AquAdvantage salmon has been backcrossed six times.
(4) Transgenic Research is a journal that frequently publishes self-evaluations and risk assessments of corporations’ own products.
References
Uh, M. Khattra, J. and Devlin, R.H. (2006) Transgene constructs in coho salmon (Oncorhynchus kisutch) are repeated in a head-to-tail fashion and can be integrated adjacent to horizontally-transmitted parasite DNA. Transgenic Research 15: 711-727.
Yaskowiak E.S., Shears, M.A., Agarwal-Mawal A., and Fletcher, G.L. (2006) Characterization and multigenerational stability of the growth hormone transgene (EO-1alpha) responsible for enhanced growth rates in Atlantic salmon. Transgenic Research 15: 465-480.
Yaskowiak E.S., Shears, M.A., Agarwal-Mawal A., and Fletcher, G.L. (2007) Erratum to Transgenic Res. DOI: 10.1007/s11248-006-0020-5.
Just before his appointment as head of the US National Institutes of Health (NIH), Francis Collins, the most prominent medical geneticist of our time, had his own genome scanned for disease susceptibility genes. He had decided, so he said, that the technology of personalised genomics was finally mature enough to yield meaningful results. Indeed, the outcome of his scan inspired The Language of Life, his recent book which urges every individual to do the same and secure their place on the personalised genomics bandwagon.
So, what knowledge did Collins’s scan produce? His results can be summarised very briefly. For North American males the average probability of developing type 2 diabetes is 23%; Collins’s own risk was estimated at 29%, and he highlighted this as the outstanding finding. For all other common diseases, however, including stroke, cancer, heart disease, and dementia, Collins’s likelihood of contracting them was average.
Predicting disease probability to within a percentage point might seem like a major scientific achievement. From the perspective of a professional geneticist, however, there is an obvious problem with these results. The hoped-for outcome is to detect genes that cause personal risk to deviate from the average. Otherwise, a genetic scan or even a whole genome sequence is showing nothing that wasn’t already known. The real story, therefore, of Collins’s personal genome scan is not its success, but rather its failure to reveal meaningful information about his long-term medical prospects. Moreover, Collins’s genome is unlikely to be an aberration. Contrary to expectations, the latest genetic research indicates that almost everyone’s genome will be similarly unrevealing.
As a geneticist, as well as head of the NIH, Francis Collins must be more aware of this than anyone. If so, he wrote The Language of Life not out of raw enthusiasm but because the genetics revolution (and not just personalised genomics) is in big trouble. He knows it is going to need all the boosters it can get.
What has changed scientifically in the last three years is the accumulating inability of a new whole-genome scanning technique (genome-wide association studies; GWAs) to find important genes for disease in human populations (1). In study after study, applying GWAs to every common (non-infectious) physical disease and mental disorder, the results have been remarkably consistent: only genes with very minor effects have been uncovered (summarised in Manolio et al 2009; Dermitzakis and Clark 2009). In other words, the genetic variation confidently expected by medical geneticists to explain common diseases cannot be found.
There are, nevertheless, certain exceptions to this blanket statement. One group is the single-gene, mostly rare, genetic disorders whose discovery predated GWA studies (2). These include cystic fibrosis, sickle cell anaemia and Huntington’s disease. A second class of exceptions is a handful of genetic contributors to common diseases whose discovery also predated GWAs. They are few enough to list individually: a fairly common single-gene variant for Alzheimer’s disease, and the two breast cancer genes BRCA 1 and 2 (Miki et al. 1994; Reiman et al. 1996). Lastly, GWA studies themselves have identified five genes, each with a significant role in the common degenerative eye disease called age-related macular degeneration (AMD). With these exceptions duly noted, however, we can reiterate that, according to the best available data, genetic predispositions (i.e. causes) have a negligible role in heart disease, cancer (3), stroke, autoimmune diseases, obesity, autism, Parkinson’s disease, depression, schizophrenia and many other common mental and physical illnesses that are the major killers in Western countries (4).
For anyone who has read about ‘genes for’ nearly every disease and the deluge of medical advances predicted to follow these discoveries, the negative results of the GWA studies will likely come as a surprise. They may even appear to contradict everything we know about the role of genes in disease. This disbelief is in fact the prevailing view of medical geneticists. They do not dispute the GWA results themselves but are now assuming that genes predisposing to common diseases must somehow have been missed by the GWA methodology. There is a big problem, however, in that geneticists have been unable to agree on where this ‘dark matter of DNA’ might be hiding.
If, instead of invoking missing genes, we take the GWA studies at face value, then, apart from the exceptions noted above, genetic predispositions as significant factors in the prevalence of common diseases are refuted. If true, this would be a discovery of truly enormous significance. Medical progress will have to do without genetics providing “a complete transformation in therapeutic medicine” (Francis Collins, White House Press Release, June 26, 2000). Equally, as Francis Collins found, genetic testing will never predict an individual’s personal risk of common diseases. And of course, if the enormous death toll from common Western diseases cannot be attributed to genetic predispositions, it must predominantly originate in our wider environment: in diet, lifestyle and chemical exposures, to name a few of the possibilities.
The question, therefore, of whether medical geneticists are acting reasonably in proposing some hitherto unexpected genetic hiding place, or are simply grasping at straws, is a hugely significant one. And there is more than one problem with the medical geneticists’ position. Firstly, as lack of agreement implies, they have been unable to hypothesise a genetic hiding place that is both plausible and large enough to conceal the necessary human genetic variation for disease. Furthermore, for most common diseases there exists plentiful evidence that environment, and not genes, can satisfactorily explain their existence. Finally, the oddity of denying the significance of results they have spent many billions of dollars generating can be explained by realising that a shortage of genes for disease means an impending oversupply of medical geneticists.
You will not, however, gather this from the popular or scientific media, or even the science journals themselves. No-one so far has been prepared to point out the weaknesses in the medical geneticists’ position. The closest anyone has come is science journalist Nicholas Wade of the New York Times, who has suggested that genetic researchers have “gone back to square one.” Even this is a massive understatement. Human genetic research is not merely at an impasse; it would seem to have excluded inherited DNA, its central subject, as a major explanation of most diseases.
The failure to find major ‘disease genes’
Advances in medical genetics have historically centered on the search for genetic variants conferring susceptibility to rare diseases. Such genes are most easily detected when their effects are very strong (in genetics, highly penetrant), or when a gene variant is present in unusually inbred human populations such as Icelanders or Ashkenazi Jews. This strategy, based on traditional genetics, has uncovered genes for cystic fibrosis, Huntington’s disease, the breast cancer susceptibility genes BRCA 1 and 2, and many others. Important though these discoveries have been, these defective genetic variants are relatively rare, meaning they do not account for disease in most people (2). To find the genes expected to perform analogous roles in more common diseases, different genetic tools were needed, ones that were more statistical in nature.
The technique of genome wide association (GWA) was not merely the latest hot thing in genetics. It was in many ways the logical extension of the human genome sequencing project. The original project sequenced just one genome but, genetically speaking, we are all different. These differences are, for many geneticists, the real interest of human DNA. Many thousands of minor genetic differences between individuals have now been catalogued and medical geneticists wanted to use this seemingly random variation to tag disease genes. Using these minor DNA differences to screen large human populations, GWA studies were going to identify the precise location of the gene variants associated with susceptibility to common disorders and diseases.
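To make the method concrete, here is a minimal sketch of the statistical logic of a GWA test at a single genetic marker. The counts are invented for illustration and come from no actual study:

```python
# A minimal sketch of a genome-wide association test at ONE marker.
# All counts are invented; real studies repeat this comparison at
# hundreds of thousands of markers across the genome.
from scipy.stats import chi2_contingency

# Allele counts at the marker:       [risk allele, other allele]
cases    = [520, 480]   # 1,000 chromosomes from people with the disease
controls = [470, 530]   # 1,000 chromosomes from unaffected people

# Is the allele significantly more common in cases than in controls?
chi2, p, dof, expected = chi2_contingency([cases, controls])

# Effect size: the odds ratio for carrying the risk allele
odds_ratio = (cases[0] * controls[1]) / (cases[1] * controls[0])

print(f"odds ratio = {odds_ratio:.2f}, p = {p:.4f}")
# Because so many markers are tested simultaneously, only associations
# reaching roughly p < 5e-8 count as 'genome-wide significant'.
```

An odds ratio of about 1.2, as in this invented example, is typical of the effects the GWA studies actually report: detectable in large samples, but far too weak to predict disease in any individual.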
To date, more than 700 separate GWA studies have been completed, covering about 80 different diseases. Every common disease, including dozens of cancers, heart disease, stroke, diabetes, mental illnesses, autism, and others, has had one or more GWA studies associated with it (Hindorff et al. 2009). At a combined cost of billions of dollars, these studies were expected at last to reveal the genes behind human illness. And, once identified, these gene variants would become the launchpad for the personalised genomic revolution.
But it didn’t work out that way. Only for one disease, AMD, have geneticists found any of the major-effect genes they expected and, of the remaining diseases, only for type 2 diabetes does the genetic contribution of the genes with minor effects come anywhere close to being of any public health significance (Dermitzakis and Clark 2009; Manolio et al. 2009). In the case of AMD, the five genes determine approximately half the predicted genetic risk (Maller et al. 2006). Apart from these, GWA studies have found little genetic variation for disease. The few conclusive examples in which genes have a significant predisposing influence on a common disease remain the gene variant associated with Alzheimer’s disease and the breast cancer genes BRCA1 and 2, all of which were discovered well before the GWA era (Miki et al. 1994 and Reiman et al. 1996).
Though they have not found what their designers hoped they would, the results of the GWA studies of common diseases do support two distinct conclusions, both with far-reaching implications. First, apart from the exceptions noted, the genetic contribution to major diseases is small, accounting at most for around 5 or 10% of all disease cases (Manolio et al. 2009). Second, and equally importantly, this genetic contribution is distributed among large numbers of genes, each with only a minute effect (Hindorff et al. 2009). For example, the human population contains at least 40 distinct genes associated with type 1 diabetes (Barrett et al. 2009). Prostate cancer is associated with 27 genes (Ioannidis et al. 2010), and Crohn’s disease with 32 (Barrett et al. 2008).
The implications for understanding how each person’s health is affected by their genetic inheritance are remarkable. For each disease, even if a person were born with every known ‘bad’ (or ‘good’) genetic variant, which is statistically highly unlikely, their probability of contracting the disease would still be only minimally altered from the average.
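Just how unlikely the worst-case genotype is can be shown with a back-of-envelope calculation. The allele frequencies below are hypothetical, chosen only to illustrate the scale:

```python
# How likely is anyone to carry a risk allele at EVERY known locus?
# Hypothetical numbers, for illustration only.
n_loci = 30    # disease-associated loci (cf. the 27-40 figures cited above)
freq = 0.3     # assumed population frequency of each risk allele

# Under Hardy-Weinberg assumptions, the chance that a person carries at
# least one copy of the risk allele at a given locus is 1 - (1 - freq)^2:
p_per_locus = 1 - (1 - freq) ** 2     # = 0.51

# The chance of carrying risk alleles at all 30 loci simultaneously:
p_worst_case = p_per_locus ** n_loci  # ~1.7e-9, roughly 2 people per billion

print(p_worst_case)
```

And even this one-in-a-billion individual, because each variant’s effect is so small, would learn little of predictive value from a genome scan.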
DNA is not the language of life or death
This dearth of disease-causing genes is without question a scientific discovery of tremendous significance. It is comparable in stature to the discovery of vaccination, of antibiotics, or of the nature of infectious diseases, because it tells us that most disease, most of the time, is essentially environmental in origin.
But such significance leaves a puzzle. Huge quantities of newspaper space have been devoted to genes, or even to hints of genes, for various diseases (5). By rights, then, reports of the GWA results should have filled the front pages of every world newspaper for a week. So why has this coverage not occurred?
It is possible to conceive of excuses for the lack of coverage: refutation is inherently less interesting, and the GWA results have been reported piecemeal. The more likely reason, however, is the disturbing implications for the medical geneticists who are its discoverers. The GWA studies were not envisaged as a test of the hypothesis: do genes cause common diseases? Rather, they were expected simply to identify, straightaway, the guilty genes that everyone “knew” were there. By apparently refuting the entire concept of genes for common diseases, the GWA studies raise fundamental questions about money spent, hopes raised, and judgments made by medical researchers.
In the first place, the GWA results raise what are probably insurmountable questions for the prospective ‘genetic revolution’ in healthcare. What use will personalised DNA testing (or sequencing) be if genes cannot predict disease for the vast majority of people? Are genes with only extremely minor effects going to be of value as drug targets? How hard is it going to be to untangle their roles in disease when they have hardly any measurable effect? Should we still suppose that pouring more resources into human genetic research is going to rescue industry’s faltering drug development pipelines? All of a sudden, the future of medicine, especially in the specialities dealing with degenerative diseases and mental illness, looks very different and a lot less promising. We no longer have a ‘complete transformation’ to look forward to, only a continuation of the incremental improvements and setbacks that have characterised medicine for the last fifty years.
Shoring up the good ship medical genetics
In a rare public sign of the struggle to come to terms with this genetically impoverished world-view, the authors of a brief review in Science magazine, Andrew Clark of Cornell University and Emmanouil Dermitzakis of the University of Geneva Medical School, Switzerland, have been alone in stating the case even partly straightforwardly. According to them, the GWA studies tell us that “the magnitude of genetic effects is uniformly very small” and therefore “common variants provide little help in predicting risk” (Dermitzakis and Clark 2009). Consequently, the likelihood that personalised genomics will ever predict the occurrence of common diseases is “bleak”. This aim, they believe, will have to be abandoned altogether.
The first conclusion to be drawn from these quotes is that, if the GWA findings are not finding their way to the front page, the reason is not ambiguity in the results themselves. From a scientific perspective the GWA results, though negative, are robust and clear.
Most human geneticists view the GWA results somewhat differently, however. An invited workshop, convened by Collins and others, discussed the then-accumulating results in February 2009. The most visible outcome of this workshop was a lengthy review published in Nature and titled: “Finding the Missing Heritability of Complex Diseases.” (Manolio et al. 2009).
For a review paper that does not lay out any new concepts or directions, 27 senior scientists as coauthors might be considered overkill. “Finding the Missing Heritability”, however, should be understood not so much as a scientific contribution as an effort to conceal the gaping hole in the science of medical genetics.
In their Science article, which was published almost simultaneously, Dermitzakis and Clark paused only briefly to consider whether so many genes could have been overlooked. Apparently, they thought it an unlikely possibility. Manolio et al., however, frame this as the central issue. According to them, since heritability measurements suggest that genes for disease must exist, the genes must be hiding under some as-yet-unturned genetic rock. They list several possible hiding places: there may be very many genes with exceedingly small effects; genes for disease may be highly represented by rare variants with large effects; disease genes may have complex genetic architectures; or they may exist as gene copy number variants (CNVs). Since Manolio et al. presented their list, the scientific literature has seen further suggestions for where disease genes might be hiding, including in mitochondrial DNA, in epigenetics, and in statistical anomalies (e.g. Eichler et al. 2010; Petronis 2010).
A problem for all these hypotheses, however, is that anyone wishing to take them seriously needs to consider one important question: how likely is it that a quantity of genetic variation that could only be called enormous (i.e. more than 90-95% of that for 80 human diseases) is all hiding in what until now had been considered genetically unlikely places? In other words, they all require the science of genetics to be turned on its head. For epigenetics, for example, there is scant evidence that important traits can be inherited through acquired modifications of DNA. Similarly, if rare variants with strong effects keep appearing in the population and causing major illnesses, why is there no evidence of this phenomenon having occurred in the past? With unanswered questions such as these, it is unsurprising that none of the mooted explanations has attracted any kind of consensus among geneticists; indeed, the CNV explanation is already looking highly unlikely (Conrad et al. 2010; The Wellcome Trust Case Control Consortium 2010). As the first of these two papers summarised: “we conclude that, for complex traits, the heritability void left by genome-wide association studies will not be accounted for by CNVs” (Conrad et al. 2010).
Now, it is not impossible that human diseases follow unique genetic rules, but the apparently overlooked possibility is that the GWA studies are indicating a simple truth: that genes are not important causes of major diseases.
As stated so far, the case against the importance of genes for disease seems strong. However, the ‘missing heritability’ argument is based on numerous predictions of a large genetic contribution to human diseases that are derived from heritability measurements. These heritability estimates are obtained from the study of identical and non-identical twins. A crucial question becomes, therefore, are these estimates truly reliable?
How robust is the historical evidence for genetic causation?
A perennial feature of research into human health is the mountain of evidence that environment is overwhelmingly important in disease. People who migrate acquire the spectrum of diseases of their adopted country. Populations who take up Western habits, or move to cities with Western lifestyles, acquire Western diseases, and so on (e.g. Campbell and Campbell 2008). These data are hard to refute, not least because they are so simple, but geneticists, when discussing them, invariably wheel out their own version of incontrovertible evidence: twin studies of the heritability of complex diseases. When Francis Collins talks about ‘missing heritability’ it is to studies such as these that he is referring. They provide the basic evidence for genetic influences on human disease.
A classic example of this contradiction is myopia. A large body of evidence suggests that myopia is an environment-induced disorder caused by some combination of night lighting, close reading, lack of distance viewing and diet (e.g. Quinn et al. 1999). Under the influence of Westernisation, for example, genetically unchanged populations are known to have switched in a single generation from a prevalence of myopia of close to 0% to one of over 80% (Morgan 2003). And myopia is only one of many diseases with very strong evidence for an environmental origin. In 2009, for example, researchers demonstrated that very moderate improvements in lifestyle could reduce an individual’s probability of contracting type 2 diabetes by 89% (Mozaffarian et al. 2009). The subjects of this study just had to smoke less than the average, keep trim, exercise moderately and not eat too much fat.
In stark contrast, twin studies (which compare the extent of similarity exhibited by identical and non-identical twins) estimate that myopia is a disease with a heritability (called h2) of about 0.8 (out of a possible 1.0), indicating that for myopia genetic causes dominate environmental ones. These findings are clearly incompatible with the available epidemiological data on myopia, and no satisfactory resolution has ever been proposed (e.g. Rose et al. 2002; Morgan 2003). This contradiction, between the results of twin studies and the results of epidemiological and clinical research, is repeated for almost every human disease.
A meaningful resolution to these contradictions is, nevertheless, necessary. Since it is unlikely that the many observations identifying environment as a dominant disease-causing factor are all incorrect, the parsimonious solution to the conundrum, even before the GWA studies were reported, was to propose that heritability studies of twins are inherently mistaken or misinterpreted.
Studies of human twins estimate heritability (h2) by comparing disease incidence in monozygotic (genetically identical) twins with that in dizygotic (fraternal) twins, who share on average 50% of their DNA. If monozygotic twin pairs share disorders more frequently than dizygotic twins do, it is presumed that a genetic factor must be involved. A problem arises, however, when the number resulting from this calculation is treated as an estimate of the relative contribution of genes and environment over the whole population (and environment) from which the twins were selected. Because the measurements are made in a series of pairwise comparisons, only the variation within each twin pair is actually being measured. Consequently, the method implicitly defines as environment only the difference within each twin pair. Since each twin pair normally shares location, parenting styles, food, schooling, etc., much of the environmental variability that exists between individuals in the wider population is de facto excluded from the analysis. In other words, heritability (h2), when calculated this way, fails to adequately incorporate environmental variation and inflates the relative importance of genes.
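A minimal numeric sketch makes the inflation mechanism plain. The variance components below are invented for illustration; they come from no twin study:

```python
# Invented variance components in disease liability, for illustration only.
V_G         = 2.0   # variance due to genetic differences
V_E_within  = 0.5   # environmental variance WITHIN twin pairs (small:
                    # twins share home, diet, schooling, location)
V_E_between = 7.5   # environmental variance BETWEEN families in the
                    # wider population (lifestyle, pollution, income)

# Heritability as a fraction of ALL the variance in the population:
h2_population = V_G / (V_G + V_E_within + V_E_between)  # = 0.2

# If pairwise twin comparisons implicitly count only within-pair
# differences as 'environment', the denominator shrinks:
h2_twin_style = V_G / (V_G + V_E_within)                # = 0.8

print(h2_population, h2_twin_style)
```

With these made-up numbers the same trait looks 80% ‘genetic’ to a twin-style calculation, but only 20% genetic once between-family environmental variation is counted: precisely the shape of the discrepancy described above for myopia.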
Heritability studies of humans are classic experiments that have been conducted many times, and they have strong defenders among modern geneticists (e.g. Visscher et al. 2008). Nevertheless, criticisms such as those above are not novel. They are a specific example of the general problem, formulated by Richard Lewontin (of Harvard University), that the contributions of genes to a trait normally depend on the particular environment, and further, that susceptibility to environment depends on genes. In consequence, there can be no universal constant (such as h2) that defines their relationship to one another (Lewontin, Rose and Kamin 1984; Lewontin 1993). Lewontin is not alone among geneticists in his dismissal of heritability as it is used in human genetics. Martin Bobrow of Cambridge University, for example, has called human heritability “a poisonous concept” and “almost uninterpretable” (6).
If one accepts either that h2 is consistently inflated, or that it is essentially meaningless, even “poisonous”, then the only current evidence supporting genetic susceptibility as a major cause of disease disappears. “The Missing Heritability of Complex Diseases”, DNA’s so-called ‘dark matter’, becomes simply an artefact arising from overinterpretation of twin studies.
A mutually convenient untruth
Genetic determinist ideas, especially in the form of explanations for health and disease, are powerful forces in our society (Lewontin 1993). Their pervasive influence requires some explanation, however, because the purely scientific evidence for genetic causation has always been weak, depending as it does on disputed heritability studies. To understand the significance of a repudiation of inherited DNA as an explanation of disease, it is first necessary to understand the role genetic determinism plays in consolidating the social order.
Politicians like genetic determinism as a theory of disease because it substantially reduces their responsibility for people’s ill-health. By shifting blame towards individuals and their genetic ‘predispositions’ it greatly dilutes the pressure they may feel to regulate, ban, or tax harmful products and contaminants, courses of action that typically offend their business constituents. For a politician, therefore, spending tax dollars on medical genetics is an easy and even popular decision.
Corporations like genetic determinism, again because it shifts blame. The Salt Institute website, for example, currently maintains that diseases linked to salt reflect the existence of a small number of highly predisposed individuals. This assertion, sandwiched (on the website) between other questions about salt and health, is clearly intended to undermine efforts to restrict salt in the diet. For the same reason, the tobacco industry has for many years encouraged research into the genetics of nicotine addiction (Gundle et al. 2010). This same reasoning, that disease is the fault of the victim’s genes, also protects corporate defendants from after-the-fact liability. If lung cancer patients, for example, suffer from even the possibility of a genetic predisposition, suing tobacco companies is very much harder than it would be otherwise (Tokuhata and Lilienfeld, 1963). There is evidence, too, that genetic determinism influences decisions well before the full facts are known. At least sometimes, it can even encourage the vendor knowingly to place on the market products with harmful effects (Gundle et al. 2010).
Medical researchers are also partial to genetic determinism. They have noticed that whenever they focus on genetic causation, they can raise research dollars with relative ease. The last fifteen years, coinciding with the rise of medical genetics, have seen unprecedented sums of money directed at medical research. At the same time, research on pollution, nutrition and epidemiology has not benefited in any comparable way. It is hard not to conclude that this funding disparity is strongly influenced by the fit of genetics to the needs of businesses and politicians. In the words of Homer Simpson, “It takes two to lie, Marge. One to lie and one to listen”.
Recognising their value, these groups have tended to elevate genetic explanations for disease to the status of unquestioned scientific facts, thus making their dominance of official discussions of health and disease seem natural and logical. This same mindset is accurately reflected in the media where even strong environmental links to disease often receive little attention, while speculative genetic associations can be front page news. It is astonishing to think that all this has occurred in spite of the reality that genes for common diseases were essentially hypothetical entities.
Mutually convenient or not, by the criteria normally applied in science, the hypothesis that genes are significant causes of common diseases stands refuted. The history of scientific refutation, however, is that adherents of established theories construct ever more elaborate or unlikely explanations to fend off their critics (Ziman 2000). The invocation of genetic ‘dark matter’ and the search for ‘hiding’ genetic variation shows that the process of special pleading is already well underway (e.g. Manolio et al. 2009; Eichler et al. 2010). Implausible though the suggested hiding places seem, it is nevertheless going to be difficult to rule them all out in the near future. Consequently, those geneticists wishing to do so will have the opportunity to obfuscate for some while yet.
Needed: A declaration of dependence
In societies, including our own, much of the social fabric is arranged around our conception of the ‘proper’ place of death and disease. Confidence in the genetic paradigm has led us to explain non-infectious disease as primarily a natural manifestation of genetic predispositions and thus a normal outcome of aging. This normalisation of diseases has obscured the contrary evidence that these same diseases can be all but absent in other cultures and often were rare in historical times. With the GWA results confirming the epidemiological studies, however, we are confronted with the necessity of constructing a new narrative. To be consistent with the facts, this new narrative must incorporate Western diseases not as unavoidable, but as indicators of human fragility in the face of industrialisation and modern life.
That we are so vulnerable to our social and physical surroundings is an uncompromising message. But to the very best of our scientific knowledge it is the truth. Fortunately, it is a truth that offers hope. If we can change our environment for the worse, we can also change it for the better. And if a magic medical cure-all pill is not going to materialise after all, it may be that it wasn’t needed in the first place.
Change for better health can occur in part through individual effort. The new understanding implies that we are not fated to develop any of the common diseases and that the efforts we make to eat well and live a healthy life will be amply rewarded. We should not be surprised if specific lifestyle changes can reverse decades of disease progression (Esselstyn et al. 1995). Or that Seventh Day Adventists, who are non-smoking, non-drinking vegetarians, live on average to 88, eight years beyond the average American’s life expectancy (Fraser and Shavlik 2001). These examples suggest what can be achieved with relatively modest lifestyle changes. By focusing more exclusively on health-related lifestyle modifications than even Seventh Day Adventists do, we could probably extend our life expectancy still further. Exactly how much further is now a much more interesting question than we previously thought.
For most people, life expectancy is only truly of value if it is accompanied by life quality. We should expect, however, that any future reduction in the burden of degenerative disease achieved through lifestyle modification will both extend life expectancy and enhance life quality (7). If so, it might make the most common end-of-life experience very different from the actual prospect facing most Westerners, for whom old age is commonly a process of ever more aggressive medical intervention culminating in a hospital room attached to drips and electrodes.
While individual effort has a place, many positive lifestyle and social changes require the cooperation of the state. Nevertheless, most governments cooperate far more, for example, with their food industries than with those who wish to eat a healthy diet. The laying to rest of genetic determinism for disease, however, provides an opportunity to shift this cynical political calculus. It raises the stakes by confronting policy-makers as never before with the fact that they have every opportunity, through promoting food labeling, taxing junk food, or funding unbiased research, to help their electorates make enormously positive lifestyle choices. And, when their constituents realise that current policies are robbing every one of them of perhaps whole decades of healthy living, these citizens might start to apply the necessary political pressure.
Addendum:
Following publication of ‘The Great DNA Data Deficit’ various readers have contacted us with relevant publications and books of which we were unaware. These important contributions to the issue of whether genes might cause disease extend or otherwise support the discussion considerably. They are listed below in chronological order. Our sincere thanks to readers for sending these in:
Footnotes
(1) ‘Genes for’ disease is shorthand for genetic variants predisposing the carrier to disease.
(2) The definition of a genetically rare disease is usually that it affects fewer than 1 in 1,000 people. Approximately 6,000 rare diseases have been identified in humans.
(3) The famous BRCA1 and BRCA2 alleles are important in some families and populations but are otherwise fairly rare.
(4) According to the World Health Organisation, heart disease causes 17.1% of all deaths worldwide. Cancer causes 15% of all deaths. Stroke causes 10% of all deaths. WHO factsheet.
(5) The explanation of the contradiction between the GWA studies and the newspaper reports is that much of this coverage was hype. Almost without exception these newspaper reports covered discoveries whose significance could be questioned. Typically, they concerned unsubstantiated results, or the genes were for very minor diseases or the medical and genetic implications of the discovery were substantially overplayed.
(6) Why geneticists disagree about heritability has a historical context that usefully illuminates this issue. Once upon a time the term heritability was used differently. When Sewall Wright, one of the founders of genetics, developed the concept of heritability, he titled a key paper “The Relative Importance of Heredity and Environment in Determining the Piebald Pattern of Guinea Pigs” (Wright, 1920). He used this title even though all animals in the study were kept in identical conditions. Clearly, therefore, he wasn’t defining ‘environment’ as we now do. Instead, in that paper he explicitly defined environment as “the irregularities of development due to the intangible sorts of causes to which the word chance is applied”. All of the subsequent questions (like Lewontin’s) surrounding the validity of twin studies have arisen precisely because Wright’s method, which defined heritability in opposition to chance variations in development, was extended to populations of humans living in variable and varying environments.
(7) In popular speech, aging and degeneration are often conflated, leading sometimes to a rejection of health advice as simply life-span extension. However, aging, by definition, is simply the passage of time and research shows that typically, extended life expectancy is correlated with improved health, when age is taken into account (Fraser and Shavlik 2001). In case one is tempted to confuse aging and disease, it may be helpful to think of children. For them, aging is a process of becoming stronger.
References
Barrett J. et al. (2008) Genome-wide association defines more than 30 different susceptibility loci for Crohn’s disease. Nature Genet. 40: 955-962.
Barrett J. et al. (2009) Genome-wide association study and meta-analysis find that over 40 different loci affect risk of type 1 diabetes. Nature Genet. 41: 703-707.
Campbell T.C. and Campbell T.M. (2008) The China Study. Benbella books, Inc. USA
Collins F.S. (2009) The Language of Life. Harper, NY, USA.
Conrad D.F. et al. (2010) Origins and functional impact of copy number variation in the human genome. Nature 464: 704-712.
Dermitzakis E.T. and Clark A.G. (2009) Life after GWA studies. Science 326: 239-240.
Gundle K. et al. (2010) ‘To prove this is industry’s best hope’: big tobacco’s support of research on the genetics of nicotine addiction. Addiction 105: 974-983.
Eichler E. et al. (2010) Missing heritability and strategies for finding the underlying causes of complex disease. Nature Rev. Genetics 11: 446-450.
Esselstyn C.B., Ellis S.G., Medendorp S.V., Crowe T.D. (1995) A strategy to arrest and reverse coronary artery disease: a 5-year longitudinal study of a single physician’s practice. J. Family Practice 41: 560-568.
Fraser G.E. and Shavlik D.J. (2001) Ten Years of Life: Is It a Matter of Choice? Arch. Int. Medicine 161: 1645-1652.
Hindorff L. et al. (2009) Potential etiologic and functional implications of genome-wide association loci for human diseases and traits. Proc. Natl. Acad. Sci 106: 9362-67.
Ioannidis, JPA, Castaldi P. and Evangelou E. (2010) A Compendium of Genome-Wide Associations for Cancer: Critical Synopsis and Reappraisal. J. of the National Cancer Institute 102: 846-858.
Lewontin R.C., Rose S. and Kamin L.J. (1984) Not in Our Genes. Pantheon Books, New York, USA.
Lewontin R.C. (1993) Biology as Ideology. Penguin Books, New York, USA.
Maller J. et al. (2006) Common variation in three genes, including a non-coding variant in CFH, strongly influences risk of age-related macular degeneration. Nature Genet. 38: 1055-59.
Manolio T. et al. (2009) Finding the missing heritability of complex diseases. Nature 461: 747-753.
Miki Y. et al. (1994) A strong candidate for the breast and ovarian cancer susceptibility gene BRCA1. Science 266: 66-71.
Mozaffarian D., Kamineni A., Carnethon M., Djoussé L., Mukamal K.J., and Siscovick D. (2009) Lifestyle Risk Factors and New-Onset Diabetes Mellitus in Older Adults. Arch. Int. Med. 169: 798–807.
Morgan I. (2003) The biological basis of myopic refractive error. Clinical and Experimental Optometry 86: 276-288.
Petronis A. (2010) Epigenetics as a unifying principle in the aetiology of complex traits and diseases. Nature 465: 721-727.
Quinn G.E. et al. (1999) Myopia and ambient lighting at night. Nature 399: 113-14.
Reiman E. et al. (1996) Preclinical Evidence of Alzheimer’s Disease in Persons Homozygous for the ε4 Allele for Apolipoprotein E. New England Journal of Medicine 334: 752-758.
Rose K.A., Morgan I.G., Smith W. and Mitchell P. (2002) High heritability of myopia does not preclude rapid changes in prevalence. Clinical and Experimental Ophthalmology 30: 168-172.
Tokuhata G.K. and Lilienfeld A.M. (1963) Familial aggregation of lung cancer in humans. J. of the National Cancer Institute 30: 289-312.
Visscher P.M. et al. (2008) Heritability in the genomics era-concepts and misconceptions. Nature Rev. Genetics 9:255-266.
The Wellcome Trust Consortium (2010) Genome-wide association study of CNV in 16,000 cases of eight common diseases and 3,000 shared controls. Nature 464: 713-720.
Wright S. (1920) The Relative Importance of Heredity and Environment in Determining the Piebald Pattern of Guinea Pigs. Proc. Natl. Acad. Sci 6:320-332.
Ziman J. (2000) Real Science: What It Is and What It Means. Cambridge University Press. UK
According to conventional wisdom, the Brazilian city of Belo Horizonte (pop. 2.5 million) has achieved something impossible. So, too, has the island of Cuba. They are feeding their hungry populations largely with local, low-input farming methods that enhance the environment rather than degrade it. They have achieved this, moreover, at a time of rising food prices when others have mostly retreated from their own food security goals.
The conventional wisdom contradicted by these examples is that high-yielding agricultural systems necessarily reduce biodiversity.
Sometimes this assumption is extended to become the ‘Borlaug hypothesis’ after Norman Borlaug, the architect of the green revolution. The Borlaug hypothesis states that the preservation of rainforests, an example of biodiversity, depends on intensive industrial production of sufficient food to allow for the luxury of unfarmed areas (e.g. Trewavas, 1999).
So, since Belo Horizonte and Cuba appear to have defied this logic, what is their secret? Are they succeeding in spite of their commitment to sustainability, or because of it? Or is conventional wisdom simply wrong? These pressing questions are explored in a new review, Food security and biodiversity: can we have both?, by Michael Jahi Chappell and Liliana LaValle, published in the journal Agriculture and Human Values.
A pathbreaking new approach
Whether agricultural productivity and biodiversity are mutually exclusive has only recently emerged as a central question in agriculture. It follows increasing awareness both that global biodiversity is in rapid decline and that much of the decline is a result of industrialised agriculture. This is evident from data ranging from increases in the number and size of ocean dead zones to declines in pollinators (Cameron et al 2011).
However, as the number of those who go hungry swells, countries and development advocates see themselves as faced with seemingly impossible choices between food security and environmental degradation. Such pressures, together with the acknowledgment that the productivity of industrialised agriculture can be short-lived, have stimulated academics and others to reexamine their thinking (e.g. Tscharntke et al 2011).
Perhaps the best-known attempt to rigorously evaluate the biodiversity versus food question was the International Assessment of Agricultural Knowledge, Science and Technology for Development (IAASTD). This United Nations-sponsored commission was set up to resolve the competing ways forward being offered for agriculture. Reporting in 2007, the IAASTD commission left its mark mainly by pointing out that it is a mistake to think of agriculture as simply about productivity. Agriculture provides employment and livelihoods, it underpins food quality, food safety and nutrition, and it allows food choices and cultural diversity. It is also necessary for water quality, broader ecosystem health, and even carbon sequestration. Agriculture, concluded the IAASTD, should never be reduced merely to a question of production. It must necessarily be integrated with the many needs of humans and ecosystems.
According to John Vandermeer of the University of Michigan, the IAASTD report “did conclude that food security and biodiversity could be reconciled”. Amidst discussion of many other issues, however, that conclusion was largely lost. What Chappell and LaValle have contributed, he says, is to focus specifically on the question of whether biodiversity and food security can co-exist in the same place. “They have brought together the data that can resolve the contradictions contained in both sides of the biodiversity versus food argument”, he says. Helda Morales, Professor of Agroecology at El Colegio de la Frontera Sur, Mexico, agrees. “This is a careful review of the relevant information available on biodiversity and food security.”
Sustainable agriculture and productivity
Yields are the first issue Chappell and LaValle considered. Surveying the scientific evidence, they find it supports the idea that a ‘hypothetical world alternative agriculture system’ could adequately provide for present or even predicted future populations. This is primarily because present and future populations do not need more food than we currently produce. But it is also because agroecological methods involve only a minor yield loss compared with the best that industrial agriculture has to offer. Indeed small farms, which they believe will have to be the basis of any future sustainable agriculture, typically yield more than larger ones. Both conclusions are accepted by Teja Tscharntke, Professor of Agroecology at Georg-August University in Goettingen, Germany. “Hunger in the developing countries can only be reduced by helping smallholders,” he says, and even in Germany, “organic farming would easily feed the population if nutritional recommendations were followed”.
Sustainable agriculture and biodiversity
On the question of whether agroecological methods also enhance biodiversity, the answers appear even more clear cut. While industrialised agriculture is often considered the biggest single global contributor to extinction, biodiversity of every kind is enhanced on farms that avoid industrial methods compared with farms that do not. A recent meta-analysis cited by the review put this figure at “30% more species and 50% more individuals” on agroecological farms. Chappell and LaValle found that smaller farms using agroecological methods are more biodiverse and less harmful to the environment generally. This finding was consistent over a wide range of localities, crops and production systems. Probably that is because multiple aspects of industrialised agriculture, from large field sizes to the use of nitrogenous fertilisers and pesticides, are each associated with biodiversity losses.
Embedded agriculture
Agriculture is a system that functions within bigger ecological, political and economic systems. Success, therefore, must ultimately be judged at that level. Chappell and LaValle consider that the two examples they studied—Belo Horizonte and Cuba—offer tentative evidence of success at a regional level. Of these two, Cuba’s commitment (and also success) appears to have been the greater. It is claimed, for example, that the “capital city of Havana is now almost entirely supplied by alternative agriculture, in or on the periphery of, the city itself”. They acknowledge, however, that two examples do not prove anything except a principle. As Teja Tscharntke puts it, “such examples may be models for some but not all countries.”
Future directions
Nevertheless, say Chappell and LaValle, this all points to the conclusion that “the best solution to both food security and biodiversity problems would be widespread conversion to alternative practices.” Instead of supporting a competitive relationship, “the evidence emphasizes the interdependence of biodiversity and agriculture.” Helda Morales goes even further: “I would go beyond this statement and say that we cannot have food security if we do not have biodiversity”.
For John Vandermeer, the uniquely holistic approach of Chappell and LaValle is the key to a consensus. “When people dispute these conclusions, it is almost invariably because they are using too narrow a frame of reference.” And it is a consensus that appears to be gaining wider attention. In December of 2010 The United Nations special rapporteur on the right to food published a document asserting that agroecology had demonstrated “proven results” and that “the scaling up of these experiences is the main challenge today.”
The immediate practical obstacle, however, to choosing a food system that supports both food security and the environment is public policy. Citing Per Pinstrup-Andersen, the former Director General of the International Food Policy Research Institute, Chappell and LaValle state: “It is a myth that the eradication of food insecurity is truly treated as a high priority.” The real obstacles to ecological high-yield farming, Vandermeer believes, are research priorities and economics. “Industrial farming only appears to be more viable because it is subsidised.” Even though there are at present some uncertainties, “If we applied the same research efforts to agroecological approaches that we currently do to support industrialised farming, even more could be achieved.”
References
Cameron SA, Lozier JD, Strange JP, Koch JB, Cordes N, Solter LF, Griswold TL (2011) Patterns of widespread decline in North American bumble bees. Proc. Natl. Acad. Sci. USA 108: 662-667.
Chappell MJ and LaValle LA (2011) Food security and biodiversity: can we have both? Agriculture and Human Values 28: 3-26.
Pinstrup-Andersen P (2003) Global Food Security: Facts, Myths and Policy Needs. IFA-FAO Agriculture Conference.
Trewavas A (1999) Much food, many problems. Nature 402: 231-232.
Tscharntke T et al. (2011) Multifunctional shade-tree management in tropical agroforestry landscapes – a review. J. Applied Ecology, in press.
Online gene testing company 23andMe last week published its first genetic research study into Parkinson’s Disease. The study was funded by the participants (many of whom are customers of 23andMe), the company itself, and Google co-founder Sergey Brin, who is married to 23andMe’s CEO and founder, Anne Wojcicki (1). Wojcicki is also personally subsidising the company and is a co-author on the paper.
23andMe’s study shows that two new genes it has discovered, plus all known existing genes linked to the disease, are not much better than random selection for predicting who will get Parkinson’s Disease.
According to the new research, predictive computer models including all known genes can only account for 6%-7% of the variance of the disease (2). This means that 93-94% of the explanation for differences in people’s likelihood of developing Parkinson’s Disease is missing. Further, most of the missing explanation for these differences in risk is not genetic.
“Most diseases in most people are not predictable from people’s genes,” said Dr Helen Wallace of GeneWatch UK. “23andMe should start being more honest with its customers and admit most gene tests that it sells online are meaningless. Both its investors and its customers need to know that genetic predictions will always have fundamental limitations. These findings show that 23andMe’s product can only ever get a little bit less useless as more and more research is done.”
23andMe’s rival company, DeCode Genetics (based in Iceland) went bankrupt in 2009, although it continues to operate as a private company (3). A recent study presented at the European Society of Human Genetics concluded that both companies sell inaccurate predictions of disease risks to their customers (4).
23andMe’s paper includes a new estimate that the heritability of Parkinson’s Disease is 23%. This implies that 77% of the variance will never be explained by genes, and that genes yet to be discovered account for a further 16% or so beyond the 6-7% already identified. Even if this estimate is correct, it is much less than the calculated heritability of type 1 diabetes, a disease for which scientists have already shown genetic tests have poor predictive power. Even if all potential undiscovered genetic factors are added in, this will remain the case (5). The authors of the 23andMe paper therefore predict that, even if all the genes they think might exist were found, there would be an upper bound on the predictive value that is insufficient to make genetic screening for Parkinson’s Disease risk useful in the general population (6).
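As a back-of-envelope check on the arithmetic above, the following minimal sketch (in Python) reproduces the partition of risk variance implied by the quoted figures; the 6.5% known-gene share is simply an assumed midpoint of the 6-7% range quoted earlier, used here for illustration only.

```python
# Sketch of the variance partition implied by the figures quoted above.
# These are the article's numbers, not values from the paper's model;
# 0.065 is an assumed midpoint of the quoted 6-7% known-gene share.
total_heritability = 0.23    # 23andMe's new heritability estimate
known_gene_share = 0.065     # variance explained by genes found so far

non_genetic = 1.0 - total_heritability                        # never explainable by genes
missing_heritability = total_heritability - known_gene_share  # genes yet to be found

print(f"non-genetic share of variance: {non_genetic:.0%}")           # 77%
print(f"undiscovered-gene share:       {missing_heritability:.1%}")  # ~16.5%
```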
(1) The findings, funding and competing interests are reported in the paper: Do et al. (2011) Web-based genome-wide association study identifies two novel loci and a substantial genetic component for Parkinson’s Disease. PLoS Genetics, 7(6), e1002141. On: http://www.plosgenetics.org/article/info:doi/10.1371/journal.pgen.1002141
(2) Sergey Brin himself carries a rare mutation linked with a familial (i.e. largely inherited) form of Parkinson’s Disease which occurs mainly in Jewish families. However, most cases of Parkinson’s Disease are not familial.
(3) http://www.independent.co.uk/life-style/health-and-families/health-news/firm-that-led-the-way-in-dna-testing-goes-bust-1822413.html
(4) Direct-To-Consumer Genetic Tests Neither Accurate in Their Predictions nor Beneficial to Individuals, Study Suggests. 31st May 2011. http://www.sciencedaily.com/releases/2011/05/110530190344.htm
(5) Clayton DG (2009) Prediction and Interaction in Complex Disease Genetics: Experience in Type 1 Diabetes. PLoS Genetics, 5(7): e1000540. On: http://www.plosgenetics.org/article/info:doi/10.1371/journal.pgen.1000540 “Many authors have recently commented on the modest predictive power of the common disease susceptibility loci currently emerging. However, here it is suggested that, for most diseases, this would remain the case even if all relevant loci (including rare variants) were ultimately discovered.”
(6) They report an AUC for their own model of 0.55 to 0.6 and an “upper bound on AUC for a genetic risk prediction model of 0.83 to 0.88” (based on finding future genes to explain their calculated heritability). An AUC of 1 implies perfect predictions, while an AUC of 0.5 is no better than random guessing. It has been suggested elsewhere that an AUC of 0.75 is needed before testing people with symptoms, and an AUC of 0.99 for screening asymptomatic people in the general population, because of the large numbers of false positives and false negatives that occur with lower AUCs (i.e. people told they are at high risk when they are not, or told they are at low risk when they are not).
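For readers unfamiliar with the metric, here is a minimal sketch of the standard rank-based (Mann-Whitney) way of computing AUC; the scores are made-up illustrative values, not data from the 23andMe study.

```python
# Rank-based (Mann-Whitney) estimate of AUC: the probability that a
# randomly chosen case receives a higher risk score than a randomly
# chosen control. Scores below are invented for illustration only.

def auc(case_scores, control_scores):
    wins = ties = 0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1
            elif c == k:
                ties += 1
    return (wins + 0.5 * ties) / (len(case_scores) * len(control_scores))

# A predictor that scores cases and controls identically is worthless:
print(auc([1, 1, 1], [1, 1, 1]))    # 0.5, i.e. random guessing
# A predictor that always ranks cases above controls is perfect:
print(auc([0.9, 0.8], [0.2, 0.1]))  # 1.0
```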
Imagine an international mega-deal. The global organic food industry agrees to support international agribusiness in clearing as much tropical rainforest as they want for farming. In return, agribusiness agrees to farm the now-deforested land using organic methods, and the organic industry encourages its supporters to buy the resulting timber and food under the newly devised “Rainforest Plus” label. There would surely be an international outcry.
Virtually unnoticed, however, even by their own memberships, the world’s biggest wildlife conservation groups have agreed to exactly such a scenario, only in reverse. Led by the World Wide Fund for Nature (WWF), many of the biggest conservation nonprofits, including Conservation International and the Nature Conservancy, have already agreed to a series of global bargains with international agribusiness. In exchange for vague promises of habitat protection, sustainability and social justice, these conservation groups are offering to greenwash industrial commodity agriculture.
The big conservation nonprofits don’t see it that way of course. According to WWF ‘Vice President for Market Transformation’ Jason Clay, the new conservation strategy arose from two fundamental realizations.
The first was that agriculture and food production are the key drivers of almost every environmental concern. On issues as diverse as habitat destruction, over-use of water, climate change and ocean dead zones, agriculture and food production are globally the primary culprits. To take one example, 80-90% of all fresh water abstracted by humans is for agriculture (e.g. FAO’s State of the World’s Land and Water report).
This point was emphasized once again in a recent analysis published in the scientific journal Nature. The lead author of this study was Professor Jonathan Foley (Foley et al 2011). Not only is Foley the director of the University of Minnesota-based Institute on the Environment, but he is also a science board member of the Nature Conservancy.
The second crucial realization for WWF was that forest destroyers typically are not peasants with machetes but national and international agribusinesses with bulldozers. It is the latter who deforest tens of thousands of acres at a time. Land clearance on this scale is an ecological disaster, but Claire Robinson of Earth Open Source points out it is also “incredibly socially destructive”, as peasants are driven off their land and communities are destroyed. According to the UN Permanent Forum on Indigenous Issues, 60 million people worldwide risk losing their land and means of subsistence from palm plantations.
By about 2004, WWF had come to appreciate the true impacts of industrial agriculture. Instead of informing their membership and initiating protests and boycotts, however, they embarked on a partnership strategy they call ‘market transformation’.
Market Transformation
With WWF leading the way, the conservation nonprofits have negotiated approval schemes for “Responsible” and “Sustainable” farmed commodity crops. According to Clay, the plan is to have agribusinesses sign up to reduce the 4-6 most serious negative impacts of each commodity crop by 70-80%. And if enough growers and suppliers sign up, then the Indonesian rainforests or the Brazilian Cerrado will be saved.
The ambition of market transformation is on a grand scale. There are schemes for palm oil (the Roundtable on Sustainable Palm Oil; RSPO), soybeans (the Round Table on Responsible Soy; RTRS), biofuels (the Roundtable on Sustainable Biofuels), sugar (Bonsucro) and also for cotton, shrimp, cocoa and farmed salmon. These are markets each worth many billions of dollars annually and the intention is for these new responsible and sustainable certified products to dominate them.
The reward for producers and supermarkets will be that “Responsible” and “Sustainable” logos and marketing, reinforced on every shopping trip, can be expected to have major effects on public perception of the global food supply chain. And the ultimate goal is that, if these schemes are successful, human rights, critical habitats, and global sustainability will receive a huge and globally significant boost.
The role of WWF and other nonprofits in these schemes is to offer their knowledge to negotiate standards, to provide credibility, and to lubricate entry of certified products into international markets. On its UK website, for example, WWF offers its members the chance to “Save the Cerrado” by emailing supermarkets to buy “Responsible Soy”. What WWF argues will be a major leap forward in environmental and social responsibility has already started. “Sustainable” and “Responsible” products are already entering global supply chains.
Reputational Risk
For conservation nonprofits these plans entail risks, one of which is simple guilt by association. The Round Table on Responsible Soy (RTRS) scheme is typical of these certification schemes. Its membership includes WWF, Conservation International, Fauna and Flora International, the Nature Conservancy, and other prominent nonprofits. Corporate members include repeatedly vilified members of the industrial food chain. As of January 2012, there were 102 members, including Monsanto, Cargill, ADM, Nestle, BP, and the UK supermarket ASDA.
That is not the only risk. Membership in the scheme, which includes signatures on press releases and sometimes on labels, indicates approval of activities that are widely opposed. The RTRS, for example, certifies soybeans grown in large-scale chemical-intensive monocultures. They are usually GMOs. They are mostly fed to animals. And they originate from countries with hungry populations. When 52% of Americans think GMOs are unsafe and 93% think GMOs ought to be labeled, this is a risk most organizations dependent on their reputations would probably not take.
The remedy for such reputational risk is high standards, rigorous certification and watertight traceability procedures. Only credibility at every step can deflect the seemingly obvious suspicion that the conservation nonprofits have been hoodwinked or have somehow ‘sold out’.
So, which one is it? Are “Responsible” and “Sustainable” certifications indicative of a genuine strategic success by WWF and its fellows, or are the schemes nothing more than business as usual with industrial scale greenwashing and a social justice varnish?
Low and Ambiguous Standards
The first place to look is the standards themselves. RTRS standards (version 1, June 2010), to continue with the example of soybeans, cover five ‘principles’. Principle 1 is: Legal Compliance and Good Business Practices. Principle 2 is: Responsible Labour Conditions. Principle 3 is: Responsible Community Relations. Principle 4 is: Environmental Responsibility. Principle 5 is: Good Agricultural Practice.
Language typical of the standards includes, under Principle 2, Responsible Labour Conditions, section 2.1.1: “No forced, compulsory, bonded, trafficked, or otherwise involuntary labor is used at any stage of production”, while section 2.4.4 states: “Workers are not hindered from interacting with external parties outside working hours.”
Under Principle 3: Responsible Community Relations, section 3.3.3 states: “Any complaints and grievances received are dealt with in a timely manner.”
Under Principle 4: Environmental Responsibility, section 4.2 states “Pollution is minimized and production waste is managed responsibly” and section 4.4 states “Expansion of soy cultivation is responsible”.
Under Principle 5: Good Agricultural Practice, Section 5.9 states “Appropriate measures are implemented to prevent the drift of agrochemicals to neighboring areas.”
These samples illustrate the tone of the RTRS principles and guidance.
There are two ways to read these standards. The generous interpretation is to recognize that the sentiments expressed are higher than what is actually practiced in many countries where soybeans are grown, in that the standards broadly follow common practice in Europe or North America. Nevertheless, they are far lower than organic or fairtrade standards; for example, they don’t require crop rotation or prohibit pesticides. Even a generous reading also needs to acknowledge the crucial point that adherence to similar requirements in Europe and North America has contaminated wells, depleted aquifers, degraded rivers, eroded the soil, polluted the oceans, driven species to extinction and depopulated the countryside—to mention only a few well-documented downsides.
There is also a less generous interpretation of the standards. Much of the content is either in the form of statements, or it is merely advice. Thus section 4.2 reads “Pollution is minimized and production waste is managed responsibly.” Imperatives, such as: must, may never, will, etc., are mostly lacking from the document. Worse, key terms such as “pollution”, “minimized”, “responsible” and “timely” (see above) are left undefined. This chronic vagueness means that both certifiers and producers possess effectively infinite latitude to implement or judge the standards. They could never be enforced, in or out of court.
Dubious Verification and Enforcement
Unfortunately, the flaws of RTRS certification do not end there. They include the use of an internal verification system. The RTRS uses professional certifiers, but only those who are members of RTRS. This means that the conservation nonprofits are relying on third parties for compliance information. It also means that only RTRS members can judge whether a principle was adhered to. And even if they consider it was not, there is nothing they can do, since the RTRS has no legal status or sanctions.
The ‘culture’ of deforestation also matters to the standards. Rainforest clearance is often questionably legal, or actively illegal, and usually requires removing existing occupants from the land. It is a world of private armies and bribery. This operating environment makes especially ironic the provision under which RTRS members, under Principle 1, volunteer to obey the law. The concept of volunteering to obey the law raises more than a few questions. If an organization is not already obeying the law, what makes WWF suppose that a voluntary code of conduct will persuade it? And does obeying the law meaningfully contribute to a marketing campaign based on responsibility?
Of equal concern is the absence of a clear certification trail. Under the “Mass Balance” system offered by RTRS, soybeans (or derived products) can be sold as “Responsible” even though they were never grown under the system. Mass Balance means vendors can transfer the certified quantity they purchase to non-RTRS soybeans. Such an opportunity raises the inherent difficulties of traceability and verification to new levels.
How Will Certification Save Wild Habitats?
A key stated goal of WWF is to halt deforestation through the use of maps identifying priority habitat areas that are off-limits to RTRS members. There are crucial questions over these maps, however. Firstly, even though RTRS soybeans are already being traded, the maps have yet to be drawn up. Secondly, the maps are to be drawn up by RTRS members themselves. Thirdly, under the scheme RTRS maps can be periodically redrawn. Fourthly, RTRS members need not certify all of their production acreage. This means they can certify part of their acreage as “Responsible”, but still sell (as “Irresponsible”?) soybeans from formerly virgin habitat. This means WWF’s target for year 2020 of 25% coverage globally and 75% in WWF’s ‘priority areas’ would still allow 25% of the Brazilian soybean harvest to come from newly deforested land. And of course, the scheme cannot prevent non-members, or even non-certified subsidiaries, from specializing in deforestation (1).
These are certification schemes, therefore, with low standards, no methods of enforcement, and enormous loopholes (2). Pete Riley of UK GM Freeze dubs their instigator the “World Wide Fund for naiveté” and believes “the chances of Responsible soy saving the Cerrado are zero.” (3). Claire Robinson agrees: “The RTRS standard will not protect the forests and other sensitive ecosystems. Additionally, it greenwashes soy that’s genetically modified to survive being sprayed with quantities of herbicide that endanger human health and the environment.” There is even a website (www.toxicsoy.org) dedicated to exposing the greenwashing of GMO Soy.
Commodity certification is in many ways a strange departure for conservation nonprofits. In the first place the big conservation nonprofits are more normally active in acquiring and researching wild habitats. Secondly, as membership organizations it is hard to envisage these schemes energizing the membership—how many members of the Nature Conservancy will be pleased to find that their organization has been working with Monsanto to promote GM crops as “Responsible”? Indeed, one can argue that these programs are being actively concealed from their members, donors and the public. From their advertising, their websites, and their educational materials, one would presume that poachers, population growth and ignorance are the chief threats to wildlife in developing countries. It is not true, however, and as Jason Clay and the very existence of these certification schemes make clear, senior management knows it well.
In public, the conservation nonprofits justify market transformation as cooperative; they wish to work with others, not against them. However, they have chosen to work preferentially with powerful and wealthy corporations. Why not cooperate instead with small farmers’ movements, indigenous groups, and already successful standards, such as fairtrade, organic and non-GMO? These are causes that could use the help of big international organizations. Why not, with WWF help, embed into organic standards a rainforest conservation element? Why not cooperate with your membership to create engaged consumer power against habitat destruction, monoculture, and industrial farming? Instead, the new “Responsible” and “Sustainable” standards threaten organic, fairtrade, and local food systems—which are some of the environmental movement’s biggest successes.
One clue to the enthusiasm for ‘market transformation’ may be that financial rewards are available. According to Nina Holland of Corporate Europe Observatory, certification is “now a core business” for WWF. Indeed, WWF and the Dutch nonprofit Solidaridad are currently receiving millions of euros from the Dutch government (under its Sustainable Trade Action Plan) to support these schemes. According to the plan 67 million euros have already been committed, and similar amounts are promised (4).
The Threat From the Food Movement
Commodity certification schemes like RTRS can be seen as an inability of global conservation leadership to work constructively with the ordinary people who live in and around wild areas of the globe; or they can be seen as a disregard for fairtrade and organic labels; or as a lost opportunity to inform and energize members and potential members as to the true causes of habitat destruction; or even as a cynical moneymaking scheme. These are all plausible explanations of the enthusiasm for certification schemes and probably each plays a part. None, however, explains why conservation nonprofits would sign up to schemes whose standards and credibility are so low. Especially when, as never before, agribusiness is under pressure to change its destructive social and environmental practices.
The context of these schemes is that we live at an historic moment. Positive alternatives to industrial agriculture, such as fairtrade, organic agriculture, agroecology and the System of Rice Intensification, have shown they can feed the planet, without destroying it, even with a greater population. Consequently, there is now a substantial international consensus of informed opinion (IAASTD) that industrial agriculture is a principal cause of the current environmental crisis and the chief obstacle to hunger eradication.
This consensus is one of several roots of the international food movement. As a powerful synergism of social justice, environmental, sustainability and food quality concerns, the food movement is a clear threat to the long-term existence of the industrial food system. (Incidentally, this is why big multinationals have been buying up ethical brands.)
Under these circumstances, evading the blame for the environmental devastation of the Amazon, Asia and elsewhere, undermining organic and other genuine certification schemes, and splitting the environmental movement must be a dream come true for members of the industrial food system. A true cynic might surmise that the food industry could hardly have engineered it better had they planned it themselves.
Who Runs Big Conservation?
To guard against such possibilities, nonprofits are required to have boards of directors whose primary legal function is to guard the mission of the organization and to protect its good name. In practice, for conservation nonprofits this means overseeing potential financial conflicts and preventing the organization from lending its name to greenwashing.
So, who are the individuals guarding the mission of global conservation nonprofits? US-WWF boasts (literally) that its new vice-chair was the last CEO of Coca-Cola, Inc. (a member of Bonsucro) and that another board member is Charles O. Holliday Jr., the current chairman of the board of Bank of America, who was formerly CEO of DuPont (owner of Pioneer Hi-Bred International, a major player in the GMO industry). The current chair of the executive board at Conservation International, is Rob Walton, better known as chair of the board of WalMart (which now sells ‘sustainably sourced’ food and owns the supermarket chain ASDA). The boards of WWF and Conservation International do have more than a sprinkling of members with conservation-related careers. But they are heavily outnumbered by business representatives. On the board of Conservation International, for example, are GAP, Intel, Northrop Grumman, JP Morgan, Starbucks and UPS, among others.
At the Nature Conservancy, the board of directors has only two members (out of 22) who list an active affiliation to a conservation organization in their board CV (Prof. Gretchen Daily and Cristian Samper, head of the US Museum of Natural History). Only one other member even mentions among their qualifications an interest in the subject of conservation. The remaining members are, like Shona Brown, an employee of Google and a board member of PepsiCo, or Margaret Whitman, the current President and CEO of Hewlett-Packard, or Muneer A. Satter, a managing director of Goldman Sachs.
So, was market transformation developed with the support of these boards or against their wishes? The latter is hardly likely. The key question then becomes: did these boards in fact instigate market transformation? Did it come from the very top?
Never Ending
Leaving aside whether conservation was ever their true intention, it seems highly unlikely that WWF and its fellow conservation groups will leverage a positive transformation of the food system by bestowing “Sustainable” and “Responsible” standards on agribusiness. Instead, it appears much more likely that, by undermining existing standards and offering worthless standards of their own, habitat destruction and human misery will only increase.
Market transformation, as envisaged by WWF, nevertheless might have worked. However, WWF neglected to consider that successful certification schemes historically have started from the ground up. Organic and fairtrade began with a large base of committed farmers determined to fashion a better food system. Producers willingly signed up to high standards and clear requirements because they believed in them. Indeed, many already were practicing high standards without certification. But when big players in the food industry have tried to climb on board, game the system and manipulate standards, problems have resulted, even with credible standards like fairtrade and organic. At some point big players will probably undermine these standards. They seem already to be well on the way, but if they succeed their efforts will only have proved that certification standards can never be a substitute for trust, commitment and individual integrity.
The only good news in this story is that it contradicts fundamentally the defeatist arguments of the WWF. Old-fashioned activist strategies, of shaming bad practice, boycotting products and encouraging alternatives, do work. The market opportunity presently being exploited by WWF and company resulted from the success of these strategies, not their failure. Multinational corporations, we should conclude, really do fear activists, non-profits, informed consumers, and small producers, when they all work together.
Footnotes
(1) RSPO standards don’t make much use of maps in their Criterion 7 on “Responsible development of new plantings”. Instead, they rely on “Environmental Impact Assessments” and identifying “High Conservation Value” areas. However, these are every bit as questionable as RTRS maps. According to the UN forum on indigenous peoples, loggers frequently use designations of oil palm plantations as an excuse to log. Yet in its guidance notes to Criterion 7.3, the RSPO standard states: “Development should actively seek to utilise previously cleared and/or degraded land.” It is no secret, therefore, that RSPO plantations offer loggers an excuse to expand.
(2) These standards are also strewn with loopholes. Under RTRS standards, for example, members are allowed to justify why they don’t meet a particular standard. Also under RTRS, farming principles called Integrated Crop Management are “voluntarily adopted”. Annex 5 of the standards states that: “The table below presents a non-exhaustive list of measures and practices that can be used”, i.e. use is optional. Under Bonsucro standards, meanwhile, members are required to meet only 80% of them.
(3) The US version of WWF still calls itself the World Wildlife Fund.
(4) The role of the Dutch Government in financing and otherwise supporting sustainable certification is important to this story. On Dec 16th 2011 The Dutch Trade ministry announced that Dutch imports of soybeans would be 100% “Responsible” within four years. Dutch WWF, which is coordinating much of the program, is receiving money from the Dutch Government because Holland is a key player in international agriculture. The Dutch government’s sustainable food strategy notes the following: “Although the Netherlands is a small country, it plays a key role in food production and is the second largest exporter of agricultural products in the world, the largest exporter of seed and propagating material and breeding animals and internationally it is a prominent centre of knowledge.”
A second important Dutch consideration is that Rotterdam is the largest destination for importation of produce and commodity crops into Europe.
References
Foley J. et al. (2011) Solutions for a Cultivated Planet. Nature 478: 337-342.
The world record yield for paddy rice production is not held by an agricultural research station or by a large-scale farmer from the United States, but by Sumant Kumar who has a farm of just two hectares in Darveshpura village in the state of Bihar in Northern India. His record yield of 22.4 tons per hectare, from a one-acre plot, was achieved with what is known as the System of Rice Intensification (SRI). To put his achievement in perspective, the average paddy yield worldwide is about 4 tons per hectare. Even with the use of fertilizer, average yields are usually not more than 8 tons.
Sumant Kumar’s success was not a fluke. Four of his neighbors, using SRI methods, and all for the first time, matched or exceeded the previous world record from China, 19 tons per hectare. Moreover, they used only modest amounts of inorganic fertilizer and did not need chemical crop protection.
Using SRI methods, smallholding farmers in many countries are starting to get higher yields and greater productivity from their land, labor, seeds, water and capital, with their crops showing more resilience to the hazards of climate change (Thakur et al 2009; Zhao et al 2009).
These productivity gains have been achieved simply by changing the ways that farmers manage their plants, soil, water and nutrients.
The effect is to get crop plants to grow larger, healthier, longer-lived root systems, accompanied by increases in the abundance, diversity and activity of soil organisms. These organisms constitute a beneficial microbiome for plants that enhances their growth and health in ways similar to how the human microbiome benefits Homo sapiens.
That altered management practices can induce more productive, resilient phenotypes from existing rice plant genotypes has been seen in over 50 countries. The reasons for this improvement are not all known, but there is a growing literature that helps account for the improvements observed in yield and health for rice crops using SRI.
The ideas and practices that constitute SRI were developed inductively in Madagascar some 30 years ago for rice. They are now being adapted to improve the productivity of a wide variety of other crops, starting with wheat, finger millet and sugarcane. Producing more output with fewer external inputs may sound improbable, but it derives from a shift in emphasis from improving plant genetic potential via plant breeding, to providing optimal environments for crop growth.
The adaptation of SRI experience and principles to other crops is being referred to generically as the System of Crop Intensification (SCI), encompassing variants for wheat (SWI), maize (SMI), finger millet (SFMI), sugarcane (SSI), mustard (rapeseed/canola)(another SMI), teff (STI), legumes such as pigeon peas, lentils and soya beans, and vegetables such as tomatoes, chillies and eggplant.
That similar results are seen across such a range of plants suggests that some generic processes may be involved, and that these practices are not only good for growing rice. This suggests to Prof. Norman Uphoff and colleagues within the SRI network that more attention should be given to the contributions made to agricultural production by the soil biota, both in the plants’ rhizospheres and as symbiotic endophytes within the plants themselves (Uphoff et al. 2012).
The evidence reported below has drawn heavily, with permission, from a report that Dr. Uphoff prepared on the extension of SRI to other crops (Uphoff 2012). Much more research and evaluation needs to be done on this progression to satisfy both scientists and practitioners. But this gives an idea of what kinds of advances in agricultural knowledge and practice appear to be emerging.
Origins and Principles
Deriving from empirical work started in the 1960s in Madagascar by a French priest, Fr. Henri de Laulanié, S.J., the System of Rice Intensification (SRI) has shown remarkable capacity to raise smallholders’ rice productivity under a wide variety of conditions around the world: from tropical rainforest regions of Indonesia, to mountainous regions in northeastern Afghanistan, to fertile river basins in India and Pakistan, to arid conditions of Timbuktu on the edge of the Sahara Desert in Mali. SRI methods have proved adaptable to a wide range of agroecological settings.
With SRI management, paddy yields are usually increased by 50-100%, and sometimes by even more, up to the super-yields of Sumant Kumar and his neighbors. Requirements for seed are greatly reduced (by 80-90%), as are those for irrigation water (by 25-50%). Little or no inorganic fertilizer is required if sufficient organic matter can be provided to the soil, and there is little if any need for agrochemical crop protection against pests and diseases. SRI plants are also generally healthier and better able to resist stresses such as drought, extremes of temperature, flooding, and storm damage.
SRI methodology is based on four main principles that interact in synergistic ways:
Establish healthy plants early and carefully, nurturing their root potential.
Reduce plant populations, giving each plant more room to grow above and below ground and room to capture sunlight and obtain nutrients.
Enrich the soil with organic matter, keeping it well-aerated to support better growth of roots and more aerobic soil biota.
Apply water purposefully in ways that favor plant-root and soil-microbial growth, avoiding flooded (anaerobic) soil conditions.
These principles are translated into a number of irrigated rice cultivation practices which under most smallholder farmers’ conditions are the following:
Plant young seedlings carefully and singly, giving them wider spacing usually in a square pattern, so that both roots and canopy have ample room to spread.
Keep the soil moist but not inundated. Provide sufficient water for plant roots and beneficial soil organisms to grow, but not so much as to suffocate or suppress either, e.g., through alternate wetting and drying, or through small but regular applications.
Add as much compost, mulch or other organic matter to the soil as possible, ‘feeding the soil’ so that the soil can, in turn, ‘feed the plant.’
Control weeds with mechanical methods that can incorporate weeds while breaking up the soil’s surface. This actively aerates the root zone as a beneficial by-product of weed control. This practice can promote root growth and the abundance of beneficial soil organisms, adding to yield.
The cumulative result of these practices is to induce the growth of more productive and healthier plants (phenotypes) from any given variety (genotype).
Variants of SRI practices suitable for upland regions have been developed by farmers where there are no irrigation facilities, so SRI is not just for irrigated rice production any more. In both settings, crops can be productive with less irrigation water or rainfall because taking up SRI recommendations enhances the capacity of soil systems to absorb and provide water (‘green water’). SRI practices initially developed to benefit small-scale rice growers are being adapted now for larger-scale production, with methods such as direct-seeding instead of transplanting, and with the mechanization of some labor-intensive operations such as weeding (Sharif 2011).
From the System of Rice Intensification to the System of Crop Intensification
Once the principles of SRI became understood by farmers and they had mastered its practices for rice, farmers began extending SRI ideas and methods to other crops. NGOs and some scientists have also become interested in and supportive of this extrapolation, so a novel process of innovation has ensued. Some results of this process are summarized here.
The following information is not a research report. The comparisons below are not experiment station data but rather results that have come from farmers’ fields in Asia and Africa. The measurements of yields reported here probably have some margin of error. But the differences seen are so large and are so often repeated that they are certainly significant agronomically. The results in the following sections are comparisons with farmers’ current practices, showing how much more production farmers in developing countries could be achieving from their presently available resources.
This innovative management of many crops, referred to under the broad heading of System of Crop Intensification (SCI), is also sometimes aptly referred to in India as the ‘System of Root Intensification,’ another meaning for the acronym SRI.
The changes introduced with SCI practice are driven by the four SRI principles noted above. The first three principles are usually followed fairly closely. The fourth (reduced water application) applies to irrigated production, such as of wheat, sugarcane and some other crops; it has less relevance under rainfed conditions, where farmers have less control over water applications to their crops. There, maintaining sufficient but never excessive soil moisture, for example with water-harvesting methods, corresponds to the fourth SRI principle.
Agriculture in the 21st century must be practiced differently from the previous century; land and water resources are becoming relatively scarcer, of poorer quality, or less reliable. Climatic conditions are in many places becoming more adverse, especially for smallholding farmers. More than ever, they need cropping practices that are more ‘climate-proof.’ By promoting better root growth and more abundant life in the soil, SCI offers millions of insecure, disadvantaged households better opportunities.
Wheat (Triticum)
The extension of SRI practices to wheat, the next most important cereal crop after rice, was fairly quickly seized upon by farmers and researchers in India, Ethiopia, Mali and Nepal. The System of Wheat Intensification (SWI) was first tested in 2008 by the People's Science Institute (PSI), which works with farmers in Himachal Pradesh and Uttarakhand states. Yield estimates showed a 91% increase for unirrigated SWI plots over usual methods in rainfed areas, and an 82% increase for irrigated SWI. This has encouraged an expansion of SWI in these two states.
The most rapid growth and most dramatic results have been in Bihar state of India, where 415 farmers, mostly women, tried SWI methods in 2008/09, with yields averaging 3.6 tons/ha, compared with 1.6 tons/ha using usual practices. The next year, 15,808 farmers used SWI with average yields of 4.6 tons/ha. In the past year, 2011/12, the SWI area in Bihar was reported to be 183,063 hectares, with average yields of 5.1 tons/ha. With SWI management, net income per acre from wheat has been calculated by the NGO PRADAN to rise from Rs. 6,984 to Rs. 17,581, with costs reduced while yields increased. This expansion has been done under the auspices of the Bihar Rural Livelihood Promotion Society, supported by the International Development Association (IDA) of the World Bank.
At about the same time, farmers in northern Ethiopia started on-farm trials of SWI, assisted by the Institute for Sustainable Development (ISD) and supported by a grant from Oxfam America. Seven farmers in 2009 averaged 5.45 tons/ha with SWI methods, the highest reaching 10 tons/ha. A larger set of on-farm trials followed in South Wollo in 2010: SWI yields averaged 4.7 tons/ha with compost and 4.9 tons/ha with inorganic nitrogen (urea) and phosphorus (DAP), a 4% difference too small to justify the cost of purchasing and applying fertilizer. The control plots averaged wheat yields of 1.8 tons/ha.
In 2008-09, farmer trials with SWI methods were started in the Timbuktu region of Mali. There it was learned that transplanting young seedlings was not as effective as direct seeding, and that SRI spacing of 25×25 cm proved too great. Still, obtaining a 10% higher yield with a 94% reduction in seed (10 kg/ha vs. 170 kg/ha), a 40% reduction in labor, and a 30% reduction in water requirements encouraged farmers to continue with their experiments.
In 2009/10, the NGO Africare undertook systematic replicated trials in Timbuktu, evaluating a number of different methods of crop establishment, including direct seeding in spacing combinations from 10 to 20 cm, line sowing, transplanting of seedlings, and control plots, all on farmers’ fields. Compared to the control average (2.25 tons/ha), the SWI transplanting method and 15×15 cm direct seeding gave the greatest yield response, 5.4 tons/ha, an increase of 140%.
SWI evaluations were also done in 2010 in the Far Western region of Nepal by the NGO Mercy Corps, under the EU-FAO Food Facility Programme. The control yield was 3.4 tons/ha using local practices with a local variety. Growing a modern variety with local practices added 10% to yield (3.74 tons/ha); using SWI practices with the same modern variety raised yield by 91%, to 6.5 tons/ha.
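Because several of these percentage gains are quoted alongside the absolute yields, they are easy to sanity-check. The short Python sketch below does this for the Nepal and Bihar figures quoted above; it is purely illustrative arithmetic, and the helper function is our own.

```python
# Sanity check of the wheat (SWI) gains reported above.
# All yield figures (tons/ha) are taken directly from the text.

def pct_gain(control, treatment):
    """Percent increase of a treatment yield over its control yield."""
    return 100 * (treatment - control) / control

# Nepal (Mercy Corps trials): control 3.4 t/ha, local variety and practices.
control = 3.4
modern_variety = 3.74   # modern variety with local practices
swi = 6.5               # same modern variety with SWI practices

print(f"Modern variety alone: +{pct_gain(control, modern_variety):.0f}%")  # ~ +10%
print(f"Modern variety + SWI: +{pct_gain(control, swi):.0f}%")             # ~ +91%

# Bihar 2008/09: 3.6 t/ha with SWI vs 1.6 t/ha with usual practices.
print(f"Bihar 2008/09 SWI gain: +{pct_gain(1.6, 3.6):.0f}%")               # +125%
```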
Mustard (Rapeseed/Canola)
Farmers in Bihar state of India have recently begun adapting SRI methods for growing mustard (also known as rapeseed or canola). In 2009-10, seven women farmers in Gaya district, working with PRADAN and the government's ATMA agency, started applying SRI practices to their mustard crop, an adaptation referred to as the System of Mustard Intensification (SMI). This gave them an average grain yield of 3 tons/ha, three times their usual 1 ton/ha.
The following year, 283 women farmers who used SMI methods averaged 3.25 tons/ha. In 2011-12, 1,636 farmers practiced SMI with an average yield of 3.5 tons/ha. Those who used all of the practices as recommended averaged 4 tons/ha, and one reached a yield of 4.92 tons/ha as measured by government technicians. With SMI, farmers’ costs of production were reduced by half, from Rs. 50 per kg of grain to just Rs. 25 per kilogram.
Sugarcane (Saccharum officinarum)
Shortly after they began using SRI methods in 2004, farmers in Andhra Pradesh state of India began adapting these ideas and practices to their sugarcane production as well. Some obtained as much as three times more yield by cutting their planting materials by 80-90%, introducing much wider spacing of plants, using more compost and mulch to enhance soil organic matter (and control weeds), applying irrigation water sparingly, and much reducing their use of chemical fertilizers and agrochemical sprays.
By 2009 these initial practices had been sufficiently tested, demonstrated and modified, for example by cutting the buds out of cane stalks and planting them in soil or other rooting material to produce healthy seedlings that could be transplanted with very wide spacing. On this basis the joint Dialogue Project on Food, Water and Environment of the World Wide Fund for Nature (WWF) and the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) in Hyderabad launched a 'sustainable sugarcane initiative' (SSI). The project published a manual describing and explaining the suite of methods derived from SRI experience that could raise cane yields by 30% or more, with reduced requirements for both water and chemical fertilizer.
The director of the Dialogue Project, Dr. Biksham Gujja, together with other SRI and SSI colleagues, established a pro bono company, AgSRI, in 2010 to disseminate knowledge and practice of these ecologically friendly innovations among farmers in India and beyond.
The first international activity of AgSRI has been to share information on SSI with sugar growers on the Camilo Cienfuegos production cooperative in Bahia Honda, Cuba. A senior sugar agronomist from the Ministry of Sugar, Lauro Fanjùl, visiting the cooperative to inspect its SSI crop, was amazed at the size, vigor and color of the canes, noting that they were 'still growing.'
Finger Millet (Eleusine coracana)
Some of the first examples of SCI came from farmers in several states of India who had either applied SRI ideas to finger millet (ragi in local languages) or, through their own observation and experimentation, devised more productive cropping systems for finger millet that utilized SRI principles.
In the early 2000s, the NGO Green Foundation in Bangalore learned that farmers in Haveri district of Karnataka state had devised a system for growing ragi that they call Guli Vidhana (square planting). Young seedlings are planted in a square grid, two per hill, spaced 18 inches (45 cm) apart, with organic fertilization. One implement, pulled across the field in different directions, stimulates greater tillering and root growth; another breaks up the topsoil while weeding between and across rows. Conventional methods yield around 1.25 to 2 tons/ha, or up to 3.25 tons/ha with fertilizer inputs; Guli Vidhana methods yield 4.5 to 5 tons/ha, with a maximum yield so far of 6.25 tons/ha.
In Jharkhand state of India in 2005, farmers working with the NGO PRADAN began experimenting with SRI methods for their rainfed finger millet, an adaptation known as SFMI (System of Finger Millet Intensification). Usual yields there were 750 kg to 1 ton/ha with traditional broadcasting practices; yields with transplanted SFMI have averaged 3-4 tons/ha. Costs of production per kg of grain are reduced by 60% with SFMI management, from Rs. 34.00 to Rs. 13.50. In Ethiopia, one farmer using her own version of SRI practices for finger millet is reported by the Institute for Sustainable Development to have obtained a yield of 7.6 tons/ha.
Maize (Zea mays)
There has not yet been much experimentation with growing maize using SRI concepts and methods, but in northern India the People's Science Institute in Dehradun has worked with smallholders in Uttarakhand and Himachal Pradesh states to improve their maize production with adapted SRI practices.
No transplanting is involved, and no irrigation. Farmers plant 1-2 seeds per hill with square spacing of 30×30 cm, having added compost and other organic matter to the soil, and then do three soil-aerating weedings. They have found that some varieties perform best at 30×50 cm spacing. The number of farmers practicing this kind of SCI went from 183 on 10.34 hectares in 2009 to 582 farmers on 63.61 ha in 2010. With these alternative methods, average yields have been 3.5 tons/ha, 75% more than the 2 tons/ha averaged with conventional management.
Because maize is such an important food crop for many millions of food-insecure households, getting more production from their limited land resources, with their present varieties or with improved ones, should be a priority.
Turmeric (Curcuma longa)
Farmers in Thambal village, Salem district, in Tamil Nadu state of India were, as far as is known, the first to establish an SRI Farmers Association in their country. Their appreciation of SRI methods led them to begin extending these ideas to their off-season production of turmeric, a rhizome crop that earns farmers a good income when sold as a spice for Indian cooking.
With this methodology, planting material is reduced by more than 80% by using much smaller rhizome portions to start seedlings. These are transplanted with wider spacing (30×40 cm instead of 30×30 cm), and organic means of fertilization are used (green manure plus vermicompost, Trichoderma, Pseudomonas, and a biofertilizer mixture known as EM, 'Effective Microorganisms,' developed in Japan by T. Higa). Water requirements are cut by two-thirds. With yields 25% higher and costs of production lower, farmers' net income from their turmeric crop can be effectively doubled.
Tef (Eragrostis tef)
Adaptations of SRI ideas for the increased production of tef, the most important cereal grain for Ethiopians, started in 2008-09 under the direction of Dr. Tareke Berhe, at the time director of the Sasakawa Africa Association's regional rice program, based in Addis Ababa. Having grown up in a household that raised tef, and having written theses on tef for his M.Sc. (Washington State University) and Ph.D. (University of Nebraska), Berhe was thoroughly knowledgeable about this crop, both practically and theoretically.
Typical yields for tef grown with traditional broadcasting practices are about 1 ton/ha. Tef seed is tiny, even smaller than mustard seed, with about 2,500 seeds weighing only 1 gram, so growing and transplanting tef seedlings seemed far-fetched. But Berhe found that transplanting young seedlings at 20×20 cm spacing with organic and inorganic fertilization gave yields of 3 to 5 tons/ha. With small amendments of micronutrients (Zn, Cu, Mg, Mn), these yields could be almost doubled again. Such potential within the tef genome, responding to good soil conditions and wider spacing, had not been seen before. Berhe calls these alternative production methods the System of Tef Intensification (STI).
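Back-of-envelope arithmetic shows why transplanting such tiny seed is nevertheless feasible. The sketch below uses only the spacing and seed-count figures quoted above; the assumption of roughly one seed per transplanted hill is ours, for illustration only.

```python
# Rough check of tef seed requirements under STI-style transplanting.
# Figures from the text: 20 cm x 20 cm spacing, ~2,500 seeds per gram.
# The one-seed-per-hill assumption is ours, for illustration.

spacing_m = 0.20                     # 20 cm grid, from the text
plants_per_m2 = 1 / spacing_m**2     # 25 plants per square metre
plants_per_ha = plants_per_m2 * 10_000
seeds_per_gram = 2_500

seed_needed_g = plants_per_ha / seeds_per_gram   # grams of seed per hectare
print(f"{plants_per_ha:,.0f} plants/ha -> ~{seed_needed_g:.0f} g of seed/ha")
# 250,000 plants/ha -> ~100 g of seed/ha (plus a nursery surplus in practice)
```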
In 2010, with a grant from Oxfam America, Dr. Berhe conducted STI trials and demonstrations at Debre Zeit Agricultural Research Center and Mekelle University, major centers for agricultural research in Ethiopia. Their good results gained acceptance for the new practices. He is now serving as an advisor for tef to the Ethiopian government’s Agricultural Transformation Agency (ATA), with support from the Bill and Melinda Gates Foundation.
This year, 7,000 farmers are using STI methods in an expanded trial, and another 100,000 farmers are using less ‘intensified’ methods based on the same SRI principles, not transplanting but having wider spacing of plants with row seeding. As with other crops, tef is quite responsive to management practices that do not crowd the plants together and that improve the soil conditions for abundant root growth.
That SRI principles and methods could be extended from rice to wheat, finger millet, sugarcane, maize, and even tef was not so surprising, since these are all monocotyledons, the grasses and grass-like plants whose stalks and leaves grow from their base. That mustard would respond very well to SRI management practices was unexpected, because it is a dicotyledon, i.e., a flowering plant with its leaves growing from stems rather than from the base of the plant. It is now being found that a number of leguminous crops, also dicotyledons, can benefit from practices inspired by SRI experience.
The Bihar Rural Livelihoods Support Program, Patna, has reported tripled yields of mung bean (green gram) with SCI methods, raising production on farmers' fields from 625 kg/ha to 1.875 tons/ha. With adapted SRI practices, the People's Science Institute in Dehradun reports that small farmers in Uttarakhand state of India are getting the following increases (the percentages follow directly from the yield figures, as the short sketch after this list verifies):
65% increase for lentils (black gram), up from 850 kg/ha to 1.4 tons/ha;
50% increase for soya bean, going from 2.2 to 3.3 tons/ha;
67% increase for kidney beans, going from 1.8 to 3.0 tons/ha;
42% increase for peas, going from 2.13 to 3.02 tons/ha.
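A minimal sketch verifying those percentages from the yield pairs, with all figures taken from the list above:

```python
# Verifying the legume yield gains listed above (figures from the text).
# Units: tons per hectare.

trials = {
    "lentils (black gram)": (0.85, 1.4),   # reported +65%
    "soya bean":            (2.2, 3.3),    # reported +50%
    "kidney beans":         (1.8, 3.0),    # reported +67%
    "peas":                 (2.13, 3.02),  # reported +42%
}

for crop, (before, after) in trials.items():
    gain = 100 * (after - before) / before
    print(f"{crop}: {before} -> {after} t/ha = +{gain:.0f}%")
```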
No transplanting is involved; the seeds are sown 1-2 per hill with wide spacing: 20×30 cm, 25×30 cm, or 30×30 cm for most of these crops, and as much as 15-20 × 30-45 cm for peas. Two or more weedings are done, preferably with soil aeration to enhance root growth.
Fertilization is organic, applying compost augmented by a trio of indigenous organic fertilizers known locally as PAM (panchagavya, amritghol and matkakhad). Panchagavya, a mixture of five products from cattle (ghee or clarified butter, milk, curd or yoghurt, dung, and urine), appears particularly to stimulate the growth of beneficial soil organisms. Seeds are treated before planting with cow urine to make them more resistant to pests and disease.
This production strategy can be considered 'labor-intensive,' but households seeking maximum yield from the small areas of land available to them find that the additional effort and care bring good net returns as well as more security. The resulting crops are more robust, resistant both to pest and disease damage and to adverse climatic conditions.
Vegetables
The extension of SRI concepts and practices to vegetables has been a farmer-led innovation, and has progressed farthest in Bihar state of India. The Bihar Rural Livelihoods Promotion Society (BRLPS), working under the state government with NGOs such as PRADAN leading the field operations, and with financial support from the IDA of the World Bank, has been promoting and evaluating SCI efforts among women's self-help groups to raise their vegetable production.
Women farmers in Bihar have experimented with planting young seedlings widely and carefully, placing them into dug pits that are back-filled with loose soil and organic soil amendments such as vermicompost. Water is used very precisely and carefully. While this system is labor-intensive, it increases yields greatly and benefits particularly the very poorest households. They have access to very little land and water, and they need to use these resources with maximum productivity and little cash expenditure.
A recent article on using SRI methods with vegetables concluded: “It is found that in SRI, SWI & SCI, the disease & pest infestations are less, use of agro chemicals are lesser, requires less water, can sustain water-stressed condition; with more application of organic matter, yields in terms of grain, fodder & firewood are higher.” (from a background paper prepared for the National Colloquium on System of Crop Intensification (SCI), Patna, India, March 2, 2011).
Trials in Ethiopia conducted by the NGO ISD have also shown good results. Readers can learn more about how these ideas are being adapted for very poor, water-stressed Ethiopian households in Tigray province here (Brochure at: http://www.isd.org.et/Publications/Planting%20with%20space%20brochure.pdf).
Conclusion
Philosophically, SRI can be understood as an integrated system of plant-centered agriculture. Fr. Laulanié, who developed SRI thinking and practice during his 34 years in Madagascar, commented in one of his last papers that he did this by learning from the rice plant: "the rice plant is my teacher (mon maître)," he wrote. Each of the component activities of SRI has the goal of providing, as fully as possible, whatever a plant is likely to need in terms of space, light, air, water, and nutrients. It also creates favorable conditions for beneficial soil organisms to grow and prosper in, on and around the plant. SRI thus presents us with a question: if one can provide, in every way, the best possible environment for plants to grow, what benefits and synergisms will we see?
Already, approximately 4-5 million farmers around the world are using SRI methods with rice. The success of these methods can be attributed to many factors: they are low-risk, they do not require farmers to have access to unfamiliar technologies, and they save money on multiple inputs while higher yields earn farmers more. Most important, farmers can readily see the benefits for themselves.
Consequently, many farmers are gaining confidence in their ability to get ‘more from less’ by modifying their crop management practices. They can provide for their families’ food security, obtain surpluses, and avoid indebtedness. In the process, they are enhancing the quality of their soil resources and are buffering their crops against the temperature and precipitation stresses of climate change.
Where this process will end, nobody knows. Almost invariably SRI results in far greater yields, but some farmers go beyond others' results to achieve super-yields for reasons that are not yet fully clear. Experience increasingly points to contributions from the plants' microbiome, and it suggests that the optimization process is still only at its beginning.
References
Sharif A (2011) Technical adaptations for mechanized SRI production to achieve water saving and increased profitability in Punjab, Pakistan. Paddy and Water Environment 9: 111-119.
Thakur AK, Uphoff N and Antony E (2009) An assessment of physiological effects of system of rice intensification (SRI) practices compared with recommended rice cultivation practices in India. Experimental Agriculture 46: 77-98.
Uphoff N (2012) Raising smallholder food crop yields with climate-smart agricultural practices. Report accompanying presentation on 'The System of Rice Intensification (SRI) and Beyond: Coping with Climate Change,' made at World Bank, Washington, DC, October 10.
Uphoff N, Chi F, Dazzo FB and Rodriguez RJ (2012) Soil fertility as a contingent rather than inherent characteristic: Considering the contributions of crop-symbiotic soil biota. In Principles of Sustainable Soil Systems in Agroecosystems, eds. R. Lal and B. Stewart. Boca Raton, FL: Taylor & Francis, in press.
Zhao LM, Wu LH, Li Y, Lu X, Zhu DF and Uphoff N (2009) Influence of the system of rice intensification on rice yield and nitrogen and water use efficiency with different N application rates. Experimental Agriculture 45: 275-286.
How should a regulatory agency announce that it has discovered something potentially very important about the safety of products it has been approving for over twenty years?
In the course of analysis to identify potential allergens in GMO crops, the European Food Safety Authority (EFSA) has belatedly discovered that the most common genetic regulatory sequence in commercial GMOs also encodes a significant fragment of a viral gene (Podevin and du Jardin 2012). This finding has serious ramifications for crop biotechnology and its regulation, but possibly even greater ones for consumers and farmers. This is because there are clear indications that this viral gene (called Gene VI) might not be safe for human consumption. It also may disturb the normal functioning of crops, including their natural pest resistance.
What Podevin and du Jardin discovered is that, of the 86 different transgenic events (unique insertions of foreign DNA) commercialized to date in the United States, 54 contain portions of Gene VI within them. These include any with a widely used gene regulatory sequence called the CaMV 35S promoter (from the cauliflower mosaic virus, CaMV). Among the affected transgenic events are some of the most widely grown GMOs, including Roundup Ready soybeans (40-3-2) and MON810 maize. They also include the controversial NK603 maize recently reported as causing tumors in rats (Séralini et al. 2012).
The researchers themselves concluded that the presence of segments of Gene VI “might result in unintended phenotypic changes”. They reached this conclusion because similar fragments of Gene VI have already been shown to be active on their own (e.g. De Tapia et al. 1993). In other words, the EFSA researchers were unable to rule out a hazard to public health or the environment.
In general, viral genes expressed in plants raise both agronomic and human health concerns (reviewed in Latham and Wilson 2008). This is because many viral genes function to disable their host in order to facilitate pathogen invasion. Often, this is achieved by incapacitating specific anti-pathogen defenses. Incorporating such genes could clearly lead to undesirable and unexpected outcomes in agriculture. Furthermore, viruses that infect plants are often not that different from viruses that infect humans. For example, sometimes the genes of human and plant viruses are interchangeable, while on other occasions inserting plant viral fragments as transgenes has caused the genetically altered plant to become susceptible to an animal virus (Dasgupta et al. 2001). Thus, in various ways, inserting viral genes accidentally into crop plants and the food supply confers a significant potential for harm.
The Choices for Regulators
The original discovery by Podevin and du Jardin (at EFSA) of Gene VI in commercial GMO crops must have presented regulators with sharply divergent procedural alternatives. They could: 1) recall all CaMV Gene VI-containing crops (in Europe that would mean revoking importation and planting approvals); or 2) undertake a retrospective risk assessment of the CaMV promoter and its Gene VI sequences, and hope to give it a clean bill of health.
It is easy to see the attraction for EFSA of option two. Recall would be a massive political and financial decision and would also be a huge embarrassment to the regulators themselves. It would leave very few GMO crops on the market and might even mean the end of crop biotechnology.
Regulators, in principle at least, also have a third option for gauging the seriousness of any potential GMO hazard. GMO monitoring, which is required by EU regulations, ought to allow them to find out whether deaths, illnesses, or crop failures reported by farmers or health officials can be correlated with the Gene VI sequence. Unfortunately, this particular avenue of enquiry is a scientific dead end: not one country has carried through on promises to officially and scientifically monitor any hazardous consequences of GMOs (1).
Unsurprisingly, EFSA chose option two. However, their investigation resulted only in the vague and unreassuring conclusion that Gene VI "might result in unintended phenotypic changes" (Podevin and du Jardin 2012). This means, literally, that changes of unknown number, nature, or magnitude may (or may not) occur. It falls well short of the solid scientific reassurance of public safety needed to explain why EFSA has not ordered a recall.
Can the presence of a fragment of virus DNA really be that significant? Below is an independent analysis of Gene VI and its known properties and their safety implications. This analysis clearly illustrates the regulators’ dilemma.
The Many Functions of Gene VI
Gene VI, like most plant viral genes, produces a multifunctional protein. It has four known roles (so far) in the viral infection cycle. The first is to participate in the assembly of virus particles; there are no current data to suggest this function has any biosafety implications. The second known function is to suppress anti-pathogen defenses by inhibiting a general cellular system called RNA silencing (Haas et al. 2008). Thirdly, Gene VI has the highly unusual function of transactivating (described below) the long RNA (the 35S RNA) produced by CaMV (Park et al. 2001). Fourthly, unconnected to these other mechanisms, Gene VI has very recently been shown to make plants highly susceptible to a bacterial pathogen (Love et al. 2012), which it does by interfering with a common anti-pathogen defense mechanism possessed by plants. The latter three functions of Gene VI (and their risk implications) are explained further below:
1) Gene VI Is an Inhibitor of RNA Silencing
RNA silencing is a mechanism for the control of gene expression at the level of RNA abundance (Bartel 2004). It is also an important antiviral defense mechanism in both plants and animals, and therefore most viruses have evolved genes (like Gene VI) that disable it (Dunoyer and Voinnet 2006).
This attribute of Gene VI raises two obvious biosafety concerns: 1) Gene VI will lead to aberrant gene expression in GMO crop plants, with unknown consequences; and 2) Gene VI will interfere with the ability of plants to defend themselves against viral pathogens. There are numerous experiments showing that, in general, viral proteins that disable gene silencing enhance infection by a wide spectrum of viruses (Latham and Wilson 2008).
2) Gene VI Is a Unique Transactivator of Gene Expression
Multicellular organisms make proteins by a mechanism in which only one protein is produced by each passage of a ribosome along a messenger RNA (mRNA). Once that protein is completed, the ribosome dissociates from the mRNA. However, in a CaMV-infected plant cell, or as a transgene, Gene VI intervenes in this process and directs the ribosome to get back onto an mRNA (reinitiate) and produce the next protein in line on the mRNA, if there is one. This property of Gene VI enables cauliflower mosaic virus to produce multiple proteins from a single long RNA (the 35S RNA). Importantly, this function of Gene VI (which is called transactivation) is not limited to the 35S RNA: Gene VI seems able to transactivate any cellular mRNA (Futterer and Hohn 1991; Ryabova et al. 2002). There are likely to be thousands of mRNA molecules with a short or long protein-coding sequence following the primary one. These secondary coding sequences could be expressed in cells where Gene VI is present, presumably resulting in the production of numerous random proteins. The biosafety implications of this are difficult to assess: these proteins could be allergens, plant or human toxins, or they could be harmless. Moreover, the answer will differ for each commercial crop species into which Gene VI has been inserted.
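Transactivation is easier to grasp with a toy model. The sketch below is conceptual only, not molecular biology software: the mRNA and ORF names are invented, and the biology is reduced to a single rule, namely that without a transactivator only the first coding sequence on an mRNA is translated, while with one the ribosome can reinitiate on the downstream sequences.

```python
# Toy model of translational reinitiation (transactivation).
# ORF names are hypothetical; this illustrates the concept only.

# A polycistronic mRNA: several protein-coding sequences on one message.
mrna_orfs = ["ORF_A", "ORF_B", "ORF_C"]

def translate(orfs, transactivator_present):
    """Return the proteins produced from one mRNA in this toy model.

    Normally the ribosome dissociates after the first ORF; with a
    transactivator (such as the Gene VI protein), it can reinitiate
    and translate the downstream ORFs as well.
    """
    if transactivator_present:
        return list(orfs)   # reinitiation: every ORF gets translated
    return orfs[:1]         # default: only the first ORF is translated

print(translate(mrna_orfs, transactivator_present=False))  # ['ORF_A']
print(translate(mrna_orfs, transactivator_present=True))   # ['ORF_A', 'ORF_B', 'ORF_C']
```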
3) Gene VI Interferes with Host Defenses
A very recent finding, unknown to Podevin and du Jardin, is that Gene VI has a second mechanism by which it interferes with plant anti-pathogen defenses (Love et al. 2012). It is too early to be sure of the mechanistic details, but the result is to make plants carrying Gene VI more susceptible to certain pathogens and less susceptible to others. Obviously, this could impact farmers. However, the discovery of an entirely new function for Gene VI while EFSA's paper was in press also makes clear that a full appraisal of all the likely effects of Gene VI is not currently achievable.
Is There a Direct Human Toxicity Issue?
When Gene VI is intentionally expressed in transgenic plants, it causes them to become chlorotic (yellow), to develop growth deformities, and to have reduced fertility, in a dose-dependent manner (Zijlstra et al. 1996). Plants expressing Gene VI also show gene expression abnormalities. These results indicate that, not unexpectedly given its known functions, the protein produced by Gene VI functions as a toxin and is harmful to plants (Takahashi et al. 1989). Since the known targets of Gene VI activity (ribosomes and gene silencing) are also found in human cells, a reasonable concern is that the protein produced by Gene VI might be a human toxin. This is a question that can only be answered by future experiments.
Is Gene VI Protein Produced in GMO Crops?
Given that expression of Gene VI is likely to cause harm, a crucial issue is whether the actual inserted transgene sequences found in commercial GMO crops will produce any functional protein from the fragment of Gene VI present within the CaMV sequence.
There are two aspects to this question. One is the length of the Gene VI fragment accidentally introduced by developers. This appears to vary, but most of the 54 approved transgenes contain the same 528 base pairs of the CaMV 35S promoter sequence, corresponding to approximately the final third of Gene VI. Fragments of Gene VI produced by deletion are active when expressed in plant cells, and functions of Gene VI are believed to reside in this final third. There is therefore clear potential for unintended effects if this fragment is expressed (e.g. De Tapia et al. 1993; Ryabova et al. 2002; Kobayashi and Hohn 2003).
The second aspect is the quantity of Gene VI protein that could be produced in GMO crops. Once again, this can ultimately be resolved only by direct quantitative experiments. Nevertheless, we can theorize that the amount produced will be specific to each independent insertion event, because significant Gene VI expression would probably require specific sequences to precede it (such as a gene promoter and an ATG, a protein start codon). Expression is therefore likely to depend heavily on variables such as the details of the inserted transgenic DNA and where in the plant genome the transgene inserted.
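The kind of sequence screening that detects such coding potential is conceptually simple. Below is a minimal, illustrative Python sketch that scans the three forward reading frames of a DNA fragment for ATG-initiated open reading frames (ORFs); the example fragment and the ten-codon threshold are our own arbitrary choices, and a real analysis like Podevin and du Jardin's would also examine the reverse strand and compare hits against known viral genes.

```python
# Minimal sketch of an ORF screen: find ATG...stop stretches in the three
# forward reading frames. The example sequence is an invented placeholder,
# not the actual CaMV 35S promoter.

STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_codons=10):
    """Return (start, end) spans of ATG-initiated ORFs in the 3 forward frames."""
    seq = seq.upper()
    orfs = []
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon == "ATG":
                start = i                       # candidate start codon
            elif start is not None and codon in STOP_CODONS:
                if (i - start) // 3 >= min_codons:
                    orfs.append((start, i + 3))
                start = None
    return orfs

# Hypothetical fragment: an ATG followed by 12 lysine codons and a stop.
fragment = "CC" + "ATG" + "AAA" * 12 + "TGA" + "GG"
print(find_orfs(fragment))  # [(2, 44)] -> one in-frame coding stretch found
```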
Commercial transgenic crop varieties can also contain superfluous copies of the transgene, including those that are incomplete or rearranged (Wilson et al 2006). These could be important additional sources of Gene VI protein. The decision of regulators to allow such multiple and complex insertion events was always highly questionable, but the realization that the CaMV 35S promoter contains Gene VI sequences provides yet another reason to believe that complex insertion events increase the likelihood of a biosafety problem.
Even direct quantitative measurements of Gene VI protein in individual crop authorizations would not fully resolve the scientific questions, however. No one knows, for example, what quantity, location or timing of protein production would be significant for risk assessment, so the answers necessary for science-based risk assessment are unlikely to emerge soon.
Big Lessons for Biotechnology
It is perhaps the most basic assumption in all of risk assessment that the developer of a new product provides regulators with accurate information about what is being assessed. Perhaps the next most basic assumption is that regulators independently verify this information. We now know, however, that for over twenty years neither of those simple expectations has been met. Major public universities, biotech multinationals, and government regulators everywhere seemingly failed to appreciate the relatively simple possibility that the DNA constructs they were responsible for encoded a viral gene.
This lapse occurred despite the fact that Gene VI was not truly hidden; the relevant information on its existence has been freely available in the scientific literature since well before the first biotech approval (Franck et al. 1980). We ourselves have offered specific warnings that viral sequences could contain unsuspected genes (Latham and Wilson 2008). The inability of risk assessment processes to incorporate longstanding and repeated scientific findings is every bit as worrisome as the failure to anticipate intellectually the possibility of overlapping genes when manipulating viral sequences.
This sense of a generic failure is reinforced by the fact that this is not an isolated event. There exist other examples of commercially approved viral sequences having overlapping genes that were never subjected to risk assessment. These include numerous commercial GMOs containing promoter regions of the closely related figwort mosaic virus (FMV), which were not considered by Podevin and du Jardin. Inspection of commercial sequence data shows that the commonly used FMV promoter overlaps its own Gene VI (Richins et al. 1987). A third example is the virus-resistant potato NewLeaf Plus (RBMT-22-82), whose transgene contains approximately 90% of the P0 gene of potato leafroll virus. The known function of this gene, whose existence was discovered only after US approval, is to inhibit the anti-pathogen defenses of its host (Pfeffer et al. 2002). Fortunately, this potato variety was never actively marketed.
A further key point relates to the biotech industry and its campaign to secure public approval and a permissive regulatory environment. This has led the industry to claim repeatedly, firstly, that GMO technology is precise and predictable; secondly, that its own competence and self-interest would prevent it from ever bringing potentially harmful products to market; and thirdly, that only well-studied and fully understood transgenes are commercialized. It is hard to imagine a finding more damaging to these claims than the revelations surrounding Gene VI.
Biotechnology, it is often forgotten, is not just a technology. It is an experiment in the proposition that human institutions can perform adequate risk assessments on novel living organisms. Rather than treating that question as primarily a daunting scientific one, we should for now consider that the primary obstacle will be overcoming the much more mundane trap of human complacency and incompetence. We are not there yet, and this incident will therefore reinforce the demands for GMO labeling in places where it is absent.
What Regulators Should Do Now
This summary of the scientific risk issues shows that a segment of a poorly characterized viral gene, never subjected to any risk assessment (until now), was allowed onto the market. This gene is currently present in commercial crops growing on a large scale, and it is widespread in the food supply.
Even now that EFSA’s own researchers have belatedly considered the risk issues, no one can say whether the public has been harmed, though harm appears a clear scientific possibility. Considered from the perspective of professional and scientific risk assessment, this situation represents a complete and catastrophic system failure.
But the saga of Gene VI is not yet over. There is no certainty that further scientific analysis will resolve the remaining uncertainties, or provide reassurance. Future research may in fact increase the level of concern or uncertainty, and this is a possibility that regulators should weigh heavily in their deliberations.
To return to the original choices before EFSA, these were either to recall all CaMV 35S promoter-containing GMOs, or to perform a retrospective risk assessment. This retrospective risk assessment has now been carried out and the data clearly indicate a potential for significant harm. The only course of action consistent with protecting the public and respecting the science is for EFSA, and other jurisdictions, to order a total recall. This recall should also include GMOs containing the FMV promoter and its own overlapping Gene VI.
Footnotes
1) EFSA regulators might now be regretting their failure to implement meaningful GMO monitoring. It would be a good question for European politicians to ask EFSA and for the board of EFSA to ask the GMO panel, whose job it is to implement monitoring.
References
Bartel DP (2004) MicroRNAs: Genomics, biogenesis, mechanism, and function. Cell 116: 281-297.
Dasgupta R, Garcia BH and Goodman RM (2001) Systemic spread of an RNA insect virus in plants expressing plant viral movement protein genes. Proc. Natl. Acad. Sci. USA 98: 4910-4915.
De Tapia M, Himmelbach A and Hohn T (1993) Molecular dissection of the cauliflower mosaic virus translation transactivator. EMBO J. 12: 3305-3314.
Dunoyer P and Voinnet O (2006) The complex interplay between plant viruses and host RNA-silencing pathways. Current Opinion in Plant Biology 8: 415-423.
Franck A, Guilley H, Jonard G, Richards K and Hirth L (1980) Nucleotide sequence of cauliflower mosaic virus DNA. Cell 21: 285-294.
Futterer J and Hohn T (1991) Translation of a polycistronic mRNA in the presence of the cauliflower mosaic virus transactivator protein. EMBO J. 10: 3887-3896.
Haas G, Azevedo J, Moissiard G, Geldreich A, Himber C, Bureau M et al. (2008) Nuclear import of CaMV P6 is required for infection and suppression of the RNA silencing factor DRB4. EMBO J. 27: 2102-2112.
Kobayashi K and Hohn T (2003) Dissection of cauliflower mosaic virus transactivator/viroplasmin reveals distinct essential functions in basic virus replication. J. Virol. 77: 8577-8583.
Latham JR and Wilson AK (2008) Transcomplementation and synergism in plants: Implications for viral transgenes? Molecular Plant Pathology 9: 85-103.
Love AJ, Geri C, Laird J, Carr C, Yun BW, Loake GJ et al. (2012) Cauliflower mosaic virus protein P6 inhibits signaling responses to salicylic acid and regulates innate immunity. PLoS One 7(10): e47535.
Park H-S, Himmelbach A, Browning KS, Hohn T and Ryabova LA (2001) A plant viral 'reinitiation' factor interacts with the host translational machinery. Cell 106: 723-733.
Pfeffer S, Dunoyer P, Heim F, Richards KE, Jonard G and Ziegler-Graff V (2002) P0 of Beet Western Yellows Virus is a suppressor of posttranscriptional gene silencing. J. Virol. 76: 6815-6824.
Podevin N and du Jardin P (2012) Possible consequences of the overlap between the CaMV 35S promoter regions in plant transformation vectors used and the viral gene VI in transgenic plants. GM Crops and Food 3: 1-5.
Richins R, Scholthof H and Shepherd RJ (1987) Sequence of figwort mosaic virus DNA (caulimovirus group). Nucleic Acids Research 15: 8451-8466.
Ryabova LA, Pooggin MH and Hohn T (2002) Viral strategies of translation initiation: Ribosomal shunt and reinitiation. Progress in Nucleic Acid Research and Molecular Biology 72: 1-39.
Séralini G-E, Clair E, Mesnage R, Gress S, Defarge N, Malatesta M, Hennequin D and Spiroux de Vendômois J (2012) Long term toxicity of a Roundup herbicide and a Roundup-tolerant genetically modified maize. Food Chem. Toxicol.
Takahashi H, Shimamoto K and Ehara Y (1989) Cauliflower mosaic virus gene VI causes growth suppression, development of necrotic spots and expression of defence-related genes in transgenic tobacco plants. Molecular and General Genetics 216: 188-194.
Wilson AK, Latham JR and Steinbrecher RA (2006) Transformation-induced mutations in transgenic plants: Analysis and biosafety implications. Biotechnology and Genetic Engineering Reviews 23: 209-234.
Zijlstra C, Schärer-Hernández N, Gal S and Hohn T (1996) Arabidopsis thaliana expressing the cauliflower mosaic virus ORF VI transgene has a late flowering phenotype. Virus Genes 13: 5-17.