A Few Thoughts and References Re Conservation and Synthetic Biology

Yesterday at Synthetic Biology 7.0 in Singapore, we had a good discussion about the intersection of conservation, biodiversity, and synthetic biology. I said I would post a few papers relevant to the discussion, which are below.

These papers are variously: the framing document for the original meeting at the University of Cambridge in 2013 (see also "Harry Potter and the Future of Nature"), sponsored by the Wildlife Conservation Society; follow-on discussions from meetings in San Francisco and Bellagio; and my own efforts to figure out how to quantify the economic impact of biotechnology (which is not small, especially when compared to much older industries) and the economic damage from invasive species and biodiversity loss (which is also not small, measured as either dollars or jobs lost). The final paper in this list is my first effort to link conservation and biodiversity with economic and physical security, which requires shifting our thinking from the national security of nation states and their political boundaries to the natural security of the systems and resources that those nation states rely on for continued existence.

"Is It Time for Synthetic Biodiversity Conservation?", Antoinette J. Piaggio1, Gernot Segelbacher, Philip J. Seddon, Luke Alphey, Elizabeth L. Bennett, Robert H. Carlson, Robert M. Friedman, Dona Kanavy, Ryan Phelan, Kent H. Redford, Marina Rosales, Lydia Slobodian, Keith WheelerTrends in Ecology & Evolution, Volume 32, Issue 2, February 2017, Pages 97–107

Robert Carlson, "Estimating the biotech sector's contribution to the US economy", Nature Biotechnology, 34, 247–255 (2016), 10 March 2016

Kent H. Redford, William Adams, Rob Carlson, Bertina Ceccarelli, "Synthetic biology and the conservation of biodiversity", Oryx, 48(3), 330–336, 2014.

"How will synthetic biology and conservation shape the future of nature?", Kent H. Redford, William Adams, Georgina Mace, Rob Carlson, Steve Sanderson, Framing Paper for International Meeting, Wildlife Conservation Society, April 2013.

"From national security to natural security", Robert Carlson, Bulletin of the Atomic Scientists, 11 Dec 2013.

Late Night, Unedited Musings on Synthesizing Secret Genomes

By now you have probably heard that a meeting took place this past week at Harvard to discuss large scale genome synthesis. The headline large genome to synthesize is, of course, that of humans. All 6 billion (duplex) bases, wrapped up in 23 pairs of chromosomes that display incredible architectural and functional complexity that we really don't understand very well just yet. So no one is going to be running off to the lab to crank out synthetic humans. That 6 billion bases, by the way, just for one genome, exceeds the total present global demand for synthetic DNA. This isn't happening tomorrow. In fact, synthesizing a human genome isn't going to happen for a long time.

But, if you believe the press coverage, nefarious scientists are planning to pull a Frankenstein and "fabricate" a human genome in secret. Oh, shit! Burn some late night oil! Burn some books! Wait, better — burn some scientists! Not so much, actually. There are several important points here. I'll take them in no particular order.

First, it's true, the meeting was held behind closed doors. It wasn't intended to be so, originally. The rationale given by the organizers for the change is that a manuscript on the topic is presently under review, and the editor of the journal considering the manuscript made it clear that the journal considers the entire topic under embargo until the paper is published. This put the organizers in a bit of a pickle. They decided the easiest way to comply with the editor's wishes (which were communicated to the authors well after the attendees had made travel plans) was to hold the meeting under rules even stricter than the Chatham House Rule until the paper is published. At that point, they plan to make a full record of the meeting available. It just isn't a big deal. If it sounds boring and stupid so far, it is. The word "secret" was only introduced into the conversation by a notable critic who, as best I can tell, perhaps misconstrued the language around the editor's requirement to respect the embargo. A requirement that is also boring and stupid. But, still, we are now stuck with "secret", and all the press and bloggers who weren't there are seeing Watergate headlines and fame. Still boring and stupid.

Next, it has been reported that there were no press at the meeting. However, I understand that there were several reporters present. It has also been suggested that the press present were muzzled. This is a ridiculous claim if you know anything about reporters. They've simply been asked to respect the embargo, which so far they are doing, just like they do with every other embargo. (Note to self, and to readers: do not piss off reporters. Do not accuse them of being simpletons or shills. Avoid this at all costs. All reporters are brilliant and write like Hemingway and/or Shakespeare and/or Oliver Morton / Helen Branswell / Philip Ball / Carl Zimmer / Erika Check Hayden. Especially that one over there. You know who I mean. Just sayin'.)

How do I know all this? You can take a guess, but my response is also covered by the embargo.

Moving on: I was invited to the meeting in question, but could not attend. I've checked the various associated correspondence, and there's nothing about keeping it "secret". In fact, the whole frickin' point of coupling the meeting to a serious, peer-reviewed paper on the topic was to open up the conversation with the public as broadly as possible. (How do you miss that unsubtle point, except by trying?) The paper was supposed to come out before, or, at the latest, at the same time as the meeting. Or, um, maybe just a little bit after? But, whoops. Surprise! Academic publishing can be slow and/or manipulated/politicized. Not that this happened here. Anyway, get over it. (Also: Editors! And, reviewers! And, how many times will I say "this is the last time!")

(Psst: an aside. Science should be open. Biology, in particular, should be done in the public view and should be discussed in the open. I've said and written this in public on many occasions. I won't bore you with the references. [Hint: right here.] But that doesn't mean that every conversation you have should be subject to review by the peanut gallery right now. Think of it like a marriage/domestic partnership. You are part of society; you have a role and a responsibility, especially if you have children. But that doesn't mean you publicize your pillow talk. That would be deeply foolish and would inevitably prevent you from having honest conversations with your spouse. You need privacy to work on your thinking and relationships. Science: same thing. Critics: fuck off back to that sewery rag in — wait, what was I saying about not pissing off reporters?)

Is this really a controversy? Or is it merely a controversy because somebody said it is? Plenty of people are weighing in who weren't there or, undoubtedly worse from their perspective, weren't invited and didn't know it was happening. So I wonder if this is more about drawing attention to those doing the shouting. That is probably unfair, this being an academic discussion, full of academics.

Secondly (am I just on secondly?), the supposed ethical issues. Despite what you may read, there is no rush. No human genome, nor any human chromosome, will be synthesized for some time to come. Make no mistake about how hard a technical challenge this is. While we have some success in hand at synthesizing yeast chromosomes, and while that project certainly serves as some sort of model for other genomes, the chromatin in multicellular organisms has proven more challenging to understand or build. Consequently, any near-term progress made in synthesizing human chromosomes is going to teach us a great deal about biology, about disease, and about what makes humans different from other animals. It is still going to take a long time. There isn't any real pressing ethical issue to be had here, yet. Building the Übermensch comes later. You can be sure, however, that any federally funded project to build the Übermensch will come with a ~2% set aside to pay for plenty of bioethics studies. And that's a good thing. It will happen.

There is, however, an ethical concern here that needs discussing. I care very deeply about getting this right, and about not screwing up the future of biology. As someone who has done multiple tours on bioethics projects in the U.S. and Europe, served as a scientific advisor to various other bioethics projects, and testified before the Presidential Commission for the Study of Bioethical Issues (whew!), I find that many of these conversations are more about the ethicists than the bio. Sure, we need to have public conversations about how we use biology as a technology. It is a very powerful technology. I wrote a book about that. If only we had such involved and thorough ethical conversations about other powerful technologies. Then we would have more conversations about stuff. We would converse and say things, all democratic-like, and it would feel good. And there would be stuff, always more stuff to discuss. We would say the same things about that new stuff. That would be awesome, that stuff, those words. <dreamy sigh> You can quote me on that. <another dreamy sigh>

But on to the technical issues. As I wrote last month, I estimate the global demand for synthetic DNA (sDNA) to be 4.8 billion bases worth of short oligos and ~1 billion bases worth of longer double-stranded DNA (dsDNA), for not quite 6 gigabases total. That, obviously, is the equivalent of a single human duplex genome. Most of that demand is from commercial projects that must return value within a few quarters, which biotech is now doing at eye-popping rates. Any synthetic human genome project is going to take many years, if not decades, and any commercial return is way, way off in the future. Even if the annual growth in commercial use of sDNA were 20% — which it isn't — this tells you, dear reader, that the commercial biotech use of synthetic DNA is never, ever, going to provide sufficient demand to scale up production to build many synthetic human genomes. Or possibly even a single human genome. The government might step in to provide a market to drive technology, just as it did for the human genome sequencing project, but my judgement is that the scale mismatch is so large as to be insurmountable. Even while sDNA is already a commodity, it has far more value in reprogramming crops and microbes with relatively small tweaks than it has in building synthetic human genomes. So if this story were only about existing use of biology as technology, you could go back to sleep.
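
If you want to check my pessimism with a calculator, here is the back-of-envelope version, using the round numbers above and an assumed 20% annual growth rate (which, again, is more than the industry is actually managing):

```python
# Back-of-envelope: can commercial demand for synthetic DNA ever supply
# genome-scale synthesis? Round numbers from the text above; the 20%
# annual growth rate is an assumed upper bound, not an observed figure.

GENOME = 6e9      # bases in one duplex human genome
demand = 6e9      # today's total global sDNA demand, bases/year
growth = 1.20     # assumed 20% annual growth in commercial demand

for years in (5, 10, 20):
    future = demand * growth ** years
    print(f"after {years:2d} years: {future:.1e} bases/yr "
          f"~ {future / GENOME:.0f} genome-equivalents/yr")
```

Two decades of generous growth gets you to a few tens of genome-equivalents per year, all of it already spoken for by paying customers. Hence the scale mismatch.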

But there is a use of DNA that might change this story, which is why we should be paying attention, even at this late hour on a Friday night.

DNA is, by far, the most sophisticated and densest information storage medium humans have ever come across. DNA can be used to store orders of magnitude more bits per gram than anything else humans have come up with. Moreover, the internet is expanding so rapidly that our need to archive data will soon outstrip existing technologies. If we continue down our current path, in coming decades we would need not only exponentially more magnetic tape, disk drives, or flash memory, but exponentially more factories to produce these storage media, and exponentially more warehouses to store them. Even if this is technically feasible it is economically implausible. But biology can provide a solution. DNA exceeds by many times even the theoretical capacity of magnetic tape or solid state storage.
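
That density claim is easy to sanity-check. A minimal back-of-envelope sketch, assuming ~330 g/mol per nucleotide of single-stranded DNA and the theoretical maximum of 2 bits per base:

```python
# Rough information density of DNA. Assumptions: ~330 g/mol per
# nucleotide of single-stranded DNA, and the theoretical maximum of
# 2 bits per base; practical encodings with error correction and
# redundancy store closer to 1 bit per base.

AVOGADRO = 6.022e23
GRAMS_PER_MOL_PER_BASE = 330.0

bases_per_gram = AVOGADRO / GRAMS_PER_MOL_PER_BASE   # ~1.8e21 bases
bits_per_gram = 2 * bases_per_gram                   # ~3.7e21 bits
exabytes_per_gram = bits_per_gram / 8 / 1e18         # ~460 exabytes

print(f"{bases_per_gram:.1e} bases per gram")
print(f"~{exabytes_per_gram:.0f} exabytes per gram, theoretical")
```

Hundreds of exabytes per gram, even before you quibble about encoding overhead; give back an order of magnitude for error correction and indexing and it is still no contest.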

A massive warehouse full of magnetic tapes might be replaced by an amount of DNA the size of a sugar cube. Moreover, while tape might last decades, and paper might last millennia, we have found intact DNA in animal carcasses that have spent three-quarters of a million years frozen in the Canadian tundra. Consequently, there is a push to combine our ability to read and write DNA with our accelerating need for more long-term information storage. Encoding and retrieval of text, photos, and video in DNA has already been demonstrated. (Yes, I am working on one of these projects, but I can't talk about it just yet. We're not even to the embargo stage.) 

Governments and corporations alike have recognized the opportunity. Both are funding research to support the scaling up of infrastructure to synthesize and sequence DNA at sufficient rates.

For a "DNA drive" to compete with an archival tape drive today, it needs to be able to write ~2 Gbits/sec, which is about 2 Gbases/sec. That is the equivalent of ~20 synthetic human genomes/min, or ~10K sHumans/day, if I must coin a unit of DNA synthesis to capture the magnitude of the change. Obviously this is likely to be in the form of either short ssDNA, or possibly medium-length ss- or dsDNA if enzymatic synthesis becomes a factor. If this sDNA were to be used to assemble genomes, it would first have to be assembled into genes, and then into synthetic chromosomes, a nontrivial task. While this would be hard, and would take a great deal of effort and many PhD theses, it certainly isn't science fiction.
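
For those checking the arithmetic at home, here is the sHuman unit worked out explicitly (assuming, as above, roughly 1 bit per base after encoding overhead, so 2 Gbit/s of data is ~2 Gbases/s of synthesis):

```python
# The sHuman arithmetic. Assumes ~1 bit per base after encoding
# overhead and a 6 Gbase duplex human genome.

write_rate = 2e9            # bases/second, to match an archival tape drive
genome = 6e9                # bases per duplex human genome

per_min = write_rate * 60 / genome     # ~20 sHumans/min
per_day = per_min * 60 * 24            # ~29K/day at 100% duty cycle

annual_sDNA_market = 6e9    # today's total global demand, bases/year
print(f"{per_min:.0f} sHumans/min, {per_day:,.0f} sHumans/day (continuous)")
print(f"one day of writing ~ {write_rate * 86400 / annual_sDNA_market:,.0f}x "
      f"today's annual global sDNA demand")
```

Run the machine at a realistic duty cycle and you land near the ~10K sHumans/day figure above; either way, a single archival DNA drive would write more DNA in a day than the entire industry currently sells in a year. Hold that thought.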

But here, finally, is the interesting bit: the volume of sDNA necessary to make DNA information storage work, and the necessary price point, would make possible any number of synthetic genome projects. That, dear reader, is definitely something that needs careful consideration by publics. And here I do not mean "the public", the 'them' opposed to scientists and engineers in the know and in the do (and in the doo-doo, just now), but rather the Latiny, rootier sense of "the people". There is no them, here, just us, all together. This is important.

The scale of the demand for DNA storage, and the price at which it must operate, will completely alter the economics of reading and writing genetic information, in the process marginalizing the use by existing multibillion-dollar biotech markets while at the same time massively expanding capabilities to reprogram life. This sort of pull on biotechnology from non-traditional applications will only increase with time. That means whatever conversation we think we are having about the calm and ethical development of biological technologies is about to be completely inundated and overwhelmed by the relentless pull of global capitalism, beyond borders, probably beyond any control. Note that all the hullabaloo so far about synthetic human genomes, and even about CRISPR editing of embryos, etc., has been written by Western commentators, in the Western press. But not everybody lives in the West, and vast resources are pushing development of biotechnology outside of the West. And that is worth an extended public conversation.

So, to sum up, have fun with all the talk of secret genome synthesis. That's boring. I am going off the grid for the rest of the weekend to pester littoral invertebrates with my daughter. You are on your own for a couple of days. Reporters, you are all awesome, make of the above what you will. Also: you are all awesome. When I get back to the lab on Monday I will get right on with fabricating the Übermensch for fun and profit. But — shhh — that's a secret.

Staying Sober about Science

The latest issue of The Hastings Center Report carries an essay of mine, "Staying Sober about Science" (free access after registration), about my thoughts on New Directions: The Ethics of Synthetic Biology and Emerging Technologies (PDF) from The Presidential Commission for the Study of Bioethical Issues.

Here is the first paragraph:

Biology, we are frequently told, is the science of the twenty-first century. Authority informs us that moving genes from one organism to another will provide new drugs, extend both the quantity and quality of life, and feed and fuel the world while reducing water consumption and greenhouse gas emissions. Authority also informs that novel genes will escape from genetically modified crops, thereby leading to herbicide-resistant weeds; that genetically modified crops are an evil privatization of the gene pool that will with certainty lead to the economic ruin of small farmers around the world; and that economic growth derived from biological technologies will cause more harm than good. In other words, we are told that biological technologies will provide benefits and will come with costs--with tales of both costs and benefits occasionally inflated--like every other technology humans have developed and deployed over all of recorded history.

And here are a couple of other selected bits:

Overall, in my opinion, the report is well considered. One must commend President Obama for showing leadership in so rapidly addressing what is seen in some quarters as a highly contentious issue. However, as noted by the commission itself, much of the hubbub is due to hype by both the press and certain parties interested in amplifying the importance of the Venter Institute's accomplishments. Certain scientists want to drive a stake into the heart of vitalism, and perhaps to undermine religious positions concerning the origin of life, while "civil society" groups stoke fears about Frankenstein and want a moratorium on research in synthetic biology. Notably, even when invited to comment by the commission, religious groups had little to say on the matter.

The commission avoided the trap of proscribing from on high the future course of a technology still emerging from the muck. Yet I cannot help the feeling that the report implicitly assumes that the technology can be guided or somehow controlled, as does most of the public discourse on synthetic biology. The broader history of technology, and of its regulation or restriction, suggests that directing its development would be no easy task.8 Often technologies that are encouraged and supported are also stunted, while technologies that face restriction or prohibition become widespread and indispensable.

...The commission's stance favors continued research in synthetic biology precisely because the threats of enormous societal and economic costs are vague and unsubstantiated. Moreover, there are practical implications of continued research that are critical to preparing for future challenges. The commission notes that "undue restriction may not only inhibit the distribution of new benefits, but it may also be counterproductive to security and safety by preventing researchers from developing effective safeguards."12 Continued pursuit of knowledge and capability is critical to our physical and economic security, an argument I have been attempting to inject into the conversation in Washington, D.C., for a decade. The commission firmly embraced a concept woven into the founding fabric of the United States. In the inaugural State of the Union Address in 1790, George Washington told Congress "there is nothing which can better deserve your patronage than the promotion of science and literature. Knowledge is in every country the surest basis of publick happiness."13

The pursuit of knowledge is every bit as important a foundation of the republic as explicit acknowledgment of the unalienable rights of life, liberty, and the pursuit of happiness. Science, literature, art, and technology have played obvious roles in the cultural, economic, and political development of the United States. More broadly, science and engineering are inextricably linked with human progress from a history of living in dirt, disease, and hunger to . . . today. One must of course acknowledge that today's world is imperfect; dirt, disease, and hunger remain part of the human experience. But these ills will always be part of the human experience. Overall, the pursuit of knowledge has vastly improved the human condition. Without scientific inquiry, technological development, and the economic incentive to refine innovations into useful and desirable products, we would still be scrabbling in the dirt, beset by countless diseases, often hungry, slowly losing our teeth.

There's more here.

References:

8. R. Carlson, Biology Is Technology: The Promise, Peril, and New Business of Engineering Life (Cambridge, Mass.: Harvard University Press, 2010).

12. Presidential Commission for the Study of Bioethical Issues, New Directions, 5.

13. G. Washington, "The First State of the Union Address," January 8, 1790, http://ahp.gatech.edu/first_state_union_1790.html.

Shame On You, Portland!

What Happened to March?  I got on a plane this morning headed for New York, but somehow arrived on April 1st.  It's the only explanation for this:

Portland hurts Tibetans
(China Daily)
Updated: 2010-03-11 07:51

While many in the international community are watching with anxiety to see if Washington moves to repair its ties with Beijing, a reckless decision by an American city is rubbing salt into the unhealed wound of the world's most important bilateral relations.

The city of Portland, Oregon, proclaimed Wednesday, March 10, their "Tibet Awareness Day" despite strong opposition from the Chinese government.

While most people and most countries in the world recognize Tibet as part of China, the decision by the American city interferes in China's internal affairs and is an open defiance of China's state sovereignty.

It could have an adverse effect on Sino-US relations, which has yet to recover from major deterioration following Washington's $6.4-billion arms sale to Taiwan and US President Barack Obama's meeting with the Dalai Lama.

The designation of the "Tibet Awareness Day" was apparently orchestrated by the Dalai Lama clique, which has been engaged in activities aimed to separate China and undermine Tibet's stability in the guise of religion.

It is still beyond our belief that politicians in Portland have chosen to celebrate a handful of fanatics trumpeting Tibet independence while turning a blind eye to either history or the status quo of present-day Tibet. History has told us that Tibet has always been a part of China, and there is ample evidence proving the fact that Tibetan people now enjoy a much better life and enjoy the full freedom of religion.

Americans are well-known for putting individual freedom above everything. While the city of Portland entertains a few Tibet separatists, has it ever occurred to its decision-makers that their move are infringing on the interest of 2.8-million Tibetans here in China?

Whither Gene Patents?

Wired and GenomeWeb (subscription only) have a bit of reporting on arguments in a case that will probably substantially affect patents on genes.  The case is Association for Molecular Pathology et al. v. US Patent and Trademark Office, otherwise known as "the BRCA1 case", which seeks to overturn a patent held by Myriad Genetics on a genetic sequence correlated with breast cancer.

Here is a brief summary of what follows: I have never understood how naturally occurring genes can be patentable, but at present patents are the only way to stake out a property right on genes that are hacked or, dare I say it, "engineered".  So until IP law is changed to allow some other form of protection on genes, patents are it.

The ACLU is requesting a summary judgment that the patent in question be overturned without a trial.  Success in that endeavor would have immediate and enormous effect on the biotech industry as a whole, and I doubt the ACLU is going to get that in one go.  (Here is the relevant recent ACLU press release.)

However, the lawsuit explicitly addresses the broader question of whether any patents should have been granted in the first place on human genes.  This gets at the important question of whether isolating and purifying a bit of natural DNA counts as an invention.  Myriad is arguing that moving DNA out of the human genome and into a plasmid vector counts as sufficient innovation.  This has been at the core of arguments supporting patents on naturally occurring genes for decades, and it has never made sense to me for several reasons.  First, changing the context of a naturally occurring substance does not constitute an invention -- purifying oxygen and putting it in a bottle would never be patentable.  US case law is very clear on this matter.  Second, moving the gene to a new context in a plasmid or putting it into a cell line for expression and culturing doesn't change its function.  In fact, the whole point of the exercise would be to maintain the function of the gene for study, which is sort of the opposite of invention.  Nonetheless, Myriad wants to maintain its monopoly.  But their arguments just aren't that strong.

GenomeWeb reports that defense attorney Brian Poissant argued that "'women would not even know they had [the] BRCA gene if it weren't discovered' under a system that incentivizes patents."  This is, frankly, and with all due respect, a manifestly stupid argument.  Mr. Poissant is suggesting that all of science and technology would stop without the incentive of patents.  Given that most research doesn't result in a patent, and given that most patent applications are rejected, Mr. Poissant's argument is on its face inconsistent with reality.  He might have tried to argue more narrowly that developing a working diagnostic assay requires a guarantee on investment through the possession of the monopoly granted by a patent.  But he didn't do that.  To be sure, the assertion that the particular gene under debate in this case would have gone undiscovered without patents is an untestable hypothesis.  But does Mr. Poissant really want the judge to believe that scientists around the world would have let investigation into that gene and disease lie fallow without the possibility of a patent?  As I suggested above, it just isn't a strong argument.  But we can grind it further into the dust.

Mr. Poissant also argued "that if a ruling were as broadly applied here as the ACLU would like then it could 'undermine the entire biotechnology sector.'"  This is, at best, an aggressive overgeneralization.  As I have described several times over the past couple of years (here and here, for starters), even drugs are only a small part of the revenues from genetically modified systems.  Without digging into the undoubtedly messy details, a quick troll of Google suggests that molecular diagnostics as a whole generate only $3-4 billion a year, and at a guess DNA tests are probably a good deal less than half of this.  But more importantly, of the nearly ~2% of US GDP (~$220-250 billion) presently derived from biological technologies, the vast majority comes from drugs, plants, or bacteria that have been hacked with genes that themselves are hacked.  That is, both the genes and the host organisms have been altered in a way that is demonstrably dependent on human ingenuity.  What all this means is that only a relatively small fraction of "the entire biotechnology sector" is related to naturally occurring genes in the first place.
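
To put those numbers side by side (midpoints of the ranges above, plus my guess about the DNA-test slice):

```python
# Rough proportions from the figures above; all inputs are the
# ballpark estimates quoted in the text, not audited data.

biotech_revenue = 235e9       # ~2% of US GDP, midpoint of $220-250B
molecular_dx = 3.5e9          # molecular diagnostics, midpoint of $3-4B
dna_tests = molecular_dx / 2  # generous guess at the DNA-test slice

print(f"molecular diagnostics: {molecular_dx / biotech_revenue:.1%} of biotech revenue")
print(f"DNA tests (guess):     {dna_tests / biotech_revenue:.2%}")
```

A percent or so of the sector, give or take, is what Mr. Poissant would have the court believe props up the whole thing.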

I perused some of the court filings (via the Wired article), and the defense needs to up its game.  Perhaps they think the weight of precedent is on their side.  I would not be as confident as they are. 

But neither is the plaintiff putting its best foot forward.  Even though I like the analysis comparing DNA patents to attempts to patent fresh fruit, it is unclear to me that the ACLU is being sufficiently careful with both its logic and its verbiage.  In the press release, ACLU attorney Chris Hansen is quoted as saying "Allowing patents on genetic material imposes real and severe limits on scientific research, learning and the free flow of information."  GenomeWeb further quotes the ACLU's Hansen as saying "Patenting human genes is like patenting e=mc2, blood, or air."

As described above, I agree that patenting naturally occurring genes doesn't make a lot of sense.  But we need some sort of property right as an incentive for innovators.  Why should I invest in developing a new biological technology, relying on DNA sequences that have never occurred in nature, if anybody can make off with the sequence (and revenues)?  As it happens, I am not a big fan of patents -- they cost too damn much.  At present, the patent we are pursuing at Biodesic is costing about ten times as much as the capital cost of developing the actual product.  Fees paid to lawyers account for 90% of that.  If it were realistically possible to engage the patent office without a lawyer, then the filing fees would be about the same as the capital cost of development, which seems much more reasonable to me.
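
The arithmetic there is worth making explicit, because it is almost comically tidy:

```python
# Consistency check on the patent cost figures above (normalized units).
development = 1.0                   # capital cost of developing the product
patent_total = 10 * development     # the patent costs ~10x development
lawyer_fees = 0.9 * patent_total    # ~90% of patent costs go to lawyers
filing_fees = patent_total - lawyer_fees
print(filing_fees)                  # 1.0 -- filing fees alone ~= development cost
```

The fees-to-lawyers line item is the whole story.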

I go into these issues at length in the book.  Unfortunately, without Congressional action, there doesn't seem to be much hope for improvement.  And, of course, the direction of any Congressional action will be dominated by large corporations and lawyers.  So much for the little guy.

Are We Cutting Off Our GM Nose to Spite Our Face?

News today that a federal judge has rejected the approval of GM sugar beets by the USDA.  The ruling stated that the government should have done an environmental impact statement, and is similar to a ruling two years ago that led to halting the planting of GM alfalfa.  As in that case, according to the New York Times, "the plaintiffs in the [sugar beet] lawsuit said they would press to ban planting of the biotech beets, arguing that Judge White's decision effectively revoked their approval and made them illegal to grow outside of field trials."  The concern voiced by the plaintiffs, and recognized by the judge, is that pollen from the GM beets might spread transgenes that contaminate GM-free beets.

A few other tidbits from the article: sugar beets now supply about half the US sugar demand, and it seems that GM sugar beets account for about 95% of the US crop (I cannot find any data on the USDA site to support the latter claim).  A spokesman for the nation's largest sugar beet processor claims that food companies, and consumers, have completely accepted sugar from the modified beets -- as they should, because it's the same old sugar molecule. 

I got lured into spending most of my day on this because I noticed that the Sierra Club was one of the plaintiffs.  This surprised me, because the Sierra Club is less of a noisemaker on biotech crops than some of the co-plaintiffs, and usually focuses more on climate issues.  Though there is as yet no press release, digging around the Sierra Club site suggests that the organization wants all GM crops to be tested and evaluated with an impact statement before approval.  But my surprise also comes in part because the best review I can find of GM crops suggests that their growing use is coincident with a substantial reduction in soil loss, carbon emissions, energy use, water use, and overall climate impact -- precisely the sort of technological improvement you might expect the Sierra Club to support.  The reductions in environmental impact -- which range from 20% to 70%, depending on the crop -- come from "From Field to Market" (PDF) published earlier this year by the Keystone Alliance, a diverse collection of environmental groups and companies.  Recall that according to USDA data GM crops now account for about 90% of cotton, soy, and corn.  While the Keystone report does not directly attribute the reduction in climate impacts to genetic modification, a VP at Monsanto recently made the connection explicit (PDF of Kevin Eblen's slides at the 2009 International Farm Management Congress).  Here is some additional reporting/commentary.

So I find myself being pulled into exploring the cost/benefit analysis of biotech crops sooner than I had wanted.  I dealt with this issue in Biology is Technology by punting in the afterword:

The broader message in this book is that biological technologies are beginning to change both our economy and our interaction with nature in new ways.  The global acreage of genetically modified (GM) crops continues to grow at a very steady rate, and those crops are put to new uses in the economy every day.  One critical question I avoided in the discussion of these crops is the extent to which GM provides an advantage over unmodified plants.  With more than ten years of field and market experience with these crops in Asia and North and South America, the answer would appear to be yes.  Farmers who have the choice to plant GM crops often do so, and presumably they make that choice because it provides them a benefit.  But public debate remains highly polarized.  The Union of Concerned Scientists recently released a review of published studies of GM crop yields in which the author claimed to "debunk" the idea that genetic modification will "play a significant role in increasing food production."  The Biotechnology Industry Organization responded with a press release claiming to "debunk" the original debunking.  The debate continues.

Obviously we will all be talking about biotech crops for years to come.  I don't see how we are going to address the combination of 1) the need for more biomass for fuel and materials, 2) the mandatory increase in crop yields necessary to feed human populations, and 3) the need to reduce our climatic impacts, without deploying biotech crops at even larger scales than we have so far.  But I am also very aware that nobody, but nobody, truly understands how a GM organism will behave when released into the wild.

We do live in interesting times.

And the Innovation Continues...Starting with Shake and Bake Meth!

My first published effort at tracking the pace and proliferation of biological technologies (PDF) was published in 2003.  In that paper, I started following the efforts of the DEA and the DOJ to restrict production and use of methamphetamine, and also started following the response to those efforts as an example of proliferation and innovation driven by proscription.

The story started circa 2002 with 95% of meth production in Mom and Pop operations that made less than 5 kg per year.  Then the US Government decided to restrict access to the precursor chemicals and also to crack down on domestic production.  As I described in 2008, these enforcement actions did sharply reduce the number of "clandestine laboratory incidents" in the US, but those actions also resulted in a proliferation of production across the US border, and a consequently greater flow of drugs across the border.  Domestic consumption continued to increase.  The DEA acknowledged that its efforts contributed to the development of a drug production and distribution infrastructure that is, "[M]ore difficult for local law enforcement agencies to identify, investigate, and dismantle because [it is] typically much more organized and experienced than local independent producers and distributors."  The meth market thus became both bigger and blacker.

Now it turns out that the production infrastructure for meth has been reduced to a 2-liter soda bottle.  As reported by the AP in the last few days, "The do-it-yourself method creates just enough meth for a few hits, allowing users to make their own doses instead of buying mass-produced drugs from a dealer."  The AP reporters found that meth-related busts are on the increase in 2/3 of the states examined.  So we are back to distributed meth production -- using methods that are even harder to track and crack than bathtub labs -- thanks to innovation driven by attempts to restrict/regulate/proscribe access to a technology.

And in Other News...3D Printers for All

Priya Ganapati recently covered the latest in 3D printing for Wired.  The Makerbot looks to cost about a grand, depending on what you order, and how much of it you build yourself.  It prints all sorts of interesting plastics.  According to the wiki, the "plastruder" print head accepts 3mm plastic filament, which it melts and extrudes through a much narrower nozzle, so the smallest voxel is probably a fraction of a millimeter on a side.  Alas this is still quite macroscopic, but even if I can't yet print microfluidic components I can imagine all sorts of other interesting applications.  The Makerbot is related to the Reprap, which can now (mostly) print itself.  Combine the two, and you can print a pretty impressive -- and always growing -- list of plastic and metal objects (see the Thingiverse and the Reprap Object Library).

How does 3D printing tie into drug proscription?  Oh, just tangentially, I suppose.  I make more of this in the book.  More power to create in more creative people's hands.  Good luck trying to ban anything in the future.

The Origin of Moore's Law and What it May (Not) Teach Us About Biological Technologies

While writing a proposal for a new project, I've had occasion to dig back into Moore's Law and its origins.  I wonder, now, whether I peeled back enough of the layers of the phenomenon in my book.  We so often hear about how more powerful computers are changing everything.  Usually the progress demonstrated by the semiconductor industry (and now, more generally, IT) is described as the result of some sort of technological determinism instead of as the result of a bunch of choices -- by people -- that produce the world we live in.  This is on my mind as I continue to ponder the recent failure of Codon Devices as a commercial enterprise.  In any event, here are a few notes and resources that I found compelling as I went back to reexamine Moore's Law.

What is Moore's Law?

First up is a 2003 article from Ars Technica that does a very nice job of explaining the whys and wherefores: "Understanding Moore's Law".  The crispest statement within the original 1965 paper is "The number of transistors per chip that yields the minimum cost per transistor has increased at a rate of roughly a factor of two per year."  At its very origins, Moore's Law emerged from a statement about cost, and economics, rather than strictly about technology.

I like this summary from the Ars Technica piece quite a lot:

Ultimately, the number of transistors per chip that makes up the low point of any year's curve is a combination of a few major factors (in order of decreasing impact):

  1. The maximum number of transistors per square inch (or, alternately put, the size of the smallest transistor that our equipment can etch);
  2. The size of the wafer;
  3. The average number of defects per square inch;
  4. The costs associated with producing multiple components (i.e., packaging costs, the costs of integrating multiple components onto a PCB, etc.).

In other words, it's complicated.  Notably, the article does not touch on any market-associated factors, such as demand and the financing of new fabs.
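
To see how a cost minimum emerges at all, consider a toy model: bigger chips amortize packaging costs over more transistors, but yield falls off with die area. Here is a minimal sketch with a standard Poisson-type yield model; every number is invented purely for illustration:

```python
import math

# Toy cost-per-transistor model in the spirit of the list above. The
# wafer, defect, and packaging numbers are invented; only the shape
# of the curve matters.
WAFER_COST = 1000.0    # $ per processed wafer
WAFER_AREA = 700.0     # usable cm^2 per wafer
DEFECTS = 0.5          # defects per cm^2
DENSITY = 1e6          # transistors per cm^2 at this process node
PACKAGING = 0.50       # $ to package and integrate one chip

def cost_per_transistor(n):
    area = n / DENSITY                        # die area, cm^2
    dies_per_wafer = WAFER_AREA / area
    die_yield = math.exp(-DEFECTS * area)     # Poisson yield model
    die_cost = WAFER_COST / (dies_per_wafer * die_yield)
    return (die_cost + PACKAGING) / n

best = min((cost_per_transistor(2 ** k), 2 ** k) for k in range(10, 26))
print(f"minimum: ${best[0]:.2e}/transistor at {best[1]:,} transistors/chip")
```

Shrink the transistors (raise DENSITY) or clean up the line (lower DEFECTS) and the minimum shifts to more transistors per chip at a lower unit cost; string those annual minima together and you have drawn Moore's curve.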

The Wiki on Moore's Law has some good information, but isn't very nuanced.

Next, here is an excerpt from an interview Moore did with Charlie Rose in 2005:

Charlie Rose:     ...It is said, and tell me if it's right, that this was part of the assumptions built into the way Intel made its projections. And therefore, because Intel did that, everybody else in the Silicon Valley, everybody else in the business did the same thing. So it achieved a power that was pervasive.

Gordon Moore:   That's true. It happened fairly gradually. It was generally recognized that these things were growing exponentially like that. Even the Semiconductor Industry Association put out a roadmap for the technology for the industry that took into account these exponential growths to see what research had to be done to make sure we could stay on that curve. So it's kind of become a self-fulfilling prophecy.

Semiconductor technology has the peculiar characteristic that the next generation always makes things higher performance and cheaper - both. So if you're a generation behind the leading edge technology, you have both a cost disadvantage and a performance disadvantage. So it's a very non-competitive situation. So the companies all recognize they have to stay on this curve or get a little ahead of it.

Keeping up with 'the Law' is as much about the business model of the semiconductor industry as about anything else.  Growth for the sake of growth is an axiom of western capitalism, but it is actually a fundamental requirement for chipmakers.  Because the cost per transistor is expected to fall exponentially over time, you have to produce exponentially more transistors to maintain your margins and satisfy your investors.  Therefore, Intel set growth as a primary goal early on.  Everyone else had to follow, or be left by the wayside.  The following is from the recent Briefing in The Economist on the semiconductor industry:

...Even the biggest chipmakers must keep expanding. Intel today accounts for 82% of global microprocessor revenue and has annual revenues of $37.6 billion because it understood this long ago. In the early 1980s, when Intel was a $700m company--pretty big for the time--Andy Grove, once Intel's boss, notorious for his paranoia, was not satisfied. "He would run around and tell everybody that we have to get to $1 billion," recalls Andy Bryant, the firm's chief administrative officer. "He knew that you had to have a certain size to stay in business."

Grow, grow, grow

Intel still appears to stick to this mantra, and is using the crisis to outgrow its competitors. In February Paul Otellini, its chief executive, said it would speed up plans to move many of its fabs to a new, 32-nanometre process at a cost of $7 billion over the next two years. This, he said, would preserve about 7,000 high-wage jobs in America. The investment (as well as Nehalem, Intel's new superfast chip for servers, which was released on March 30th) will also make life even harder for AMD, Intel's biggest remaining rival in the market for PC-type processors.

AMD got out of the atoms business earlier this year by selling its fab operations to a sovereign wealth fund run by Abu Dhabi.  We shall see how they fare as a bits-only design firm, having sacrificed their ability to push (and rely on) scale themselves.
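
The grow-or-die arithmetic behind all of this is stark when you write it down. Assuming the price per transistor halves with each generation (an illustrative rate; the exact number matters less than the exponential):

```python
# Why chipmakers must grow: assume the price per transistor halves
# each generation (illustrative; the real decline rate varies).
price = 1.0
for gen in range(1, 6):
    price /= 2
    print(f"generation {gen}: price x{price:.4g}, "
          f"need x{1 / price:.0f} unit volume for flat revenue")
```

Five generations in, you are shipping 32 times the volume just to stand still; hence Grove's insistence on getting big and staying big.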

Where is Moore's Law Taking Us?

Here are a few other tidbits I found interesting:

Re the oft-forecast end of Moore's Law, here is Michael Kanellos at CNET grinning through his prose: "In a bit of magazine performance art, Red Herring ran a cover story on the death of Moore's Law in February--and subsequently went out of business."

And here is somebody's term paper (no disrespect there -- it is actually quite good, and is archived at Microsoft Research) quoting an interview with Carver Mead:

Carver Mead (now Gordon and Betty Moore Professor of Engineering and Applied Science at Caltech) states that Moore's Law "is really about people's belief system, it's not a law of physics, it's about human belief, and when people believe in something, they'll put energy behind it to make it come to pass." Mead offers a retrospective, yet philosophical explanation of how Moore's Law has been reinforced within the semiconductor community through "living it":

After it's [Moore's Law] happened long enough, people begin to talk about it in retrospect, and in retrospect it's really a curve that goes through some points and so it looks like a physical law and people talk about it that way. But actually if you're living it, which I am, then it doesn't feel like a physical law. It's really a thing about human activity, it's about vision, it's about what you're allowed to believe. Because people are really limited by their beliefs, they limit themselves by what they allow themselves to believe what is possible. So here's an example where Gordon [Moore], when he made this observation early on, he really gave us permission to believe that it would keep going. And so some of us went off and did some calculations about it and said, 'Yes, it can keep going'. And that then gave other people permission to believe it could keep going. And [after believing it] for the last two or three generations, 'maybe I can believe it for a couple more, even though I can't see how to get there'. . . The wonderful thing about [Moore's Law] is that it is not a static law, it forces everyone to live in a dynamic, evolving world.

So the actual pace of Moore's Law is about expectations, human behavior, and, not least, economics, but has relatively little to do with the cutting edge of technology or with technological limits.  Moore's Law as encapsulated by The Economist is about the scale necessary to stay alive in the semiconductor manufacturing business.  To bring this back to biological technologies, what does Moore's Law teach us about playing with DNA and proteins?  Peeling back the veneer of technological determinism enables us (forces us?) to examine how we got where we are today. 

A Few Meandering Thoughts About Biology

Intel makes chips because customers buy chips.  According to The Economist, a new chip fab now costs north of $6 billion.  Similarly, companies make stuff out of, and using, biology because people buy that stuff.  But nothing in biology, and certainly not a manufacturing plant, costs $6 billion.

Even a blockbuster drug, which could bring revenues in the range of $50-100 billion during its commercial lifetime, costs less than $1 billion to develop.  Scale wins in drug manufacturing because drugs require lots of testing, and require verifiable quality control during manufacturing, which costs serious money.

Scale wins in farming because you need...a farm.  Okay, that one is pretty obvious.  Commodities have low margins, and unless you can hitch your wagon to "eat local" or "organic" labels, you need scale (volume) to compete and survive.

But otherwise, it isn't obvious that there are substantial barriers to participating in the bio-economy.  Recalling that this is a hypothesis rather than an assertion, I'll venture back into biofuels to make more progress here.

Scale wins in the oil business because petroleum costs serious money to extract from the ground, because the costs of transporting that oil are reduced by playing a surface-to-volume game, and because thermodynamics dictates that big refineries are more efficient refineries.  It's all about "steel in the ground", as the oil executives say -- and in the deserts of the Middle East, and in the Straits of Malacca, etc.  But here is something interesting to ponder: oil production may have maxed out at about 90 million barrels a day (see this 2007 article in the FT, "Total chief warns on oil output").  There may be lots of oil in the ground around the world, but our ability to move it to market may be limited.  Last year's report from Bio-era, "The Big Squeeze", observed that since about 2006, the petroleum market has in fact relied on biofuels to supply volumes above the ~90 million per day mark.  This leads to an important consequence for distributed biofuel production that only recently penetrated my thick skull.

Below the 90 million barrel threshold, oil prices fall because supply will generally exceed demand (modulo games played by OPEC, Hugo Chavez, and speculators).  In that environment, biofuels have to compete against the scale of the petroleum markets, and margins on biofuels get squeezed as the price of oil falls.  However, above the 90 million per day threshold, prices start to rise rapidly (perhaps contributing to the recent spike, in addition to the actions of speculators).  In that environment, biofuels are competing not with petroleum, but with other biofuels.  What I mean is that large-scale biofuels operations may have an advantage when oil prices are low because large-scale producers -- particularly those making first-generation biofuels, like corn-based ethanol, that require lots of energy input -- can eke out a bit more margin through surface to volume issues and thermodynamics.  But as prices rise, both the energy to make those fuels and the energy to move those fuels to market get more expensive.  When the price of oil is high, smaller scale producers -- particularly those with lower capital requirements, as might come with direct production of fuels in microbes -- gain an advantage because they can be more flexible and have lower transportation costs (being closer to the consumer).  In this price-volume regime, petroleum production is maxed out and small scale biofuels producers are competing against other biofuels producers since they are the only source of additional supply (for materials, as well as fuels).
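
Here is a toy version of that argument. Every number below is invented, and the model is nothing more than "revenues track the oil price, and so do energy and shipping costs, with different weights for different producers"; the only point is the qualitative crossover:

```python
# Toy margin model for the two price regimes described above. All
# parameters are invented for illustration; only the crossover matters.

def margin(oil_price, fixed, energy, transport):
    # Biofuel sells at a price tracking oil; energy inputs and shipping
    # also track the oil price, with producer-specific weights.
    return oil_price - (fixed + (energy + transport) * oil_price)

for p in (40, 90, 140):  # $/barrel-equivalent
    big = margin(p, fixed=10, energy=0.50, transport=0.20)    # corn-ethanol scale play
    small = margin(p, fixed=25, energy=0.30, transport=0.05)  # local microbial producer
    print(f"${p:>3}/bbl: large-scale {big:+6.1f}, small-scale {small:+6.1f}")
```

Below the crossover price, the scale player's lower per-barrel processing cost carries the day; above it, the energy- and transport-heavy cost structure eats the scale advantage and the flexible local producer wins. Again, toy numbers, but the regimes are the point.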

This is getting a bit far from Moore's Law -- the section heading does contain the phrase "meandering thoughts" -- so I'll try to bring it back.  Whatever the origin of the trends, biological technologies appear to be the same sort of exponential driver for the economy as are semiconductors.  Chips, software, DNA sequencing and synthesis: all are infrastructure that contribute to increases in productivity and capability further along the value chain in the economy.  The cost of production for chips (especially the capital required for a fab) is rising.  The cost of production for biology is falling (even if that progress is uneven, as I observed in the post about Codon Devices).  It is generally becoming harder to participate in the chip business, and it is generally becoming easier to participate in the biology business.  Paraphrasing Carver Mead, Moore's Law became an organizing principle of an industry, and a driver of our economy, through human behavior rather than through technological predestination.  Biology, too, will only become a truly powerful and influential technology through human choices to develop and deploy that technology.  But access to both design tools and working systems will be much more distributed in biology than in hardware.  It is another matter whether we can learn to use synthetic biological systems to improve the human condition to the extent we have through relying on Moore's Law.

Cheery Reading: "WORLD AT RISK The Report of the Commission on the Prevention of WMD Proliferation and Terrorism"

In case you haven't seen the headlines the last couple of days, Bob Graham and Jim Talent say we are doomed.  Mostly.  Sort of.  Maybe?

Here is the page to download the report.  In summary, the commission predicts an attack using a weapon of mass destruction within the next five years.  They are more worried about biological weapons than nuclear ones.

Despite the grim tone of most of the text, here is something useful to squawk back at Chicken Little:

...One should not oversimplify or exaggerate the threat of bioterrorism. Developing a biological weapon that can inflict mass casualties is an intricate undertaking, both technically and operationally complex. 

That is among the more optimistic statements in the entire document.

I caught Bob Graham on the Colbert Report last night, and the interview helped me figure out what has been bugging me about the language used by the report and its authors as they talk to the press.  No, not the part where Graham and Colbert -- two grown men in suit and tie -- used copies of the report like GI Joe figures in desktop combat (see 2:30 -- that brief interlude was enlightening in a different way):

The lightbulb went off when Graham said "The most important thing we can do is make sure that we, and the rest of the world, are locking down all the nuclear and biological material so that it is not capable of leaking into the hands of terrorists."

That sounds great, and the report goes on at length about securing BSL-3 and -4 facilities here in the US so that nasty bugs are kept behind locked doors, doors that are guarded by guys with visible guns.  That constitutes a particular kind of deterrence, which is fine.  As I have spent far too much of my life working in clean rooms trussed up in bunny suits, I can only feel sympathy for the folks who will have to deal with that security and suit up to work in the lab every day.  But those bugs are dangerous, and biosafety in those facilities is no joke.  The near-term threat is undoubtedly from bugs that already exist in labs.

But this is where things start to go off the rails for me.  Graham didn't have a lot of time with Colbert, but his language was disturbingly absolute.  I am concerned the Commission's views on biological technologies are dysfunctionally bipolar.  Here is what I mean: even though the text of the report reassures me that the people who actually put words on the page have a sense of how far and how fast biological technologies are proliferating (which I get to below), the language used by the official spokesman involves "locking down all the biological materials".  I worry that "locking down" anything might be construed in Washington DC, or by the populace, as constituting sufficient security measures.  See my article from last year "Laying the foundations for a bio-economy" for an update on what has happened as a result of trying to "lock down" methamphetamine production in the US.  Short summary: There is more meth available on the streets, and the DEA acknowledges that its efforts have created an environment in which it actually has worse intelligence about who is making the drug and how it gets distributed.

Frankly, I haven't quite sorted out all of the things that bother me about the report, the way we talk about security in this country, and the inevitable spread of powerful biological technologies.  What follows are some additional notes and ruminations on the matter.   

Here is what the text of the report has to say about the threat from DNA synthesis technologies:

The only way to rule out the harmful use of advances in biotechnology would be to stifle their beneficial applications as well--and that is not a realistic option. Instead, the dual-use dilemma associated with the revolution in biology must be managed on an ongoing basis. As long as rapid innovations in biological science and the malevolent intentions of terrorists and proliferators continue on trajectories that are likely to intersect sooner or later, the risk that biological weapons pose to humanity must not be minimized or ignored.

Hmm...well, yes.  I'm glad they acknowledge the fact that in order to benefit from the technology it must be developed further, and that security through proscription will retard that innovation.  I am relieved that this part of the report's recommendations does not include measures I believe would be immediately counterproductive.  The authors later write:

The more that sophisticated capabilities, including genetic engineering and gene synthesis, spread around the globe, the greater the potential that terrorists will use them to develop biological weapons. The challenge for U.S. policymakers is to prevent that potential from becoming a reality by keeping dangerous pathogens--and the equipment, technology, and know-how needed to weaponize them--out of the hands of criminals, terrorists, and proliferant states. 

The charge in the last sentence sounds rather infeasible to me.  Anyway, the Commission then puts responsibility for security on the heads of scientists and engineers working in the life sciences: 

The choice is stark. The life sciences community can wait until a catastrophic biological attack occurs before it steps up to its security responsibilities. Or it can act proactively in its own enlightened self-interest, aware that the reaction of the political system to a major bioterrorist event would likely be extreme and even draconian, resulting in significant harm to the scientific enterprise.

...ACTION: The Department of Health and Human Services and Congress should promote a culture of security awareness in the life sciences community.

Members of the life sciences community--universities, medical and veterinary schools, nongovernmental biomedical research institutes, trade associations, and biotechnology and pharmaceutical companies--must foster a bottom-up effort to sensitize researchers to biosecurity issues and concerns. Scientists should understand the ethical imperative to "do no harm," strive to anticipate the potential consequences of their research, and design and conduct experiments in a way that minimizes safety and security risks.

(This bit sounds like the Commission heard from Drew Endy.)

...The currently separate concepts of biosafety and biosecurity should be combined into a unified conceptual framework of laboratory risk management. This framework should be integrated into a program of mandatory education and training for scientists and technicians in the life sciences field, whether they are working in the academy or in industry. Such training should begin with advanced college and graduate students and extend to career scientists. The U.S. government should also fund the development of educational materials and reference manuals on biosafety and biosecurity issues. At the same time, the responsibilities of laboratory biosafety officers should be expanded to include laboratory security and oversight of select agents, and all biosafety officers should be tested and certified by a competent government authority.

The phrase "culture of security awareness" appears frequently.  This creeps me out more than a bit, particularly given our government's recent exhortations to keep an eye on our neighbors.  You never know who might be a sleeper.  Or a sleep-walking bioterrorist.  I make this point not entirely in jest.  Who wants to live in such a paranoid culture?  Particularly when it is not at all clear that such paranoia makes us safer.

To be fair, I called for something not too dissimilar in 2003 in The Pace and Proliferation of Biological Technologies.  It only makes sense to keep an eye out for potential bioterror and bioerror, and we should have some sort of educational framework to make sure that people are aware of the potential hazards as they hack DNA.  But seeing that language in a report from a legislatively-established body makes me start imagining Orwellian propaganda posters on the walls of labs around the country.  Ick.  That is no way to foster communication and innovation.

On a different topic, here is something that opened my eyes.  The report contains a story about a Russian -- someone in charge of weighing out uranium for his coworkers -- who was able to continuously steal small amounts of fissile material because the scales were officially recognized to be calibrated only to within 3%.  By withholding a little each time, he amassed a stash of 1.6 kg of "90 percent enriched uranium", while the official books showed no missing material.  Fortunately the fellow was caught, because while he was a clever thief he was a not-so-clever salesman.  As part of subsequent non-proliferation efforts, the US government paid for more accurate scales in order to prevent another incident of stealing "a bomb's worth of uranium, bit by bit".  Holy shit.

It is nice to hear that this sort of leak has been plugged for the nuclear threat.  I hope our government clearly understands that such plugs are few and far between for biological threats.

"Tracking the spread of biological technologies"

I have an editorial in the Bulletin of the Atomic Scientists dated 21 November, 2008 (Open Access).

Regular readers will recall that I do not see that history provides useful examples of effective regulation of distributed technologies.  Here are the final 'graphs from the editorial:

The counterargument typically relies on inspiring fear and encouraging proactivity. We cannot wait for perfect policy to implement security measures, the thinking goes. Yet this argument obscures the investigation and debate that must come first: Is it at all possible to slow down the actions of potential aggressors? Will regulation increase knowledge of threats or further obscure them? Finally, will these efforts, whether successful or not, also retard crucial research required to produce countermeasures for both natural and artificial threats?

Most proponents of regulation have not addressed these questions. Greater knowledge of potential threats is clearly desirable. Reducing the threat from bioerror and bioterror is an even more important goal. Formulating effective policy requires acknowledging the pace and proliferation of biological technologies as well as carefully weighing any potential negative impacts of action.