Massive Technology Failure

If anybody reading this has emailed me within the last 14 days, I haven't received it.  After 5 years, and many, many miles, my trusty 12" Powerbook had a serious hardware failure a couple of nights ago.  Took all my email with it.  Then, when I was trying to rescue some data, the failing Powerbook took out my backup drive.  Ouch.  I seem to have a talent for finding really good technology problems...

I have normal email access now, but if you don't receive a reply from me on something sent before this weekend, please send again.

And remember, kids, back up, back up, and back up again.

Publication of the Venter Institute's synthetic bacterial chromosome

Craig Venter and his crew have just published a paper in Science demonstrating synthesis of a complete bacterial chromosome.  Venter let the cat out of the bag late last year in an interview with The Guardian, which I wrote about a few weeks ago, here: "Updated Longest Synthetic DNA Plot".

As a technical achievement, the paper, by Gibson, et al., is actually quite nice.  The authors ordered ~5kB gene cassettes from Blue Heron, DNA 2.0, and GENEART, and then used a parallel method to assemble those cassettes into the ~580kB full genome in just a few steps.  They contrast their method, which may be generalizable to any sequence, with previous research:

All [the previous] methods used sequential stepwise addition of segments to reconstruct a donor genome within a recipient bacterium. The sequential nature of these constructions makes such methods slower than the purely hierarchical scheme that we employed.

The Itaya and Holt groups found that the bacterial recipient strains were unable to tolerate some portions of the donor genome to be cloned, for example ribosomal RNA operons. In contrast, we found that the M. genitalium ribosomal RNA genes could be stably cloned in E. coli BACs. We were able to clone the entire M. genitalium genome, and also to assemble the four quarter genomes in a single step, using yeast as a recipient host. However, we do not yet know how generally useful yeast will be as a recipient for bacterial genome sequences.
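The speed advantage of the hierarchical scheme is easy to see with a back-of-the-envelope sketch (my numbers, not the paper's: I'm approximating the build as ~116 cassettes of ~5 kB for the ~580 kB genome, and assuming purely pairwise joins):

```python
import math

def sequential_steps(n_cassettes):
    """Stepwise addition: each join adds one cassette to the growing construct."""
    return n_cassettes - 1

def hierarchical_rounds(n_cassettes):
    """Pairwise hierarchical assembly: the number of pieces halves each round."""
    return math.ceil(math.log2(n_cassettes))

# Rough numbers for the M. genitalium build: ~580 kB genome from ~5 kB cassettes
n = math.ceil(580_000 / 5_000)  # ~116 cassettes

print(sequential_steps(n))     # 115 serial joining steps
print(hierarchical_rounds(n))  # 7 rounds of parallel joins
```

The actual cassette count and staging in the paper differ in detail, but the scaling is the point: serial construction grows linearly with the number of pieces, hierarchical construction only logarithmically.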

The team was evidently unable to successfully use the synthetic chromosome to boot up a new organism.  It turns out that one of the techniques they developed in fact gets in the way of finishing this final step.  There is an interesting note, added in proof, at the end of the paper:

While this paper was in press, we realized that the TARBAC vector in our sMgTARBAC37 clone interrupts the gene for the RNA subunit of RNase P (rnpB). This confirms our speculation that the vector might not be at a suitable site for subsequent transplantation experiments.

So, Gibson, et al., made really interesting technical progress in developing a method to assemble large, (seemingly) arbitrary sequences.  However, their goal of booting up a synthetic chromosome using the assembly technique is presently stymied by one of the technologies they are relying on to propagate the large construct in yeast.  As for the goal of "synthetic life" as defined by constructing a working genome from raw materials, they are close, but not quite there.  Given the many different ways of manipulating large pieces of DNA within microbes, it won't be long until the Venter Institute team gets there.

Andrew Pollack of the NYT quotes Venter as saying, “What we are doing with the synthetic chromosome is going to be the design process of the future.”  This is a bit of a stretch, because no one in their right mind is going to synthesize an entire microbial genome for a real engineering project, with real costs, anytime soon.  Any design process that involves writing whole genomes is going to be WAY in the future.  As I wrote in the "Longest Synthetic DNA" post:

The more interesting numbers are, say, 10-50 genes and 10,000-50,000 bases.  This is the size of a genetic program or circuit that will have interesting economic value for many decades to come.  But while assembling synthetic constructs (plasmids) this size is still not trivial, it is definitely old news.  The question is how will the cost for constructs of this size fall, and when can I have that DNA in days or hours instead of weeks?  And how soon before I can have a desktop box that prints synthetic DNA of this length?  As I have previously noted in this space, there is clear demand for this sort of box, which means that it will happen sooner or later.  Probably sooner.

The Gibson, et al., Science paper doesn't say how many person-hours the project took, nor does it say exactly how much they spent on their synthetic construct (presumably they got a nice volume discount).  The fact that the project isn't actually finished demonstrates that this is hardly a practical engineering challenge that will find a role in the economy anytime soon.

That said, I could certainly be wrong about this assertion, particularly if other technical approaches crop up, as may well happen.  In the NYT story Venter is quoted as saying, “I will be equally surprised and disappointed if we can’t do it in 2008.”  And they probably will, but what is the real impact of that success?

The NYT story, by Andrew Pollack, carries the unfortunate title, "Scientists Take New Step Toward Man-Made Life".  Not so much.  Even if Venter and colleagues do get their chromosome working, they will have demonstrated not "man-made" life, but rather a synthetic instruction set running in a pre-existing soup of proteins and metabolites in a pre-existing cell.  It's really no different than getting a synthetic viral genome working in cell culture, which is old news.  Show me a bacterial cell, or something else obviously alive, from an updated Miller-Urey experiment and then I will be really impressed.  Thus the Gibson paper represents a nice technical advance, and a good recipe for doing more science, but not much in the way of a philosophical earthquake.

Without the ability to easily -- very easily -- print genomes and get them into host cells at high efficiency and low cost, building synthetic genomes will remain just interesting science.

The New York Times gets a story title backwards

The story itself is right on the money, mind you -- I highly recommend reading it -- but the title, "An Oil Quandary: Costly Fuel Means Costly Calories", is bass-ackwards.  That title, probably courtesy of an editor, rather than the reporters, would be accurate for ethanol but has the effect before the cause for vegetable oil-based biodiesel.

Indeed, the story is the same as the one Bio-era has been telling for the last year.  "Chomp! Chomp! Fueling a new agribusiness", written (mostly by Jim Newcomb) for CLSA, nailed all the trends early on: rising income, rising meat consumption, grain use for food and feed, water supply issues, carbon emissions, and government mandates for biofuel use.  It all adds up to a big mess, for the time being.

As I wrote last year while in Hong Kong (See "Asia Biofuels Travelblog, Pt. 2"), after having just been on the ground in Malaysia and Singapore, food use has driven the price of palm and other vegetable oils well above the wholesale price for finished petrodiesel.  Planting more oil palms, even if done on land that has already been cleared (i.e., not on virgin jungle or on drained peat bogs), is unlikely to ease price pressures because demand is climbing much faster than supply could possibly keep up (see the "Travelblog" post for some rough numbers).  In other words, there is plenty of price pressure to keep cutting down forests and draining peat bogs, carbon emissions be damned.  Prices are probably going to stay high for quite a while.

As the NYT story notes, biodiesel refineries are sitting idle all over the place because the feedstock is way too expensive to turn into fuel.  Far better, and more profitable, to eat it.  The heart of the matter is that, as the Times says, "Huge demand for biofuels has created tension between using land to produce fuel and using it for food."  The arable land is the key issue, and the only way the ongoing collision between food and fuel is going to be resolved is by using non-food feedstocks to make fuel, growing those feedstocks on land that cannot be used to produce food at market prices, and producing biofuels using new technologies.  Synthetic biology, various grasses, and sugar from Brazil seem to be the way to go (see my earlier posts "The Need for Fuels Produced Using Synthetic Biology" and "The Intersection of Biofuels and Synthetic Biology").  Hmmm...I still need to post something about switchgrass, miscanthus, and prairies.  Maybe next week.

I'm headed to Houston on Monday for a Roundtable on biofuels run by Bio-era, "Biotech Biofuels & the Future of the Oil Industry".  Companies in the oil industry, agbiotech, and synthetic biology will all be there.  Should be interesting.

High Yield Ethanol Fermentation from Synthesis Gas

The New York Times is reporting that GM has directly invested in a waste-to-ethanol company in order to help supply biofuels.  Coskata (another Khosla-funded company) has a proprietary combined industrial-biological process for using synthesis gas (CO and H2) to produce ethanol.  Here is the NYT story, by Matthew Wald.

This announcement is interesting to me for several reasons.  First, it turns out I was told all about the Coskata process late last year (though not the GM investment), but I was so busy I didn't tune in sufficiently and so completely missed the significance.  Oops.

Second, in about 2002, I suggested to GM's upper management that they should start thinking of themselves as a "transportation solutions" company rather than just a company that sells cars, and that they invest in providing alternative fuels to ensure that their advanced technology cars would have something to burn. (As the NDA has long since expired, I will connect the dots and point interested readers to an earlier post of mine on producing hydrogen from waste.)  Think W. Edwards Deming and buggy whip manufacturers -- over the next two decades selling cars by themselves is rapidly going to become a losing business model in developed countries as manufacturing practices change and as carbon becomes a bigger issue.  I don't claim that my suggestion five years ago is what got GM started down this road, but I am certainly interested to see that they have made the decision.

The NYT story quotes a number of people commenting on GM's investment, and I think this is the most interesting one, because it is so wrong:

“I don’t really see the logic of it,” said Christopher Flavin, president of the Worldwatch Institute, a Washington environmental group. “It’s not particularly an industry they know well, or have expertise in.” Companies like G.M., he said, could be more effective by concentrating on the fuel efficiency of their products.

GM is now facing enormous pressure to reduce the carbon emissions from its vehicles, in part by increasing fuel efficiencies.  But that isn't the whole story.  Carbon emissions can fall much faster by switching to new fuels, but the extra cost that goes into building engines able to burn those fuels is wasted without access to the fuel.  My earlier suggestion to GM was in the context of using hydrogen as that fuel, but the argument is the same for any other fuel.  Without a sufficient supply of the fuel, why would anyone bother to pay extra for a vehicle that could have lower emissions if only the fuel were available? 

The Coskata website is rather thin on details, but basically they describe a microbe that can convert CO and H2 to ethanol on the fly.  I am absolutely certain the NDA covering the conversation in which I learned about this is still in effect, which limits my ability to say more than what has been published elsewhere.
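The overall chemistry, at least, is simple enough to mass-balance even if the biology is proprietary.  One plausible net stoichiometry for microbial syngas-to-ethanol conversion is 2 CO + 4 H2 -> C2H5OH + H2O (an assumption on my part; Coskata has published nothing about its actual pathway).  A quick atom-count check:

```python
# Hypothetical net reaction for syngas-to-ethanol fermentation:
#   2 CO + 4 H2 -> C2H5OH + H2O
# (illustrative stoichiometry only; Coskata's actual chemistry is not public)

def scale(formula_counts, coeff):
    """Scale a dict of element counts by a stoichiometric coefficient."""
    return {el: n * coeff for el, n in formula_counts.items()}

def total(side):
    """Sum element counts over all species on one side of the reaction."""
    out = {}
    for counts in side:
        for el, n in counts.items():
            out[el] = out.get(el, 0) + n
    return out

CO      = {"C": 1, "O": 1}
H2      = {"H": 2}
ethanol = {"C": 2, "H": 6, "O": 1}
water   = {"H": 2, "O": 1}

reactants = [scale(CO, 2), scale(H2, 4)]
products  = [scale(ethanol, 1), scale(water, 1)]

print(total(reactants))  # {'C': 2, 'O': 2, 'H': 8}
print(total(products))   # {'C': 2, 'H': 8, 'O': 2} -- same atoms, balanced
```

Whatever the real pathway looks like inside the bug, the carbon in the ethanol has to come from the CO, which is what makes "any organic trash" a candidate feedstock once it has been gasified.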

What I can say is that, if the technology proves to be as efficient and versatile as is claimed, this strategy makes a great deal of sense.  From the NYT story:

If it can be done economically, the Coskata process has three large advantages over corn-based ethanol, according to General Motors. First, it uses a cheaper feedstock that would not compete with food production. Second, the feedstock is available all over the country, a crucial point since ethanol cannot be shipped from the corn belt to areas of high gasoline demand in existing pipelines.

As I have written in this space many times (see, for example, "The Need for Fuels Produced Using Synthetic Biology"), getting away from competition with food is the most important next step in increasing biofuel production.  Diversifying feedstocks to include waste products is critical.

Finally, it is interesting to speculate about the possibility of combining Coskata's synthesis gas eating microbe with the non-fermentative biofuel synthesis I wrote about last week.  Fermentation produces lots of stuff besides ethanol, and ethanol is toxic to most microbes above minimal concentrations.  Besides, ethanol sucks as a biofuel.  So if you could patch the biosynthesis technology that Gevo (another Khosla-funded company, hmmm...) just licensed from UCLA into a bug that eats synthesis gas, you would have a generalized method for taking any organic trash and converting it via synthesis gas into many useful materials, starting with fuels.  Put all together and what do you get?

Say it all together now: "Distributed Biological Manufacturing" (PDF).

High yield biofuels production using engineered "non-fermentative" pathways in microbes

A paper in last week's Nature demonstrated a combination of genetic modifications that allowed E. coli to produce isobutanol from glucose at 86% of the theoretical maximum yield.  Please people, slow down!  How am I supposed to finish writing my book if you keep innovating at this rate?

I jest, of course.  Mostly.

Atsumi, et al., exploit non-fermentative synthesis to maximize the production of molecules that could be used as biofuels, while minimizing parasitic side reactions that serve to "distract" their microbial work horse (here is the abstract in Nature).  The authors deleted 7 native genes, added several more from yeast and other microbes, and also added a plasmid containing what looks like another 6 or so genes and regulatory elements.  The plasmid was used to overexpress genes in a native E. coli synthesis pathway.  So call it ~15 total changes.
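For context on that 86% figure, the theoretical maximum is easy to work out (my arithmetic, not the paper's): the overall conversion is one glucose (C6H12O6) to at most one isobutanol (C4H10O) plus two CO2 and one water, which caps the mass yield at about 0.41 g of isobutanol per gram of glucose.

```python
# Theoretical mass yield for glucose -> isobutanol (illustrative arithmetic):
#   C6H12O6 -> C4H10O + 2 CO2 + H2O   (1 mol glucose -> 1 mol isobutanol)

M = {"C": 12.011, "H": 1.008, "O": 15.999}  # standard atomic weights, g/mol

def molar_mass(counts):
    return sum(M[el] * n for el, n in counts.items())

glucose    = molar_mass({"C": 6, "H": 12, "O": 6})  # ~180.2 g/mol
isobutanol = molar_mass({"C": 4, "H": 10, "O": 1})  # ~74.1 g/mol

theoretical = isobutanol / glucose  # max grams isobutanol per gram glucose
reported    = 0.86 * theoretical    # 86% of max, the yield Atsumi et al. report

print(round(theoretical, 2))  # ~0.41
print(round(reported, 2))     # ~0.35
```

In other words, the engineered strain is converting sugar to fuel at roughly 0.35 g/g, which is quite close to the stoichiometric ceiling.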

While the various genetic changes were made using traditional cloning techniques, rather than by synthesis, I would still put this project squarely in the category of synthetic biology.  True, there is no evident quantitative modeling, but it is still a great story.  I am impressed by the flavor of the article, which makes it sound like the project was cooked up by staring at a map of biochemical pathways (here is a good one at ExPASy -- you can click on the map for expanded views) and saying, "Hmmm... if we rewired this bit over here, and deleted that bit over there, and then brought in another bit from this other bug, then we might have something."  Molecular Legos, in other words.

As far as utility in the economy goes, the general method of engineering a biosynthesis pathway to produce fuels has, according to the press release from UCLA, been licensed to Gevo.  Gevo was founded by Frances Arnold, Matthew Peters, and Peter Meinhold of the California Institute of Technology and was originally funded by Vinod Khosla.

It is not clear how much of the new technology can be successfully claimed in a patent.  DuPont had an application published last spring that claims bugs engineered to produce fuels via the Ehrlich pathway, and it appears to be very similar to what is in the Atsumi paper described above.  Here is the DuPont application at the USPTO, oddly entitled "Fermentive production of four carbon alcohols".  The "four-carbon" bit might be the out for the UCLA team and Gevo, as they demonstrate ways to build molecules with four and more carbons.  Time, and litigation, will tell who has the better claims.  And then both groups probably have to worry about patents held by Amyris, which is probably also claiming the use of engineered metabolic synthesis for biofuels.  Ah, the joys of highly competitive capitalism.  But, really, it is all good news because all the parties above are trying to move rapidly beyond ethanol.

I am no fan of ethanol as a biofuel, as it has substantially lower energy density than gasoline and soaks up water even better than a sponge.  If ethanol were the only biofuel around, then I suppose we would have to settle for it despite the disadvantages.  But, obviously, new technologies are rapidly being demonstrated that produce other, better, biofuels.  The Atsumi paper serves as yet more evidence that biological technologies will prove a substantial resource in weaning ourselves from fossil fuels (see  my earlier posts "The Need for Fuels Produced Using Synthetic Biology" and "The Intersection of Biofuels and Synthetic Biology").
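To put rough numbers on the energy density complaint (approximate textbook values, not my measurements): ethanol carries about 21 MJ per liter against roughly 32 MJ per liter for gasoline, so a tank of ethanol holds only about two-thirds of the energy.

```python
# Approximate volumetric lower heating values (MJ/L); ballpark literature
# figures, good to maybe 10%
ETHANOL_MJ_PER_L  = 21.2
GASOLINE_MJ_PER_L = 32.0

ratio = ETHANOL_MJ_PER_L / GASOLINE_MJ_PER_L
print(round(ratio, 2))  # ~0.66 -- roughly 1.5 L of ethanol per L of gasoline

# Isobutanol, for comparison, is much closer to gasoline
ISOBUTANOL_MJ_PER_L = 28.0  # approximate
print(round(ISOBUTANOL_MJ_PER_L / GASOLINE_MJ_PER_L, 2))  # ~0.88
```

That gap, plus ethanol's affinity for water, is why molecules like isobutanol are worth the extra metabolic engineering.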

New method for "bottom-up genome assembly"

Itaya, et al., have published a new method for assembling ~5kB DNA fragments into genome-sized pieces in this month's Nature Methods (PubMed).  Jason Kelly has launched a blog, Free Genes, where he describes the new method.  Welcome to the blogosphere, Jason.

I won't add anything to Jason's post, other than to note that because Itaya's method exploits a recombination mechanism present in a microbe, there is no need to manipulate large pieces of DNA "by hand".  This is a significant advantage over methods that require lots of pipetting between PCR steps, which exposes the growing DNA to fluid shear.  The reliance upon natural mechanisms for assembly might mean the method is better suited to the garage than something that uses fluid transfer.

Finally, building ~5kB segments doesn't appear to be such a big deal at this point.  While Itaya's method isn't completely general, and as described may be a bit slow, it should be widely useful to anyone who has an in-house method for making gene-sized pieces of DNA and who doesn't want to pay a foundry to assemble even larger pieces.

(Update: Oops.  I forgot to add that this sort of thing is just what I suggested in my previous post, when I observed that while Venter may have made excellent progress in building an artificial chromosome he certainly doesn't have a lock on building new organisms.)

Updated "Longest Synthetic DNA" Plot

[Figure: updated "longest synthetic DNA" plot, November 2007]

With the reported completion of a 580 kB piece of DNA by Venter and colleagues, it is time to update another metric of progress in biological technologies.  Assuming the report is true, it provides evidence that the technological ability to assemble large pieces of DNA from the short oligonucleotides produced by DNA synthesizers is keeping up with the productivity enhancements enabled by those synthesizers (see my prior post "Updated, Um, Carlson Curve for DNA Synthesis Productivity").  That said, this is an accomplishment of art and science, not of commerce and engineering.  The methods are esoteric and not yet sufficiently low cost to become widespread.

The news report itself is a couple of months old now.  It has yet to be confirmed by scientific publication of the results, so I am breaking my habit of waiting until I can see the details of the paper before including another point on the plot.  Perhaps I just need something to do as a break from writing my book.

In any event, in the 6 October, 2007 edition of The Guardian, Ed Pilkington reported, "I am creating artificial life, declares US gene pioneer":

The Guardian can reveal that a team of 20 top scientists assembled by Mr Venter, led by the Nobel laureate Hamilton Smith, has already constructed a synthetic chromosome, a feat of virtuoso bio-engineering never previously achieved. Using lab-made chemicals, they have painstakingly stitched together a chromosome that is 381 genes long and contains 580,000 base pairs of genetic code.

It does not appear, from Mr. Pilkington's story, that Venter et al have yet inserted this mammoth piece of DNA into a cell, though Craig Venter is supposedly "100% confident" they can accomplish this, and as a result will boot up a wholly artificial genome running a semi-artificial organism: "The new life form will depend for its ability to replicate itself and metabolise on the molecular machinery of the cell into which it has been injected, and in that sense it will not be a wholly synthetic life form."

The Guardian story includes a comment from the dependably well-spoken Pat Mooney, director of the ETC Group.  Says Mooney,  "Governments, and society in general, is way behind the ball. This is a wake-up call - what does it mean to create new life forms in a test-tube?"

Here is an open letter to Mr. Mooney:

Dear Pat,

It doesn't mean a damn thing.  Except that it helps you raise more money by scaring more people unnecessarily, so that you can go on to scare yet more people.  Have fun with that.

Best Regards,

Rob Carlson

PS Great business model. 

I just can't get really excited about 580 kB of synthetic DNA.  First, while interesting technically, the result is entirely expected.  People keep saying to me that it is really hard to manipulate large pieces of DNA in the lab, and to this I say many things we do are really hard.  Besides, nature has been manipulating large pieces of DNA very successfully for a while now.  Say, three billion years, give or take.  It was inevitable we would learn how to do it. 

Second, I know of a few individuals who are concerned that, because there is insufficient funding for this sort of work, Venter and his crew will now have some sort of lock on the IP for building new organisms.  But it is so very early in this technological game that putting money on the first demonstrated methodology is just silly.  Someone else, probably many different someones, will soon demonstrate alternatives.  Besides, how many times are we going to need to assemble 580,000 bases and 381 genes from scratch?  The capability isn't really that useful, and I don't see that it will become useful anytime soon.

The more interesting numbers are, say, 10-50 genes and 10,000-50,000 bases.  This is the size of a genetic program or circuit that will have interesting economic value for many decades to come.  But while assembling synthetic constructs (plasmids) this size is still not trivial, it is definitely old news.  The question is how will the cost for constructs of this size fall, and when can I have that DNA in days or hours instead of weeks?  And how soon before I can have a desktop box that prints synthetic DNA of this length?  As I have previously noted in this space, there is clear demand for this sort of box, which means that it will happen sooner or later.  Probably sooner.

Third, the philosophical implications of constructing an artificial genome are overblown, in my humble opinion.   It is interesting to see that it works, to be sure.  But the notion that this demonstrates a blow against vitalism, or against other religious conceptions of life is, for me, just overexcitement.  Venter and crew have managed to chemically synthesize a long polymer, a polymer biologically indistinguishable from naturally occurring DNA; so what?  If that polymer runs a cell the same way natural DNA does, as we already knew that it would, so what?  Over the last several millennia religious doctrine has shown itself to be an extremely flexible meme, accommodating dramatic changes in human understanding of natural phenomena.  The earth is flat!  Oh, wait, no problem.  The earth is at the center of the universe!  No?  Okay, we can deal with that.  Evolution is just another Theory!  Bacteria evolve to escape antibiotics?  Okay, God's will.  No problem. I can't imagine it will be any different this time around.

Finally, it is worth asking what, if any, implications there are for the regulatory environment.  The Guardian suggests, "Mr Venter believes designer genomes have enormous positive potential if properly regulated."  This is interesting, especially given Venter's comments last winter at the initial public discussion of "Synthetic Genomics: Options for Governance".  I don't know if his comments are on record anywhere, or whether my own public comments are for that matter, but Venter basically said "Good luck with regulation," and "Fear is no basis for public policy."  In this context, I think it is interesting that Venter is not among the authors of the report.

I just finished writing my own response to "Options for Governance" for my book.  I can't say I am enthusiastic about the authors' conclusions.  The authors purport to only present "options".  But because they examine only additional regulation, and do not examine the policy or economic implications of maintaining the status quo, they in effect recommend regulation.  One of the authors responded to my concerns about the implicit recommendation of regulation with, "This was an oversight."  Pretty damn big oversight.

Today's news provides yet another example of the futility of regulating technologies to putatively improve security.  Despite all the economic sanctions against Iran, despite export restrictions on computer hardware, scientists and engineers in Iran report that they have constructed a modest supercomputer using electronic components sold by AMD.  Here is the story at ITNews (originally via Slashdot).  Okay, so the Iranians only have the ability to run relatively simple weather forecasting software, and it may (may!) be true that export restrictions have kept them from assembling more sophisticated, faster supercomputers. (I have to ask at this point, why would they bother?  They are rolling in dollars.  Why not just pay somebody who has a faster machine to do the weather forecasting for you?  It suggests to me that they have pulled the curtain not from their best machine, but rather from one that used to be used for weapons design and is now gathering dust because they have already built a faster one.)  Extending this security model to biological technologies will be even less successful.

Export restrictions for biological components are already completely full of holes, as anyone who has applied for an account at a company selling reagents will know.  Step 1: Get a business license.  Step 2: Apply for account.  Step 3: Receive reagents in mail.  (If you are in a hurry, skip Step 1; there is always someone who doesn't bother to ask for it anyway.)  This particular security measure is just laughable, and all the more so because any attempt to really enforce the legal restrictions on reselling or shipping reagents would involve intrusive and insanely expensive physical measures that would also completely crimp legitimate domestic sales.  I can only imagine that the Iranians exploited a similar loophole to get their AMD processors, and whatever other hardware they needed.

Well, enough of that.  I have one more chapter to write before I send the book off to reviewers.  Best get to it.

Updated, um, Carlson Curve for DNA Synthesis Productivity

[Figure: updated DNA synthesis productivity plot, November 2007]

It seems that productivity improvements in DNA synthesis have resumed their previous pace.  As I noted in Bio-era's Genome Synthesis and Design Futures, starting in about 2002 there was a pause in productivity improvements enabled by commercially available instruments.

According to the specs and the company reps I met at iGEM 2007, a single Febit "Geniom" synthesizer can crank out about 500,000 bases a day and requires about 30 minutes of labor per run.  It looked to me like the number should be closer to 250 kB per instrument per day, so I made an executive decision and allowed that the 16 synthesizers one person could run in a day could produce 2.5 megabases of single-stranded ~40-mers per day.  This is in part because there is some question about the quality of the sequences produced by the particular chemistry used in the instrument.  The company reps asserted that the Geniom instruments are being adopted by major gene synthesis companies as their primary source of oligos.  Note that running all those instruments would cost you just under US$ 5 million up front, without volume discounts, for 16 of the $300,000 instruments (plus some amount for infrastructure).
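The arithmetic behind those figures is simple enough to sketch (using the estimates above; the 2.5 megabase number is my conservative adjustment, not Febit's spec):

```python
# Back-of-the-envelope for a 16-instrument Geniom "farm", per the estimates above
INSTRUMENTS   = 16
COST_PER_UNIT = 300_000     # US$ per instrument, list price
BASES_PER_DAY = 2_500_000   # my conservative ssDNA estimate for the whole farm
OLIGO_LENGTH  = 40          # ~40-mers

capex  = INSTRUMENTS * COST_PER_UNIT   # up-front hardware cost
oligos = BASES_PER_DAY // OLIGO_LENGTH # distinct oligos per person-day

print(f"${capex:,}")                        # $4,800,000 -- "just under US$ 5 million"
print(f"{oligos:,} oligos per person-day")  # 62,500
```

So one operator could, in principle, turn out on the order of sixty thousand short oligos a day, which is the raw material stream feeding the gene assembly problem discussed next.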

The quality of the DNA becomes particularly important if you are using the single-stranded oligos produced by the synthesizer to assemble a gene length construct.  To reiterate the point, the 2.5 megabases per day consists of short, single-stranded pieces.  The cost -- labor, time, and monetary -- of assembling genes is another matter entirely.  These costs are not really possible to estimate based on publicly available information, as this sort of thing is treated as secret by firms in the synthesis business.  Given that finished genes cost about 10 times as much as oligos, and that synthesis firms are probably making a decent margin on their product, the assembly process might run 5 to 8 times the cost of the oligos, but that is totally a guess.  (Here is a link to a ZIP file containing some of the graphics from the Bio-era report, including cost curves for gene and oligo synthesis.)
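That guess can be made slightly more explicit (the margin figures here are pure speculation, since synthesis firms publish nothing): if a finished gene sells for ~10x the cost of its component oligos, the implied assembly cost depends on the gross margin the firm takes.

```python
# If a finished gene sells for ~10x the cost of its oligos, back out the
# implied assembly cost (in multiples of oligo cost) for a range of guessed
# gross margins -- none of these numbers are public.
GENE_PRICE_MULTIPLE = 10  # finished gene price / oligo cost

def implied_assembly_cost(margin):
    """Price less margin gives production cost; subtract the oligo cost itself."""
    production_cost = GENE_PRICE_MULTIPLE * (1 - margin)
    return production_cost - 1

for margin in (0.1, 0.2, 0.3, 0.4):
    print(f"margin {margin:.0%}: assembly ~{implied_assembly_cost(margin):.0f}x oligo cost")
```

A guessed margin anywhere from 10% to 40% lands the assembly cost in the 5x to 8x range quoted above, which is some comfort that the estimate at least hangs together.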

One final note: the Febit reps suggested they are selling instruments in part based on IP concerns of customers.  That is, a number of their customers are sufficiently concerned about releasing designs for expression chips and oligo sets -- even to contract manufacturers under confidentiality agreements -- that they are forking over $300,000 per instrument to maintain their IP security.  This is something I predicted in Genome Synthesis and Design Futures, though frankly I am surprised it is already happening.  Now we just have to wait for the first gene synthesis machine to show up on the market.  That will really change things. 

How big is the Bio-economy?

The words "biotechnology" and "biotech" are often used by the press and industry observers in limited and inconsistent ways.  Those words may be used to describe only pharmaceutical products, or in another context only the industry surrounding genetically modified plants, while in yet another context a combination of biofuels, plastics, chemicals, and plant extracts.  The total economic value of biotechnology companies is therefore difficult to assess, and it is challenging to disentangle the revenue attributable to public firms from that attributable to private firms.

I've managed to get a rough idea of where the money is for industrial biotech, agbiotech, and biopharmaceuticals.  Based on surveys from Nature Biotechnology, the U.S. Government, various organizations in Europe, and several private consulting firms, it appears that estimates of total revenues range from US$ 80 to 150 billion annually, where the specific dollar value depends strongly on which set of products are included.  The various surveys that provide this information differ not only in their classification of companies, but also in methodology, which in the case of data summarized by private consulting firms is not always available for scrutiny.  For whatever reason, these firms tend to produce the highest estimates of total revenues.  Further complicating the situation is that results from private biotech companies are self-reported and there are no publicly available documents that can be used for independent verification.  One estimate from Nature Biotechnology, based on data from 2004 (explicitly excluding agricultural, industrial, and environmental biotech firms), suggested approximately 85% of all biotech companies are private, accounting for a bit less than 50% of employment in the sector and 27% of revenues.

A rough summary follows:  As of 2006, biotech drugs accounted for about US$ 65 billion in sales worldwide, with about 85% of that in the U.S.  Genetically modified crops accounted for another US$ 6 billion, with industrial applications (including fuels, chemicals, materials, reagents, and services) contributing US$ 50-80 billion, depending on who is counting and how.  Annual growth rates over the last decade appear to be 15-20% for medical and industrial applications, and 10% for agricultural applications.

I am not going to go through all the details here at this time.  But the final amount is pretty interesting.  After sifting through many different sets of numbers, I estimate that revenues within the US are presently about US$125 billion, or approximately 1% of US GDP, and growing at a rate of 15-20% annually.
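Summing the component figures above shows how the numbers hang together (the ~US$ 13 trillion GDP figure is my approximation for 2006):

```python
# Rough consistency check on the revenue figures quoted above (US$ billions)
drugs      = 65         # biotech drugs worldwide, 2006
gm_crops   = 6          # genetically modified crops
industrial = (50, 80)   # fuels, chemicals, materials, reagents, services

low  = drugs + gm_crops + industrial[0]   # 121
high = drugs + gm_crops + industrial[1]   # 151

print(f"${low}-{high} billion")  # roughly consistent with the $80-150B survey range

# US revenues of ~$125B against ~$13 trillion US GDP (2006, my approximation)
print(f"{125 / 13_000:.1%} of GDP")  # ~1.0%
```

The component figures are worldwide while the $125 billion estimate is US-only, so this is a sanity check on orders of magnitude rather than a reconciliation.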

1% of GDP may not seem very large, but a few years ago it was only 0.5%.  At some point this torrid growth will have to slow down, but it isn't clear that this will be anytime soon.  Nor is it clear how large a fraction of GDP that biotech could ultimately be.  That is my next project.