Stewart Brand -- “Where are the green biotech hackers?”

Tomorrow's New York Times has a great article on Stewart Brand.  In it, he asks the question, “Where are the green biotech hackers?”  We're coming, Stewart.  It's just that we're still on the slow part of the curves.

It's an interesting question, actually -- when do we get to the fast part?  When does biology start to go really fast?  And what does fast mean?

One answer to the question is the speed and the cost at which we can presently sequence or synthesize an interesting genetic circuit or organism.  Costs for reading genes are halving every 18 months or so, and if the rumors are true, we will hit the Thousand Dollar Genome sooner than my original estimate.  Sequencing is pretty easy at this point, as long as you already have a map to work with, which is the case for an increasing number of organisms.  And if you build the organism yourself, or pay someone else to do it, then you already know both the basic structure of the genome (the map) and the specific sequence.
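As a sketch of what a steady halving curve implies if nothing else changes (the 18-month halving time is from above; the US$10 million starting cost and US$1,000 target are illustrative round numbers, and any discontinuous technology jump, as the rumors suggest, would beat this projection):

```python
import math

def projected_cost(cost_now, years, halving_time_years=1.5):
    """Cost after `years`, assuming it halves every `halving_time_years`."""
    return cost_now * 0.5 ** (years / halving_time_years)

def years_until(cost_now, target, halving_time_years=1.5):
    """Years until the cost falls to `target`, given steady halving."""
    return halving_time_years * math.log2(cost_now / target)

# Illustrative: from ~US$10 million per genome, halving every 18 months,
# the Thousand Dollar Genome is roughly 20 years out on this curve alone.
print(round(years_until(10_000_000, 1_000), 1))
```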

At the moment, synthesis of a long gene takes about four weeks at a commercial DNA foundry, with a bacterial genome still requiring many months at best, though the longest reported contiguous synthesis job to date is still less than 50 kilobases.  And at a buck a base, hacking any kind of interesting new circuit is still expensive.  As I reported from SB 2.0, the synthesis companies are evidently now using my cost estimates as planning devices, even though that's not why I made those estimates in the first place.  They project costs to continue falling by a factor of 2 approximately every year, which means that it will be another 5 years before synthesizing something the size of E. coli from scratch will cost less than US$1,000, or 1 kilobuck.

The bigger problem, though, is the turnaround time.  No engineer or hacker wants to wait four weeks to see if a program works.  Hit compile, wait for four weeks, no "Hello World."  Start trying to debug the bug, with no debugging tools.  No thanks.  (I've actually had discussions with geneticists/molecular biologists who think even waiting a few days for a synthesis job isn't a big deal.  But what can you say -- biology just hasn't been a hacker culture.  And we are the poorer for it.)

So, Mr. Brand, it will be a few years before green hackers, at least those who aren't supported by Vinod Khosla or Kleiner Perkins, really start to have an impact.  The hackers who are lucky enough to have that kind of support, such as the blokes at Amyris Biotechnologies if their past accomplishments are anything to go by, will probably have something to show for themselves pretty soon.

The article ends with a couple of great paragraphs, which, along with "Science is the only news", are all you need to live by:

“I get bored easily — on purpose,” he said, recalling advice from the co-discoverer of DNA’s double helix. “Jim Watson said he looks for young scientists with low thresholds of boredom, because otherwise you get researchers who just keep on gilding their own lilies. You have to keep on trying new things.”

That’s a good strategy, whether you’re trying to build a sustainable career or a sustainable civilization. Ultimately, there’s no safety in clinging to a romanticized past or trying to plan a risk-free future. You have to keep looking for better tools and learning from mistakes. You have to keep on hacking.

"Genome Synthesis and Design Futures: Implications for the U.S. Economy"

After many, many months of work, Bio Economic Research Associates (Bio-era) today released "Genome Synthesis and Design Futures: Implications for the U.S. Economy".  Sponsored largely by Bio-era and the U.S. Department of Energy, with assistance from DuPont and the Berkeley Nanosciences and Nanoengineering Initiative, the report examines the present state of biological technologies, their applications to genome design, and potential impacts on the biomanufacturing of biofuels, vaccines, and chemicals.  The report also employs scenario planning to develop four initial scenarios exploring the effects of technological development and governmental policy.  Here is a link to the press release; over on the right side of that page are links to a short podcast with Jim Newcomb and me describing some of the findings.

It is a giant topic, and even at 180 pages we have really just barely scratched the surface.  The changes we've already witnessed will pale in comparison to what's coming down the pike.  The report deals mostly with science, technology, economics, markets, and policy, and only starts to explore the social and ethical aspects of forthcoming decisions.  Future work will refine the technological and economic analyses, will flesh out the security aspects of the ferment in biological technologies, and will delve into what all this means for our society.  In the preface, Jim Newcomb and Steve Aldrich note:

In presenting this analysis, we are mindful of the limitations of its scope. The arrival of new technologies for engineering biological systems for human purposes raises complex questions that lie at the intersection of many different disciplines. As the historian Arthur M. Schlesinger has written, “science and technology revolutionize our lives, but memory, tradition and myth frame our response.” Because this report is focused on potential economic implications of genome engineering and design technologies for the U.S. economy, there are many important questions that are not addressed here. In particular, we have not attempted to address questions of safety and biosecurity; the likelihood or possible impact of unintended consequences, such as environmental damage from the use of these technologies; or the ethical, legal, and social questions that arise. The need for thoughtful answers to these and related questions is urgent, but beyond the scope of this work. We hope to have the opportunity to investigate these questions in subsequent research.

We had a lot of help along the way, and for my part I would like to thank Drew Endy, Brian Arthur, George Church, Tom Kalil, Craig Venter, Gerald Epstein, Jay Keasling, Brad Smith, Erdogan Gulari, John Beadle, Roger Brent, John Mulligan, Michele Garfinkel, Ralph Baric, Stephen Johnston, and Todd Harrington.

Here is the web page to buy a hard copy and/or download the PDF.  Just fill out the form (we're trying to track interest), and you will be sent a link to the PDF.

On Indonesia and Distribution of H5N1 Strains

News in the last couple of days that Indonesia has decided not to forward homegrown strains of H5N1 to the WHO and instead is dealing directly with Baxter Healthcare for production of vaccines.  The worst bit of this, of course, is that there does not appear to be much cross-reactivity elicited by the Vietnamese and Indonesian isolates, whereas the international reference vaccine is derived from a strain isolated in Vietnam.  Moreover, while Baxter is supposedly making progress in producing influenza vaccines in cell culture (Baxter's Press Releases, CIDRAP's version), this technology is not yet approved for human use; only research contracts, rather than production contracts, have been let by the U.S. Government for cell culture production.  Finally, despite much noise that cell culture is faster/better/cheaper than eggs for producing vaccines, it appears cell culture only beats eggs by a month or two.  (Baxter does have a commendable influenza information web page, which is here.)

Here are a few paragraphs from an AP story, "Experts say Indonesian deal on H5N1 virus jeopardizes race for pandemic vaccine", via the IHT:

Indonesia Wednesday signed a memorandum of understanding with U.S. drug manufacturer Baxter Healthcare Corp. to develop a human bird flu vaccine.

Under the agreement, Indonesia will provide H5N1 virus samples in exchange for Baxter's expertise in vaccine production. Other organizations would have access to Indonesian samples provided they agree not to use the viruses for "commercial" purposes, said Siti Fadilah Supari, Indonesia's health minister.

But that is a major departure from the World Health Organization's existing virus-sharing system, where bird flu viruses are freely shared with the global community for public health purposes, including vaccine and antiviral development. Indonesia has not shared any viruses since the beginning of 2007.

Indonesia defended its decision, arguing the system works against poor countries. "The specimens we send to WHO...are then used by vaccine makers who then sell to us (at a profit)," Supari told reporters Wednesday. "This is unfair, we have the virus, we are getting sick, and then they take the virus from WHO — 'with WHO's permission' they say — and make it themselves," said Supari.

There seems to be a bit of confusion among reporters about whether Indonesia now has an official policy of withholding samples from the WHO, but Baxter is making it clear they don't have anything to do with the decision.  From The New York Times' coverage: "A Baxter spokeswoman said the company had not asked Indonesia to stop cooperating with the W.H.O. She added that the agreement under negotiation would not give it exclusive access to Indonesian strains."

In any event, Indonesians feel bent out of shape that they have previously provided strains to the international community, only to be charged for the vaccine when it becomes available.  News reports portray this as something of an IP spat, akin to controversy over biomining.  From the Reuters coverage:

"The specimens we sent to the WHO have been forwarded to their collaborating center. There it has been used for various reasons such as vaccine development ... or research," Supari said.

"Later they sold the discovery to us. This is not fair. We are the ones who got sick. They took the sample through WHO and with WHO consent and they tried to produce it for their own use," she said at a news conference after the signing of the pact with Baxter.

Supari said Australia was producing a human bird flu vaccine using the Indonesian virus strain, but did not give details.

"I was shocked because I never gave permits to Australia to produce a vaccine using our strain," she said.

"We have been working with Baxter since the beginning and are processing intellectual property rights with them. Baxter protects our intellectual property rights," she said.

...Under the memorandum of understanding, Indonesia would have the right to produce and market the bird flu vaccine domestically. It is negotiating to export it to a number of countries.

Production would be carried out by makers appointed by the Health Ministry.

So, in conclusion, the deal appears to put Indonesian isolates of H5N1 out of the reach of governments and firms with other vaccine technologies, at least for the time being.  Finally, in an interesting twist on the distribution of biological technologies, the deal also appears to put Indonesia in a position to become a leader in cell culture production of vaccines, potentially jumping to the head of the pack in the international vaccine market.

H5N1 is back in the U.K.

The headlines today are loudly announcing the return of H5N1 to the United Kingdom (CNN, New York Times) at a turkey farm near Lowestoft.  Though nobody can say for sure, the virus probably arrived via migrating birds.  Indeed, it appears that whether the virus is more likely to arrive via migrating birds or smuggled poultry depends on the region.

Last month, Kilpatrick, et al., published a paper in PNAS ("Predicting the global spread of H5N1 influenza") that looked at a variety of factors to classify historical outbreaks and predict new ones.  The abstract does a decent job of summarizing the paper, so here it is:

The spread of highly pathogenic H5N1 avian influenza into Asia, Europe, and Africa has resulted in enormous impacts on the poultry industry and presents an important threat to human health. The pathways by which the virus has and will spread between countries have been debated extensively, but have yet to be analyzed comprehensively and quantitatively. We integrated data on phylogenetic relationships of virus isolates, migratory bird movements, and trade in poultry and wild birds to determine the pathway for 52 individual introduction events into countries and predict future spread. We show that 9 of 21 of H5N1 introductions to countries in Asia were most likely through poultry, and 3 of 21 were most likely through migrating birds. In contrast, spread to most (20/23) countries in Europe was most likely through migratory birds. Spread in Africa was likely partly by poultry (2/8 introductions) and partly by migrating birds (3/8). Our analyses predict that H5N1 is more likely to be introduced into the Western Hemisphere through infected poultry and into the mainland United States by subsequent movement of migrating birds from neighboring countries, rather than from eastern Siberia. These results highlight the potential synergism between trade and wild animal movement in the emergence and pandemic spread of pathogens and demonstrate the value of predictive models for disease control.

Of course, the only way to know if the model really works is, alas, to wait for more outbreaks.  Anyway, it seems the U.S. is relatively safe from introduction via smuggled poultry, which we have a chance of intercepting, but susceptible to migrating birds, a pathway that almost certainly resists any defensive measures.

 

H5N1 Influenza and Countermeasures Update

There is an excellent news piece in last week's Science, where the definition of excellent is both "informative" and "highly unsettling".  Dennis Normile and Martin Enserink write:

An upsurge in H5N1 bird flu outbreaks in poultry across Asia is driving home the message that even countries that have eliminated the virus once shouldn't become complacent. The continuing high death toll in humans, including two recently detected cases of infection with a Tamiflu-resistant strain in Egypt, is also a grim reminder of how devastating the virus might be if it acquires the ability to spread easily among humans.

...Over the past 3 weeks, Thailand and Vietnam reported their first H5N1 outbreaks among poultry in 6 months. Japan, which seemed to have dodged the bullet since its cluster of outbreaks in 2004, confirmed that the virus hit one farm on 11 January and probably a second farm on the 23rd. South Korea, which last November suffered its first outbreak since containing the virus in 2004, reported that the virus had turned up on a fifth poultry farm. Several wild birds found dead in Hong Kong tested positive for H5N1. And Indonesia on 20 January reported its fifth human death from the virus in just 10 days, bringing its death toll to 62, by far the most of any country.

The increase in outbreaks in the Northern Hemisphere follows what has become an established pattern. The reason for the seasonality is still not well understood, says Les Sims, a veterinarian based in Manunda, Australia, who advises the U.N.'s Food and Agriculture Organization (FAO). It is likely to be some complex interaction among several factors, including cooler temperatures enabling the virus to survive longer in the environment, greater poultry trade in preparation for winter festivals, and movements of wild birds.

The recurrence of the virus in South Korea and Japan is particularly notable. In both the winter of 2003-'04 and this year, outbreaks in South Korea were followed 4 to 6 weeks later by outbreaks in Japan. "The outbreaks in Japan and South Korea suggest to me free-flying birds as the most likely origin," says Sims. Both countries are trying to determine how the virus was reintroduced.

So it seems unlikely we will be rid of the virus through culling programs, the primary mechanism thus far employed for biosecurity.  That the virus seems to be spread by wild birds in these cases is interesting, but this isn't the only pathway for reintroduction into poultry or people.

Last week's issue of New Scientist revisits the notion that "Deadly H5N1 may be brewing in cats".  (Most of the relevant text is available here at ProMed.)  Felines may be serving as a mammalian host in which the virus can adapt to mammalian biology and thereby become more dangerous to humans.  This is something I started wondering about after cats started dying in Europe so soon after the virus arrived there last year.  The New Scientist provides corroborating evidence that cats are important in the dynamics of the virus in Indonesia, and reports some surprise on the part of the scientists doing the field work with regard to the prevalence of the virus in cats there:

In the first survey of its kind, an Indonesian scientist has found that in areas where there have been outbreaks of H5N1 in poultry and humans, 1 in 5 cats have been infected with the virus, and survived. This suggests that as outbreaks continue to flare across Asia and Africa, H5N1 will have vastly more opportunities to adapt to mammals than had been supposed.

Chairul Anwar Nidom of Airlangga University in Surabaya, Indonesia, told journalists last week that he had taken blood samples from 500 stray cats near poultry markets in four areas of Java, including the capital, Jakarta, and one area in Sumatra, all of which have recently had outbreaks of H5N1 in poultry and people.

Of these cats, 20 per cent carried antibodies to H5N1. This does not mean that they were still carrying the virus, only that they had been infected - probably through eating birds that had H5N1. Many other cats that were infected are likely to have died from the resulting illness, so many more than 20 per cent of the original cat populations may have acquired H5N1.

This is a much higher rate of infection than has been found in surveys of apparently healthy birds in Asia. "I am quite taken aback by the results," says Nidom, who also found the virus in Indonesian pigs in 2005. He plans further tests of the samples at the University of Tokyo in February.

The data explicitly contradicts prior statements from the WHO downplaying the role of cats in harboring or spreading the virus, which I wrote about here.  I continue to be fascinated by the extent to which the behavior of the virus in the wild contradicts the expectations and public statements of "officials" in various organizations around the world.  H5N1 is clearly evolving in ways that are both surprising and worrying.

The New Scientist and Science stories both note that two people in Egypt who recently died from H5N1 infections were carrying strains of the virus evidently resistant to  Tamiflu.  It is unclear whether the virus carried the relevant mutations before it infected these patients, or whether it evolved during their illness because they were treated with Tamiflu in the hospital.  Either way, it seems that many people infected with H5N1 are diagnosed after the window in which antivirals are most effective, in part because diagnosis is both difficult and slow.  This phenomenon is described in two articles and a commentary in the 26 November, 2006, issue of The New England Journal of Medicine that report disturbing analyses of human H5N1 outbreaks in Indonesia and Turkey last year.

In a New York Times article about the NEJM papers, Donald McNeil reports the following:

Rapid tests on nose and throat swabs failed every time, and in Turkey, so did all follow-up tests known as Elisas. The only tests that consistently worked were polymerase chain reaction tests, or PCRs, which can be done only in advanced laboratories and take several hours.

''It'll be a disaster if we have to use PCRs for everybody,'' said Dr. Anne Moscona, a professor of pediatrics and immunology at Weill Cornell Medical College. ''It just isn't available at a whole lot of places.''

If the A(H5N1) flu mutates into a pandemic strain, rapid tests ''will be really key,'' she said.

What the NYT didn't report is that the patients were infected on average 5 days prior to the appearance of symptoms, outside the window recognized for effective use of antiviral drugs.  Robert Webster and Elena Govorkova have an excellent Perspective piece accompanying the NEJM articles, and they note that in the Indonesian cases, "...Treatment [with oseltamivir] began 5 to 7 days after initial infection.  Such delayed administration of the drug limits its value in decreasing the viral load and might lead to the selection of resistant strains."  It isn't clear from the paper describing the Turkey outbreak when oseltamivir was administered, but those patients did not experience symptoms for an average of 5 days after exposure to the virus, and then received antibiotics for the first 3-7 days of hospitalization before transfer to a unit that treated them for influenza.  In summary, it appears the virus is often being exposed to oseltamivir after the period when the drug is expected to be effective, enhancing the probability of selecting for resistant mutants.

Finally, in a slight change of direction, in the 21 December issue of Nature, John Oxford has a review of a new book on influenza, "Bird Flu: A Virus of Our Own Hatching", by Michael Greger.  You may recall that Oxford is primarily responsible for the hypothesis that the 1918 flu emerged at a British army camp near Etaples, a tale I wrote about a couple of years ago (The Spanish Flu Story).  Oxford notes that:

I am often kicked around by American authors in books about influenza. How dare a Limey suggest that the Spanish influenza A H1N1 virus arose in a gas-infected, pig-ridden and bird-infected army camp of 100,000 people in France in 1916, when the whole world knows it started in Dorothy's home state, Kansas? But I felt less bruised than usual. Perhaps I am getting used to it.

I still find Oxford's version of the origin of the Spanish Flu to be the most compelling, in part because it describes a situation of close contact between animals and people, where those animals were killed and prepared as food by soldiers on a daily basis in conditions not so dissimilar to those in many developing countries where H5N1 is present today.

A Few Thoughts on Rapid Genome Sequencing and The Archon Prize

The December 2006 issue of The Scientist has an interesting article on new sequencing technologies.  "The Human Genome Project +5", by Victor McElheny, contains a few choice quotes.  Phil Sharp, from MIT, says he "would bet on it without a question that we will be at a $1,000 genome in a five-year window."  Presently we are at about US$10 million per genome, so we have a ways to go.  It's interesting to see just how much technology has to change before we get there.

The Archon X-Prize for Genomics specifies sequencing 100 duplex genomes in 10 days, at a cost of no more than US$10,000 per genome.  In other words, that is roughly 600 billion bases at a cost of microdollars per base.  Looking at it yet another way, winning requires 6000 person-days at present productivity numbers for commercially available instruments, whereas running round-the-clock shifts for 10 days yields only 30 person-days per staffed position; at present productivity, that implies a staff of 200 working in parallel.
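The prize arithmetic is easy to reproduce in a few lines; the 6-gigabase duplex genome size is implied by the 600-billion-base total above:

```python
genomes = 100
bases_per_genome = 6e9           # duplex human genome, implied by 600 Gb total
days = 10
budget = 10_000 * genomes        # US$10,000 per genome

total_bases = genomes * bases_per_genome     # 600 billion bases
dollars_per_base = budget / total_bases      # ~1.7 microdollars per base
required_per_day = total_bases / days        # 60 Gb of finished sequence per day

print(f"{total_bases:.1e} bases at {dollars_per_base * 1e6:.1f} micro-dollars/base; "
      f"{required_per_day / 1e9:.0f} Gb/day")
```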

I tried to find a breakdown of genome sequencing costs on the web, and all I could come up with was an estimate for the maize genome published in 2001.  I'll use that as a cost model for state-of-the-art sequencing of eukaryotes (using Sanger sequencing on capillary-based instruments).  Bennetzen, et al., recount the "National Science Foundation-Sponsored Workshop Report: Maize Genome Sequencing Project" in the journal Plant Physiology, and report:

The participants concurred that the goal of sequencing all of the genes in the maize genome and placing these on the integrated physical and genetic map could be pursued by a combination of technologies that would cost about $52 million. The breakdown of estimated costs would be:

  • Library construction and evaluation, $3 million
  • BAC-end sequencing, $4 million
  • 10-Fold redundant sequencing of the gene-rich and low-copy-number regions, $34 million
  • Locating all of the genes on an integrated physical-genetic map, $8 million
  • Establishing a comprehensive database system, $3 million.

From the text, it seems that decreases in costs are built into the estimate.  If we chuck out the database system, since this is already built for humans and other species, we are down to direct costs of something like $49 million for approximately 2.5 gigabases (GB).  The Archon prize doesn't specify whether competitors can use existing chromosomal maps to assemble sequence data, so presumably all the information is fair game.  That lets us toss out another $8 million in cost.  The 10-fold redundant sequencing is probably overkill at this point, but I will keep all those costs because the Archon prize requires an error rate of no more than 1 in 100,000 bases; you have to beat down the error regardless of the sequencing method.  Rounding down to $40 million for charity's sake, it looks like the labor and processing associated with producing the short overlapping sequences necessary for Sanger sequencing account for about 17.5 percent of the total.  These costs are probably fixed for approaches that employ shotgun sequencing.
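The percentage arithmetic can be checked directly from the itemized list as quoted:

```python
# Itemized estimates from the maize workshop report (US$ millions)
costs = {
    "library construction and evaluation": 3,
    "BAC-end sequencing": 4,
    "10-fold redundant sequencing": 34,
    "integrated physical-genetic map": 8,
    "database system": 3,
}

total = sum(costs.values())                   # $52M, as quoted
direct = total - costs["database system"]     # $49M without the database
no_map = direct - costs["integrated physical-genetic map"]  # $41M, rounded to $40M
prep = costs["library construction and evaluation"] + costs["BAC-end sequencing"]
prep_fraction = prep / 40                     # $7M of the rounded $40M = 17.5%

print(total, direct, no_map, f"{prep_fraction:.1%}")
```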

Again using the Archon prize as a simple comparison, that's US$1.75 million just to spend on labor for getting ready to do the actual sequencing.  In 1998, the FTE (full time equivalent) cost for sequencing labor was US$135,000.  If you assume the dominant cost for preparing the library and verifying the BACs is labor, you can hire about 13 people.  This looks like a lot of work for 13 people, and, given the amount of time required to do all the cloning and wait for bacteria to grow, not something they can accomplish even within the 10 days allotted for the whole project.
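As a sketch of the head-count arithmetic, using the 17.5 percent share and the 1998 FTE figure above:

```python
labor_budget = 0.175 * 10_000_000   # 17.5% of the US$10M prize-cost cap
fte_cost = 135_000                  # 1998 cost of one sequencing FTE
people = labor_budget / fte_cost    # ~13 full-time equivalents
print(round(people, 1))
```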

The other 82.5 percent of the $10 million you can spend on the actual sequencing.  The prize guidelines say you don't have to include the price of the instruments in the cost, but just for the sake of argument I'll do that here.  And I'll mix and match the cost estimates from the maize project for Sanger sequencing with other technologies.  The most promising commercial instrument appears to be the 454 pyrosequencer, at $500,000 a pop, looking at its combination of read length and throughput, even if they don't yet have the read length quite high enough.  If you buy 16 of those beasties, it appears you can sequence about 1.6 GB a day, about a factor of 40 below what's required to win the Archon prize.  Let's say 454 gets the read length up to 500 bases, then they are still an order of magnitude shy just on the sequencing rate, forgetting the sample prep.

Alternatively, you could simply buy 600 of the 454 instruments, and then you'd be set, at least for throughput.  Might blow your budget, though, with the $300 million retail cost.  But you could take solace in how happy you'd make all the investors in 454.
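The instrument arithmetic from the last two paragraphs, as a quick sketch; the per-instrument throughput is backed out of the 1.6 GB/day figure for 16 machines:

```python
required_gb_per_day = 600 / 10     # 600 Gb of sequence in 10 days
fleet_gb_per_day = 1.6             # throughput of 16 instruments, per the estimate above
per_instrument = fleet_gb_per_day / 16

shortfall = required_gb_per_day / fleet_gb_per_day       # ~37.5x, the "factor of 40"
instruments_needed = required_gb_per_day / per_instrument  # ~600 machines
retail_cost = instruments_needed * 500_000               # US$500k per instrument

print(round(shortfall, 1), round(instruments_needed),
      f"${retail_cost / 1e6:.0f}M")
```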

Will anyone be around for a "Cosmological Eschatology"?

Over at Open the Future, Jamais Cascio has compiled a list of 10 Must-Know Concepts for the 21st Century, partially in response to a similar list compiled by George Dvorsky. I'm flattered that Jamais includes "Carlson Curves" on his list, and I'll give one last "harrumph" over the name and then be silent on that point.

Jamais's list is good, and well worth perusing. George Dvorsky's list is interesting, too, and meandering through it got me restarted on a topic I have left fallow for a while, the probability of intelligent life in the universe. More on that in a bit.

I got headed down that road because I had to figure out what the phrase "cosmological eschatology" is supposed to mean. It doesn't return a great number of hits on Google, but high up in the list is an RSS feed from Dvorsky that points to one of his posts with the title "Our non-arbitrary universe". He defines cosmological eschatology through quoting James Gardner:

The ongoing process of biological and technological evolution is sufficiently robust and unbounded that, in the far distant future, a cosmologically extended biosphere could conceivably exert a global influence on the physical state of the cosmos.

That is, you take some standard eschatology and add to it a great deal of optimistic technical development, probably including The Singularity. The notion that sentient life could affect the physical course of the universe as a whole is both striking and optimistic. It requires the assumption that a technological species survives long enough to make it off the home planet permanently, or at least to reach out into surrounding space to tinker with matter and information at very deep levels; all of which in turn requires both will and technical wherewithal that have yet to be demonstrated by any species, so far as we know.  And it is by no means obvious that humans, or our descendants, will be around long enough to see such wonders in any case; we don't know how long to expect the human species to last. From the fossil record, the mean species lifetime of terrestrial primates appears to be about 2.5 million years (Tavaré et al., Nature, 2002). This is somewhat less than the expected age of the universe. Even if humans live up to the hype of The Singularity, and in 50 years we all wind up with heavy biological modifications and/or downloaded consciousnesses that provide an escape from the actuarial tables, there is no reason to think any vestige of us or our technological progeny will be around to cause any eschatological effects on the cosmos.

Unless, of course, you think the properties of the universe are tuned to allow for intelligent life, possibly even specifically for human life. Perhaps the universe is here for us to grow up in and, eventually, modify.  This "non-arbitrary universe" is another important thread in the notion of cosmological eschatology.  Dvorsky quotes Freeman Dyson to suggest that there is more to human existence than simple chance:

The more I examine the universe and study the details of its architecture, the more evidence I find that the universe in some sense must have known that we were coming. There are some striking examples in the laws of nuclear physics of numerical accidents that seem to conspire to make the universe habitable.

I read this with some surprise, I have to admit. I don't know exactly what Dyson meant by "the universe in some sense must have known that we were coming." I'm tempted to think that the eminent professor was "in some sense" speaking metaphorically, with a literary sweep of quill rather than a literal sweep of chalk.

Reading the quotation makes me think back to a conversation I had with Dyson while strolling through Pasadena one evening a few years ago. My car refused to start after dinner, which left us walking a couple of miles back to the Caltech campus. While we navigated the streets by starlight, we explored ideas on the way. Our conversation that evening meandered through a wide range of topics, and at that point we had got onto the likelihood that the Search for Extraterrestrial Intelligence (SETI) would turn up anything. Somewhere between sushi in Oldtown and the Albert Einstein room at the faculty club, Dyson said something that stopped me in my tracks.

Which brings me, in a somewhat roundabout way, to my original interest: where else might life arise to be around for any cosmological eschatology? It seems to me that, physics being what it is, and biochemistry being what it is, life should be fairly common in the universe. Alas, the data thus far does not support that conclusion. The standard line in physics is that at large length scales the universe is the same everywhere, and that the same physics is in operation here on Earth as everywhere else, which goes by the name of the Cosmological Principle. More specifically, the notion that we shouldn't treat our little corner of the universe as special is known as the Copernican Principle.

So, why does it seem that life is so rare, possibly even unitary? In Enrico Fermi's words, "Where is everybody?"

At the heart of this discussion is the deep problem of how to decide between speculative theory and measurements that are not yet demonstrably – or even claimed to be – complete and thorough. Rough calculations, based in part on seemingly straightforward assumptions, suggest our galaxy should be teeming with life and that technological cultures should be relatively common. But, so far, this is not our experience. Searches for radio signals from deep space have come up empty.

One possibility for our apparent solitude is that spacefaring species, or at least electromagnetically noisy ones, may exist for only short periods of time, or at such low density that they don't often overlap. Perhaps we happen to be the only such species present in the neighborhood right now. This argument rests on the notion that events occurring with a low but constant probability become a virtual certainty over enough time. That is, if there is a small probability in any given window of time for a spacefaring race to emerge, then eventually one will. Put another way, the probability of non-occurrence in each window may be near one, but those probabilities multiply over successive windows, and their product falls exponentially toward zero.
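The arithmetic behind this argument is easy to sketch. Under the purely illustrative assumption of a fixed per-window probability p (the value below is arbitrary, not a measurement), the chance that emergence never happens after n windows is (1 - p)^n:

```python
# Illustrative sketch: why a low but constant probability becomes a
# near-certainty over time.  The per-window probability p is an
# arbitrary assumed value, not a measurement.
p = 0.001      # assumed chance a spacefaring species emerges in one time window
never = 1.0    # probability it has not happened yet

for window in range(10_000):
    never *= (1.0 - p)   # probabilities of non-occurrence multiply

print(f"P(no emergence after 10,000 windows) = {never:.5f}")
print(f"P(at least one emergence)            = {1 - never:.5f}")
```

With these made-up numbers, the odds that no spacefaring species ever emerges drop below one in ten thousand, which is the whole force of the argument.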

Even if you disagree with this argument and its assumptions, there is a simple way out, which Dyson introduced me to in just a couple of words.  “We could be first,” he said.

“But we can’t be first,” I responded immediately, without thinking.

“Why not?” asked Dyson. It was this seemingly innocuous question, based on a very reasonable interpretation of the theory, data, and state of our measurement capability, that I had not yet encountered and that provided me with such important insight. My revelation that evening was the surprise of discovering I had been lured into an obvious fallacy about the relationship between what little we can measure well and the conclusions we draw from the resulting data.

Despite looking at a great many star systems using both radio and laser receivers, the results from SETI are negative thus far. The question, "Where is everybody?", is at the heart of the apparent conflict between estimates of the probability of life in the galaxy and our failure to find any evidence of it. Often now called the Fermi Paradox, a more complete statement is:

The size and age of the universe suggest that many technologically advanced extraterrestrial civilizations ought to exist. However, this belief seems logically inconsistent with the lack of observational evidence to support it. Either the initial assumption is incorrect and technologically advanced intelligent life is much rarer than believed, current observations are incomplete and human beings have not detected other civilizations yet, or search methodologies are flawed and incorrect indicators are being sought.

A corollary of the Fermi Paradox is the Fermi Principle, which states that because we have not yet demonstrably met anyone else, despite the apparently overwhelming odds that other intelligent life exists, we must therefore be alone. Quick calculations show that even with slow transportation, say 0.1 to 0.8 times the speed of light, a civilization could spread throughout the galaxy in a few hundred million years, a relatively short time scale compared to the age of even our own sun. Thus even the presence of one other spacefaring species out there should have resulted in some sort of signal or artifact being detected by humans. We should expect to overhear a radio transmission, catch sight of an object orbiting a planet or star, or be visited by an exploratory probe.
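A back-of-the-envelope version of that quick calculation, using round numbers I'm assuming here (a galactic disk roughly 100,000 light-years across, and ship speeds in the range mentioned above):

```python
# Back-of-the-envelope galactic colonization times.  All numbers are
# rough assumed values for illustration only.
galaxy_diameter_ly = 100_000   # approximate diameter of the Milky Way disk

for ship_speed_c in (0.1, 0.8):   # fractions of light speed
    crossing_yr = galaxy_diameter_ly / ship_speed_c
    print(f"straight-line crossing at {ship_speed_c}c: {crossing_yr:,.0f} years")

# Even if pauses to establish each colony slow the effective expansion
# a hundredfold, the wave still sweeps the galaxy in ~100 million years,
# brief next to the Sun's roughly 4.6-billion-year age.
effective_speed_c = 0.1 / 100
print(f"with long colonization pauses: {galaxy_diameter_ly / effective_speed_c:,.0f} years")
```

Even with generous allowances for stopping to build up each colony, the expansion wave crosses the galaxy in a small fraction of the sun's lifetime.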

But while it may be true that even relatively slow interstellar travel could support a diaspora from any given civilization, resulting in outposts derived from an original species, culture, and ecosystem, I find doubtful the notion that this expansion is equivalent to a functioning society, let alone an empire.  Additional technology is required to make a civilization, and an economy, work.

Empires require effective and timely means of communication. Even at the substantially sub-galactic length scales of Earthly empires, governments have always sought, and paid for, the fastest means of finding out what is happening at their far reaches and then sending instructions back the other way to enforce their will; Incan trail runners, fast sailing ships, dispatch riders, the telegraph, radio, and satellites were all sponsored by rulers of the day. Without the ability to take the temperature of far-flung settlements – to measure their health and fealty, and most importantly to collect taxes – travel and communication at even light speed could not support the flow of information and influence over characteristic distances between solar systems. Unless individuals are exceptionally long-lived, many generations could pass between a query from one government to another, a reply, and any physical response. This is a common theme in science fiction; lose touch with your colonies, and they are likely to go their own way.
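To put rough numbers on those delays (the distances below are approximate values I'm assuming for illustration), even a simple query-and-reply exchange at light speed quickly outgrows a human lifetime:

```python
# Round-trip, light-speed message delays over interstellar distances.
# Distances are approximate, assumed values.
destinations_ly = {
    "Proxima Centauri (nearest star)": 4.2,
    "a colony 100 light-years away": 100,
    "the far side of the galactic disk": 100_000,
}

for name, distance_ly in destinations_ly.items():
    round_trip_yr = 2 * distance_ly   # out and back at light speed
    print(f"{name}: {round_trip_yr:,.1f}-year round trip")
```

A tax demand sent to a colony a hundred light-years away gets its answer two centuries later, which is no way to run an empire.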

So if there are advanced civilizations, where are they? My own version of this particular small corner of the debate is, “Why would they bother to visit?  We’re boring.” A species with the ability to travel, and equally important to communicate, between the stars probably has access to vastly more resources than are present here on Earth. Those species participating in any far-reaching civilization would require faster-than-light technology to maintain ties between distant stars. Present theories of faster-than-light travel require so-called exotic matter, or negative energy. Not anti-matter, which exists all around us in small quantities and can be produced in the lab, but matter that has properties that can only be understood mathematically. For humans, exotic matter is presently neither in the realm of experiment nor of experiment’s inevitable descendant, technology.

With all of the above deduction, based on exceptionally little data, we could conclude that we are alone, that we are effectively alone because there isn’t anyone else close enough to talk to, or that galactic civilizations use vastly more sophisticated technology than we have yet developed or imagined. Or, we could just be first. Even though the probabilities suggest we shouldn't be first, it still may be true.

But as you might guess, given our present technological capabilities, I tend toward an alternative conclusion: we could acknowledge that our measurements are still very poor, that our theory is not yet sufficiently descriptive of the universe, and that neither supports much in the way of speculation about life elsewhere.

Now I've gone on much too long. There will be more of this in my book, eventually.

Microsoft Supports Biobricks

Last weekend at the 2006 International Genetically Engineered Machines Competition (iGEM 2006), Microsoft announced a Request For Proposals related to Synthetic Biology.  According to the RFP page:

Microsoft invites proposals to identify and address computational challenges in two areas of synthetic biology. The first relates to the re-engineering of natural biological pathways to produce interoperable, composable, standard biological parts. Examples of research topics include, but are not limited to, the specification, simulation, construction, and dissemination of biological components or systems of interacting components. The second area for proposals focuses on tools and information repositories relating to the use of DNA in the fabrication of nanostructures and nanodevices. In both cases, proposals combining computational methods with biological experimentation are seen as particularly valuable.

The total amount to be awarded is $500,000. 

"Smallpox Law Needs Fix"

ScienceNOW Daily News is carrying a short piece on the recommendation by the National Science Advisory Board for Biosecurity (NSABB) to repeal a law that criminalizes synthesis of genomes more than 85% similar to smallpox.

The original law, which surprised everyone I have ever talked to about this topic, was passed in late 2004 and wasn't written about by the scientific press until March of '05:

The new provision, part of the Intelligence Reform and Terrorism Prevention Act that President George W. Bush signed into law on 17 December 2004, had gone unnoticed even by many bioweapons experts. "It's a fascinating development," says smallpox expert Jonathan Tucker of the Monterey Institute's Center for Nonproliferation Studies in Washington, D.C.

...Virologists zooming in on the bill's small print, meanwhile, cannot agree on what exactly it outlaws. The text defines variola as "a virus that can cause human smallpox or any derivative of the variola major virus that contains more than 85 percent of the gene sequence" of variola major or minor, the two types of smallpox virus. Many poxviruses, including a vaccine strain called vaccinia, have genomes more than 85% identical to variola major, notes Peter Jahrling, who worked with variola at the U.S. Army Medical Research Institute of Infectious Diseases in Fort Detrick, Maryland; an overzealous interpretation "would put a lot of poxvirologists in jail," he says.

According to the news report at ScienceNOW:

Stanford biologist David Relman, who heads NSABB's working group on synthetic genomics, told the board that "the language of the [amendment] allows for multiple interpretations of what is actually covered" and that the 85% sequence stipulation is "arbitrary." Therefore, he said, "we recommend repealing" the amendment.

Relman's group also recommended that the government revamp its select agents list in light of advances in synthetic genomics. These advances make it possible to engineer biological agents that are functionally lethal but genomically different from pathogens on the list. The group's recommendations, which were approved unanimously by the board, are among several that the board will pass on to the U.S. government to help develop policies for the conduct and oversight of biological research that could potentially be misused by terrorists.

DNA Vaccines Update and Avian Flu Tidbits

There has been serious progress recently in developing DNA vaccines for pandemic influenza.  First, Vical just announced (again by press release and conference presentation, rather than peer reviewed publication) single dose protection of mice and ferrets against a lethal challenge with H5N1 using a trivalent DNA vaccine.  Ferrets are seen by many as the best model for rapid testing of vaccines destined for use in humans.  According to the press release:

"We are excited by the recent advances in our pandemic flu vaccine development program," said Vijay B. Samant, President and Chief Executive Officer of Vical. "Earlier this week, we presented data from mouse studies demonstrating the dose-sparing ability of our Vaxfectin(TM) adjuvant when used with conventional flu vaccines. Today we presented data from ferret studies demonstrating the ability to provide complete protection with a single dose of our Vaxfectin(TM)-formulated avian flu DNA vaccine. Our goal is to advance into human testing with this program as quickly as possible, both to provide a potential defense against a pandemic outbreak and to explore the potential for a seasonal flu vaccine using a similar approach."

Mr. Samant will be attending the bio-era H5N1 Executive Round table in Cambridge in a few weeks, along with Dr. David Nabarro, the Senior UN System Coordinator for Avian and Human Influenza.  I'm looking forward to finally meeting these gentlemen in person.

Powdermed is in early human clinical trials for its annual and pandemic flu DNA vaccines in the U.K. and the U.S., and has recently been acquired by Pfizer.  This should provide needed cash for trials, technical development, and perhaps even for building a manufacturing facility for large scale production of their proprietary needle free injection system.  I think it is interesting that a large pharmaceutical company -- a specialty chemicals company, in essence -- has acquired technology that is essentially a chemical vaccine.  I wonder if Pfizer can lend expertise to packaging and DNA synthesis.

Despite progress in the lab and greater funding, there are still significant challenges in getting these vaccines into the clinic.  Here is the DNA Vaccine Development: Practical Regulatory Aspects slide presentation from the NIAID.  Obviously, lots of work to do there.  And as I have written about previously, it doesn't appear that the FDA is really interested in allowing new technologies to fairly compete, even if they are the best option for rapid manufacture and deployment as countermeasures for pandemic flu.

In other DNA vaccine news, a recent paper in PNAS demonstrated, "Protective immunity to lethal challenge of the 1918 pandemic influenza virus by vaccination."  Kong, et al., showed that, "Immunization with plasmid expression vectors encoding hemagglutinin (HA) elicited potent CD4 and CD8 cellular responses as well as neutralizing antibodies."  Here is more coverage from Effect Measure, which notes that the paper is primarily interesting as a study of the mechanism of DNA immunization in mice against the 1918 virus.

However, if I understand the paper correctly, the authors developed a means to directly correlate the effect of  immunization with antibody production and thereby, "define [the vaccine's] mechanism of action".  This appears to be a significant step forward in understanding how DNA vaccines work.  I interviewed Vijay Samant of Vical by phone a few months ago, and he noted that because animal studies demonstrate complete protection even though traditional measures of immunity do not predict that result, he has a hunch that "tools for measuring immunogenicity for DNA will need to be different than for measuring protein immunogenicity."  Perhaps the results of Kong, et al., point the way to just such a new tool.

An upcoming Nature paper by Michael Katze, just down the hill here in the UW Medical School, elucidates some of the mechanisms behind the extraordinary lethality of the 1918 virus in mice.  Writing in Nature, Kash, et al., show that:

...In a comprehensive analysis of the global host response induced by the 1918 influenza virus, mice infected with the reconstructed 1918 influenza virus displayed an increased and accelerated activation of host immune response genes associated with severe pulmonary pathology.  We found that mice infected with a virus containing all eight genes from the pandemic virus showed marked activation of pro-inflammatory and cell-death pathways by 24 h after infection that remained unabated until death on day 5.

In other words, the immune response to infection with the 1918 virus contributed to mortality.  Moreover, "These results indicated a cooperative interaction between the 1918 influenza genes and show that study of the virulence of the 1918 influenza requires the use of the fully reconstructed virus."  That is, you have to be able to play with the entire reconstructed bug in order to figure out why it is so deadly.  And this result gives an interesting context to the recent paper of Maines, et al., demonstrating that reassortant viruses of the present H5N1 and lesser strains are not as fearsome as the complete H5N1 genome (which I wrote about a few weeks ago).  This latter observation has been interpreted in the press as evidence that H5N1 is "not set for pandemic", even though H5N1 is demonstrably changing in nature primarily by mutation rather than by swapping genes.  H5N1 is quite deadly, and it may simply be that the particular combination of evolving genes in H5N1 gives it that special something.

Finally, an upcoming paper in J. Virology demonstrates an entirely new antiviral strategy based on peptides that bind to HA proteins in vivo and thereby prevent viral binding to host cells.  "Inhibition of influenza virus infection by a novel antiviral peptide," by Jones, et al., at the University of Wisconsin, appears to still be in pre-press.

In the abstract the authors state:

A 20-amino acid peptide (EB) derived from the signal sequence of fibroblast growth factor-4 exhibits broad-spectrum antiviral activity against influenza viruses including the H5N1 subtype in vitro. The EB peptide was protective in vivo even when administered post-infection. Mechanistically, the EB peptide inhibits the attachment to the cellular receptor preventing infection. Further studies demonstrated that the EB peptide specifically binds to the viral hemagglutinin (HA) protein. This novel peptide has potential value as a reagent to study virus attachment and as a future therapeutic.

This is just an initial demonstration, but it is extremely interesting nonetheless.  However, because it is a protein-based drug, it risks generating an immune response against the drug itself.  It will have to be administered in a way that preserves function in vivo in humans and doesn't spook the immune system.  The last thing you want to do is generate antibodies against a protein vital for human health.

Yet, precisely because it is a fragment of a human protein, it might mean there is a lower risk of generating that immune response, especially if it can be produced in a way that has all the right post-translational modifications (glycosylation, etc).  Though I wonder about variation in the population: various alleles and SNPs.  What if you are given a version of the peptide that differs in sequence from the one you are carrying around?  Would this generate an immune response against the drug even though it is closely related to something you carry naturally, and if so would those antibodies also pick out your allele?  Definitely the potential for bad juju there.  Another example of where personalized medicine, and having your genome sequence in your file, might be handy.  Alternatively, I suppose you could just use your own sequence for the peptide, and have the thing synthesized in vitro for use as a personalized drug.  Sequence --> DNA synthesis --> in vitro expression --> injection.  Hmmm...you could probably already stuff all that technology in a single box...

However it is used, this advance is probably a very long way from the clinic.  It might go faster if they use the peptide as inspiration for a non-protein drug, which, incidentally, the authors suggest near the end of the paper.  Definitely a high-tech solution, either way, but probably the wave of the future.