The Institute Para Limes

I spent part of last week at the "opening congress" of the Institute Para Limes (IPL) in The Netherlands.  The IPL is meant to be a European version of the Santa Fe Institute (SFI) for the new century, though because of its cultural milieu it is also meant to be something different.  The meeting last week was supposed to help sort out the focus and style of the place.

Wikipedia notes that:

SFI's original mission was to disseminate the notion of a separate interdisciplinary research area, complexity theory referred to at SFI as "complexity science". Recently it has announced that its original mission to develop and disseminate a general theory of complexity has been realized. It noted that numerous complexity institutes and departments have sprung up around the world.

SFI was founded by a bunch of famous people, a Nobel Laureate included, and has been much lauded in the press, though its reputation is not universally sterling in academic circles.  This is primarily, I suspect, because many people are still trying to figure out exactly what "Complexity Science" is really all about.  It's a fair question.  But there has been a great deal of good work done at the SFI.

The director of SFI, Geoff West, was the first speaker at the Institute Para Limes meeting, and his talk focused both on how SFI has succeeded and on his own contributions in the area of allometric scaling.  He also spoke about this really cool paper in PNAS that I printed out last spring, but have somehow managed to not yet read, "Growth, innovation, scaling, and the pace of life in cities".

The IPL will eventually be sited in a renovated monastery in Duisberg, which is, by design, approximately in the middle of nowhere, intellectually speaking.  This part of the plan for IPL confuses me a bit.  It will take at least 90 minutes to get to IPL from Amsterdam, probably more if you have to change trains multiple times, like I did, and then find a taxi for the final leg.  There is something to be said for making sure you have some intellectual distance from staid Universities, but in my experience a block or two is usually enough to serve as an infinitely high barrier between academic departments.  At Princeton, for many years, it was an exceptionally rare sight for anyone to even cross the street between Jadwin (Physics) and Lewis Thomas (Molecular Biology) for the purposes of a scientific discussion.

The meeting was a chance for me to catch up with Sydney Brenner a bit, to stand by as he and Gerard 't Hooft got into an animated, um, communication, about the purpose of DNA, and to hear Sydney drop a few bon mots:

On "factory science" in biology: "Low input, high throughput, no output."

On evolution: "Mathematics is the art of the perfect.  Physics is the art of the optimal.  Biology is the art of the satisfactory.  Patch it up with sticky tape, tie it up with twine, and go on.  If it doesn't work, end of story, next genome."

Gerard 't Hooft had this nice bit about the process of science: "Science is about the truth.  Science zooms in on the truth.  The truth changes, in part due to changes in science, but the assumptions and conjectures are always periodically tested."

And Science Always Wins.

Dispelling a Climate Change Skeptic's "Deception"

(Updated: Friday 5 Oct 19:15 PST)

A few weeks ago I heard a presentation from someone (hereafter person "A", to remain anonymous) who claimed that increasing CO2 concentrations won't cause significant global warming.  The highly technical argument sounded extremely implausible to me, but it has taken me a while to sort out the details.  This is worth commenting on because the argument is slated to appear in a high-profile book due out next year from a very well known publisher.

I don't fault person A for falling for the "deception", but he could have been more critical given the sources he used to build up his argument.

The anti-warming argument was based on a figure from a non-peer reviewed "paper" available on the web.  The figure, in turn, was generated by a fellow named David Archibald using the "modtran" model server hosted by The University of Chicago.  The modtran model server is run by Professor David Archer, in the Department of Geophysical Sciences, to help his students with coursework.  I wrote to Professor Archer to clarify both the intended use of the model and the interpretation of the data.

The model is evidently reasonably well accepted in its description of infrared radiation absorption by the atmosphere as a function of CO2 concentration, otherwise known as radiative forcing.  But it turns out that to estimate the resulting warming, you have to multiply the radiative forcing by the 'climate sensitivity parameter', which tells you how the atmosphere and oceans respond to added heat.  The climate sensitivity parameter is actually a distribution of values, and models of climate change are usually evaluated using several different values of the parameter.  David Archibald conveniently chose a value that is 40 times smaller than the most likely value in the distribution used by the IPCC.  The value is in the distribution describing the climate sensitivity parameter, to be sure, but it is way the hell out to the left, and very improbable.  Thus one can very accurately claim that Archibald used the correct radiative forcing numbers, but that he intentionally chose an estimate of climate sensitivity that nobody else believes is physically likely.
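
To make the arithmetic concrete, here is a minimal sketch of that two-step estimate.  It is not the modtran code; the forcing expression is the standard simplified formula for CO2, and the sensitivity values are illustrative stand-ins for "the IPCC's most likely value" and "a value 40 times smaller", not the exact numbers in Archibald's figure.

```python
import math

def co2_forcing(c_ppm, c_ref_ppm=280.0):
    # Standard simplified expression for CO2 radiative forcing (Myhre et al., 1998):
    # delta-F = 5.35 * ln(C / C0), in W/m^2.
    return 5.35 * math.log(c_ppm / c_ref_ppm)

def equilibrium_warming(forcing_w_m2, sensitivity_k_per_w_m2):
    # Warming estimate = radiative forcing x climate sensitivity parameter.
    return forcing_w_m2 * sensitivity_k_per_w_m2

d_f = co2_forcing(560.0)                 # doubling CO2 from 280 to 560 ppm: ~3.7 W/m^2
central_lambda = 0.8                     # K per (W/m^2); roughly 3 K per doubling (illustrative central value)
far_left_lambda = central_lambda / 40.0  # a value 40 times smaller, out in the tail of the distribution

print(f"Forcing from doubled CO2: {d_f:.2f} W/m^2")
print(f"Warming with a central sensitivity:    {equilibrium_warming(d_f, central_lambda):.2f} K")
print(f"Warming with the far-left sensitivity: {equilibrium_warming(d_f, far_left_lambda):.2f} K")
```

Same forcing, wildly different answers; the entire trick is in the choice of the sensitivity parameter.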

Professor Archer posted to RealClimate.org with the title, "My model, used for deception".  He is relatively circumspect, though still damning, in his criticism of Archibald.  The comments that follow his post, however, are ruthless.  It seems I set loose the hounds.

I take the time to write this because I have become more aware of late that many climate change skeptics seem to think that anthropogenic climate change (in particular, warming caused by CO2 emissions) is simply a political ploy with no basis in physical reality.  That kind of thinking denies not just climate change, but virtually all of the science our technological economy is built on.  (I will certainly admit that some of the rhetoric surrounding climate change bothers me, and I am not comfortable with the idea of brainwashing children to harass their parents about buying hybrid cars.  See the 29 September WSJ, "Inconvenient Youths", or even the recent The Daily Show segment on absurdly over-the-top children's books from wingnuts on both the left and the right.)

I couldn't care less at this point about the political side of the argument, and why people do or don't like Al Gore.  Physics is physics.  Science always wins.  Science is self-correcting, and over the long term there ain't no politics about it.  The U.S. was founded on the Enlightenment notions of tolerance and rational decision making.  Alas, those words aren't in the Constitution anywhere, and they are seldom uttered inside the Beltway these days.  But if we don't base our policy decisions on science, then we can just forget the U.S. as a viable economic entity, and thus as an entity capable of being the standard bearer of ideals that make this country worth living in and defending.

It's Time to Invest in Water Wings

Waterfront property in Puget Sound and the San Juan Islands is advertised in units of "no-", "low-", "medium-", and "high-bank".  Whenever I dream about a place to watch the sunset from, and to launch my kayak, some sort of beach usually plays a starring role.  But James Hansen and his colleagues say anybody with no- to medium-bank waterfront property could be in trouble fairly soon.

In a paper just published in Philosophical Transactions of the Royal Society, Hansen, the director of NASA's Goddard Institute for Space Studies, and five eminent colleagues bluntly suggest: "...Civilization developed, and constructed extensive infrastructure, during a period of unusual climate stability, the Holocene, now almost 12,000 years in duration. That period is about to end."

Although the paper carries the rather prosaic title, “Climate change and trace gases”, The Independent leads off its coverage with, "The Earth today stands in imminent peril".  It isn't so surprising that The Independent would start slinging end-of-the-world rhetoric around at the drop of a hat (or an iceberg), but in this case (in this case!) I think they are actually getting the story right.

Relying primarily on data, rather than upon climate models as does the IPCC, Hansen, et al., draw very different conclusions about what is happening at the poles of the planet than the recent international consensus report.  As other coverage has noted, the Hansen paper looks closely at what is happening as ice coverage is replaced by water, thereby dramatically lowering the albedo of the earth's surface, concomitantly increasing the amount of solar radiation absorbed at the surface of the planet.
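
The strength of that feedback is easy to see in one line.  The albedo numbers below are typical textbook values for sea ice and open ocean, not figures taken from the Hansen paper:

```latex
F_{\mathrm{absorbed}} = (1 - \alpha)\, F_{\mathrm{incident}},
\qquad \alpha_{\mathrm{ice}} \approx 0.6, \quad \alpha_{\mathrm{ocean}} \approx 0.06
```

Where ice gives way to open water, the locally absorbed fraction of sunlight jumps from roughly 40% to roughly 94%, and the extra heat melts still more ice.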

GreenCarCongress notes that:

The authors explicitly disagree with the conclusions of the IPCC, which forsees little or no contribution to 21st century sea-level rise from Greenland and Antarctica. The paper’s authors argue that the IPCC analysis does not account well for the nonlinear physics of wet ice sheet disintegration, ice streams and eroding ice shelves, and point out that the IPCC conclusions are not consistent with the palaeoclimate evidence.

There is significant melting at the poles and on Greenland, and ignoring these phenomena just doesn't seem very smart.  The paper, in short, argues that we must rapidly move beyond even just limiting carbon emissions to aggressively sequestering CO2 from the atmosphere.  The best way I can see to do that is with biology.

Farewell PEAR Lab -- You were always overripe.

News arrived in the last few weeks that the Princeton Engineering Anomalies Research Lab -- the PEAR Lab -- is shutting down.  The PEAR Lab, run by Dr. Robert Jahn, the former Dean of Engineering, was by no means celebrated at Princeton.  I spent four years there in graduate school and only heard of the Lab during my last year, in Malcolm Browne's science writing class no less, rather than during all those many hours in Jadwin Hall.

Philip Ball had a nice retrospective on the Lab in last week's Nature entitled, "When research goes PEAR-shaped."  Ball quotes Will Happer, a professor in the Princeton Physics Department and a member of JASON, as saying, "I don't believe in anything [Jahn] is doing, but I support his right to do it."  That's pretty charitable, actually, compared with many of the things said about the lab.  Nature continues to pile it on this week, with another piece: "The lab that asked the wrong questions," by Lucy Odling-Smee.

This is the crux of what was wrong with the PEAR Lab.  In that science writing class, Malcolm Browne occasionally brought in people to be "interviewed" by the class, and one day we had someone in from the Lab.  (My recollection is that it was Jahn himself.)  Can't say I was impressed.  But data is data, and they certainly may actually have measured something interesting, however unlikely that may be.  There are many things we can't yet explain about the universe, and maybe Jahn was on to something.

What I found unfortunate, even unpleasant, was the context in which the data were presented.  Jahn was represented to us not just as an expert in aeronautics, but also in a whole host of other fields, including quantum mechanics.  And we were offered a physical theory "explaining" one experiment, supposedly a quantum mechanical theory.  Here's the problem: that theory, by its very nature, is wrong.  It is inconsistent in its conception and structure with all the rest of quantum mechanics.  The folks in the PEAR Lab were definitely asking the wrong questions, in a very deep physical sense, by which I mean that everything about the way they tried to explain the data I saw was contradicted by modern physics in fundamental ways.

According to Dr. Jahn, a random process seems to be the vital ingredient for anomalous interactions between consciousness and machines -- coins flipping, balls dropping through a forest of pegs, even electronic random number generators -- which is what led him to speculate about connections between his data and a successful theory in which measurements are probabilistic: quantum mechanics.  In some interpretations of quantum mechanics, the observer and the system observed are both part of a larger closed system.  Indeed, Dr. Jahn and his colleagues believe that quantum mechanics may be just a part of a larger theory that includes phenomena studied in the PEAR Lab.  If this is so, then one would expect the structure of the two theories to be similar.

The theory we were told about was purported to explain how an observer could, by thinking "slower" or "faster", change the period of a large pendulum, something like 2 meters in length, if I recall correctly.  A brief refresher on the relevant classical physics: the period of an ideal pendulum is determined only by its length and the strength of gravity, at least when oscillation amplitudes are small, and not by its mass, or the kind of bearing it is suspended from, or any other factor.  (Friction will eventually damp a real pendulum, but it does so by shrinking the amplitude, not by changing the period.)
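
In symbols (standard textbook results, nothing specific to the PEAR apparatus):

```latex
T = 2\pi \sqrt{\frac{L}{g}}
  \approx 2\pi \sqrt{\frac{2\ \mathrm{m}}{9.8\ \mathrm{m\,s^{-2}}}}
  \approx 2.8\ \mathrm{s},
\qquad
x(t) = A\, e^{-\gamma t} \cos(\omega' t), \quad
\omega' = \sqrt{\omega_0^2 - \gamma^2}
```

Weak friction shifts the period only at second order in the damping rate; its visible effect is the slow decay of the amplitude.  Nowhere in these expressions is there a term for the mass, the bearing, or the intentions of anyone standing nearby.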

The mechanism by which human consciousness might change the period is not easy to imagine.  The human observer states the intention either to increase or decrease the period, and as the pendulum interrupts photodiodes on each swing the time is recorded.  But whereas a quantum mechanical model requires a probability for the observer to intentionally alter, here the observer is actually trying to intentionally change the period.

Before I go on (and on), you must be asking "Why spend so much time on this?"  Why bother to debunk bad science at all?  Because the universe is full of strange and wonderful things, and we don't yet understand them all.  That's what makes life interesting.  Besides, I like thinking about quantum mechanics.  Back to the story.

Dr. Jahn claimed his data is consistent with the human subject affecting the damping of the pendulum's oscillation.  Microscopically, friction might be changed by heating or cooling the bearings of the pendulum (which could be tested by carefully measuring the temperature of the bearing during an experiment), causing the atoms in the bearing to move around more or less, a phenomenon well understood in statistical mechanics -- and in fact a probabilistic effect.  However, since the operator was not trying to influence this probability distribution, it is not clear how his or her binary intention of changing the period of the pendulum was converted into changing the amount of friction.  Or perhaps the observer was changing the length of the pendulum, or the overall strength of gravity, or even the local coupling of the earth's mass to that of the pendulum.  Still no obvious connection to any distribution.

When asking a question of a quantum mechanical system, or a quantum mechanical question in the parlance of physicists, it must be one which can be phrased in terms of what is called an "operator."  Energy, momentum, and position are all operators, and as such provide tools for asking quantum mechanical questions.  The energy operator, for instance, would be used to ask about the average energy of the atoms in the bearing.  To find an analogy to the pendulum we must look in quantum mechanics to something called a harmonic oscillator, which can be imagined as a ball rolling back and forth at the bottom of a parabolic bowl.  Two operators used in asking questions about such a system are the raising and lowering operators, which, as their names suggest, move the particle up and down the ladder of allowed energies, one quantum at a time.
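
For the record, the standard harmonic-oscillator relations (textbook quantum mechanics, not anything taken from the PEAR papers) are:

```latex
H = \hbar\omega\left(a^{\dagger}a + \tfrac{1}{2}\right), \qquad
E_n = \hbar\omega\left(n + \tfrac{1}{2}\right), \qquad
a^{\dagger}\,|n\rangle = \sqrt{n+1}\,|n+1\rangle, \qquad
a\,|n\rangle = \sqrt{n}\,|n-1\rangle
```

Note that the frequency \omega is fixed by the potential (for the pendulum analogy, by the length and by gravity); the ladder operators move the system between energy levels, they do not retune the oscillator.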

So, for the sake of argument, let's give the PEAR Lab a quantum mechanical operator that works on a macroscopic pendulum.  It might be imagined that a human consciousness is utilizing some sort of raising and lowering operator by intending to increase or decrease the period of oscillation of the pendulum.  Yet the data is fit by assuming the friction in the bearing is changing.  It is simply not consistent with the structure of quantum mechanics to ask one valid question and get the answer to a different valid question.  Furthermore, it is hard to imagine how a more general theory, one subsuming quantum mechanics -- oh, what the hell, let's just call it "magic" -- could account for asking a question of the period of the pendulum with an operator belonging to the "magic" theory but get an answer which is the result of asking a question with the well known and well loved energy operator of quantum mechanics and which could only describe the microscopic state of the bearing.  So there.

Then there is that little thing called the Correspondence Principle, proven correct time and time again, which says that quantum mechanics works for small numbers of atoms.  As the number grows, save in very special, very strange circumstances like Bose-Einstein condensates, your theory must reduce to classical physics.  Which brings us back to the classical model that the period of the pendulum depends only on its length.  Nothing about the bearing, nothing about the observer.  Moreover, the pendulum is big, and the human subject is big.  Many, many atoms.  No quantum mechanics.  Wrong question!

Did you follow all that?  Does your head hurt?  Sometimes quantum mechanics does that, I assure you.  But I suppose "magic" could account for your headache, too.  We must allow for that.  Somehow.  See the PEAR Lab.

Sometimes the exploration of something that seems silly results in important insights, and the rest of the time it is important to keep the human participants of science honest.  That's the way science works.  And science always wins.

Will anyone be around for a "Cosmological Eschatology"?

Over at Open the Future, Jamais Cascio has compiled a list of 10 Must-Know Concepts for the 21st Century, partially in response to a similar list compiled by George Dvorsky. I'm flattered that Jamais includes "Carlson Curves" on his list, and I'll give one last "harrumph" over the name and then be silent on that point.

Jamais's list is good, and well worth perusing. George Dvorsky's list is interesting, too, and meandering through it got me restarted on a topic I have left fallow for a while, the probability of intelligent life in the universe. More on that in a bit.

I got headed down that road because I had to figure out what the phrase "cosmological eschatology" is supposed to mean. It doesn't return a great number of hits on Google, but high up in the list is an RSS feed from Dvorsky that points to one of his posts with the title "Our non-arbitrary universe". He defines cosmological eschatology through quoting James Gardner:

The ongoing process of biological and technological evolution is sufficiently robust and unbounded that, in the far distant future, a cosmologically extended biosphere could conceivably exert a global influence on the physical state of the cosmos.

That is, you take some standard eschatology and add to it a great deal of optimistic technical development, probably including The Singularity. The notion that sentient life could affect the physical course of the universe as a whole is both striking and optimistic. It requires the assumption that a technological species survives long enough to make it off the home planet permanently, or at least reach out into surrounding space to tinker with matter and information at very deep levels, all of which in turn requires both will and technical wherewithal that has yet to be demonstrated by any species, so far as we know.  And it is by no means obvious that humans, or our descendants, will be around long enough to see such wonders in any case; we don't know how long to expect the human species to last. From the fossil record, the mean species lifetime of terrestrial primates appears to be about 2.5 million years (Tavare, et al, Nature, 2002). This is somewhat less than the expected age of the universe. Even if humans live up to the hype of The Singularity, and in 50 years we all wind up with heavy biological modifications and/or downloaded consciousnesses that provide an escape from the actuarial tables, there is no reason to think any vestige of us or our technological progeny will be around to cause any eschatological effects on the cosmos.

Unless, of course, you think the properties of the universe are tuned to allow for intelligent life, possibly even specifically for human life. Perhaps the universe is here for us to grow up in and, eventually, modify.  This "non-arbitrary universe" is another important thread in the notion of cosmological eschatology.  Dvorsky quotes Freeman Dyson to suggest that there is more to human existence than simple chance:

The more I examine the universe and study the details of its architecture, the more evidence I find that the universe in some sense must have known that we were coming. There are some striking examples in the laws of nuclear physics of numerical accidents that seem to conspire to make the universe habitable.

I read this with some surprise, I have to admit. I don't know exactly what Dyson meant by, "The universe in some sense must have known we were coming." I'm tempted to think that the eminent professor was "in some sense" speaking metaphorically, with a literary sweep of quill rather than a literal sweep of chalk. 

Reading the quotation makes me think back to a conversation I had with Dyson while strolling through Pasadena one evening a few years ago. My car refused to start after dinner, which left us walking a couple of miles back to the Caltech campus. We navigated the streets by starlight and explored ideas on the way. Our conversation that evening meandered through a wide range of topics, and at that point we had got onto the likelihood that the Search for Extraterrestrial Intelligence (SETI) would turn up anything. Somewhere between sushi in Oldtown and the Albert Einstein room at the faculty club, Dyson said something that stopped me in my tracks.

Which brings me, in a somewhat roundabout way, to my original interest: where else might life arise to be around for any cosmological eschatology? It seems to me that, physics being what it is, and biochemistry being what it is, life should be fairly common in the universe. Alas, the data thus far does not support that conclusion. The standard line in physics is that at large length scales the universe is the same everywhere, and that the same physics is in operation here on Earth as everywhere else, which goes by the name of the Cosmological Principle. More specifically, the notion that we shouldn't treat our little corner of the universe as special is known as the Copernican Principle.

So, why does it seem that life is so rare, possibly even unitary? In Enrico Fermi's words, "Where is everybody?"

At the heart of this discussion is the deep problem of how to decide between speculative theory and measurements that are not yet demonstrably – or even claimed to be – complete and thorough. Rough calculations, based in part on seemingly straightforward assumptions, suggest our galaxy should be teeming with life and that technological cultures should be relatively common. But, so far, this is not our experience. Searches for radio signals from deep space have come up empty.

One possibility for our apparent solitude is that spacefaring species, or at least electromagnetically noisy ones, may exist for only short periods of time, or at such a low density they don’t often overlap. Perhaps we happen to be the only such species present in the neighborhood right now. This argument is based on the notion that for events that occur with a low but constant probability, the cumulative odds for those events over time make them a virtual certainty. That is, if there is a low probability in any given window of time for a spacefaring race to emerge, then eventually it will happen. Another way to look at this is that the probability for such events not to happen may be near one, but that over time these probabilities multiply and the product of many such probabilities falls exponentially, which means that the probability of non-occurrence eventually approaches zero.
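
In symbols, the argument is just the arithmetic of repeated trials. With a small per-window probability p of a spacefaring species emerging, over n windows:

```latex
P(\text{no emergence in } n \text{ windows}) = (1-p)^n = e^{\,n \ln(1-p)} \approx e^{-np}
\;\longrightarrow\; 0,
\qquad
P(\text{at least one emergence}) = 1 - (1-p)^n \;\longrightarrow\; 1
```

So as long as p is not exactly zero and the windows keep coming, emergence somewhere, sometime, becomes a near certainty; the argument says nothing about whether two such species ever overlap in time and space.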

Even if you disagree with this argument and its assumptions, there is a simple way out, which Dyson introduced me to in just a couple of words.  “We could be first,” he said.

“But we can’t be first,” I responded immediately, without thinking.

“Why not?” asked Dyson. It was this seemingly innocuous question, based on a very reasonable interpretation of the theory, data, and state of our measurement capability, that I had not yet encountered and that provided me such important insight. My revelation that evening had much to do with the surprise that I had been lured into an obvious fallacy about the relationship between what little we can measure well and the conclusions we make based on the resulting data.

Despite looking at a great many star systems using both radio and laser receivers, the results from SETI are negative thus far. The question, “Where is everyone?”, is at the heart of the apparent conflict between estimates of the probability of life in the galaxy and our failure to find any evidence of it. Often now called the Fermi Paradox, a more complete statement is:

The size and age of the universe suggest that many technologically advanced extraterrestrial civilizations ought to exist. However, this belief seems logically inconsistent with the lack of observational evidence to support it. Either the initial assumption is incorrect and technologically advanced intelligent life is much rarer than believed, current observations are incomplete and human beings have not detected other civilizations yet, or search methodologies are flawed and incorrect indicators are being sought.

A corollary of the Fermi Paradox is the Fermi Principle, which states that because we have not yet demonstrably met anyone else, given the apparent overwhelming odds that other intelligent life exists, we must therefore be alone. Quick calculations show that even with slow transportation, say 0.1 to 0.8 times the speed of light, a civilization could spread throughout the galaxy in a few hundred million years, a relatively short time scale compared to the age of even our own sun. Thus even the presence of one other spacefaring species out there should have resulted in some sort of signal or artifact being detected by humans. We should expect to overhear a radio transmission, catch sight of an object orbiting a planet or star, or be visited by an exploratory probe.
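
Here is a back-of-the-envelope version of that quick calculation. The travel speed comes from the range above; the settlement overhead factor is my own illustrative assumption, standing in for all the time spent building outposts along the way rather than a published figure:

```python
GALAXY_DIAMETER_LY = 100_000   # rough diameter of the Milky Way, in light years
SPEED_FRACTION_C = 0.1         # slow interstellar travel, 0.1 times the speed of light
SETTLE_OVERHEAD = 300          # assumed: total expansion takes ~300x the pure transit time

transit_yr = GALAXY_DIAMETER_LY / SPEED_FRACTION_C   # time to cross the galaxy without stopping
expansion_yr = transit_yr * SETTLE_OVERHEAD          # crude settlement-wavefront estimate

print(f"Pure transit across the galaxy at 0.1c: {transit_yr:,.0f} years")
print(f"With long pauses to settle en route:    {expansion_yr:,.0f} years")
print("Age of the Sun, for comparison:          ~4,600,000,000 years")
```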

But while it may be true that even relatively slow interstellar travel could support a diaspora from any given civilization, resulting in outposts derived from an original species, culture, and ecosystem, I find doubtful the notion that this expansion is equivalent to a functioning society, let alone an empire.  Additional technology is required to make a civilization, and an economy, work.

Empires require effective and timely means of communication. Even at the substantially sub-galactic length scales of Earthly empires, governments have always sought, and paid for, the fastest means of finding out what is happening at their far reaches and then sending instructions back the other way to enforce their will; Incan trail runners, fast sailing ships, dispatch riders, the telegraph, radio, and satellites were all sponsored by rulers of the day. Without the ability to take the temperature of far flung settlements – to measure their health and fealty, and most importantly to collect taxes – travel and communication at even light speed could not support the flow of information and influence over characteristic distances between solar systems. Unless individuals are exceptionally long-lived, many generations could pass between a query from the central government, a reply, and any physical response. This is a common theme in science fiction; lose touch with your colonies, and they are likely to go their own way.
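
The arithmetic is unforgiving even for a modest empire. The distance below is an arbitrary example, not a claim about where colonies would actually sit:

```python
COLONY_DISTANCE_LY = 100     # assumed distance to a far-flung settlement, in light years
GENERATION_YR = 25           # rough length of a human generation

round_trip_yr = 2 * COLONY_DISTANCE_LY          # one query out and one reply back, at light speed
generations = round_trip_yr / GENERATION_YR

print(f"Round trip for a single exchange of messages: {round_trip_yr} years (~{generations:.0f} generations)")
```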

So if there are advanced civilizations, where are they? My own version of this particular small corner of the debate is, “Why would they bother to visit?  We’re boring.” A species with the ability to travel, and equally important to communicate, between the stars probably has access to vastly more resources than are present here on Earth. Those species participating in any far-reaching civilization would require faster-than-light technology to maintain ties between distant stars. Present theories of faster than light travel require so-called exotic matter, or negative energy. Not anti-matter, which exists all around us in small quantities and can be produced in the lab, but matter that has properties that can only be understood mathematically. For humans, exotic matter is presently neither in the realm of experiment nor of experiment’s inevitable descendant, technology. 

With all of the above deduction, based on exceptionally little data, we could conclude that we are alone, that we are effectively alone because there isn’t anyone else close enough to talk to, or that galactic civilizations use vastly more sophisticated technology than we have yet developed or imagined. Or, we could just be first. Even though the probabilities suggest we shouldn't be first, it still may be true.

But as you might guess, given our present technological capabilities, I tend toward an alternative conclusion: we could acknowledge that our measurements are still very poor, that our theory is not yet sufficiently descriptive of the universe, and that neither supports much in the way of speculation about life elsewhere.

Now I've gone on much too long. There will be more of this in my book, eventually.

Vaccine Development as Foreign Policy

I was fortunate to attend Sci Foo Camp last month, run by O'Reilly and Nature, at the Googleplex in Mountain View.  The camp was full of remarkable people; I definitely felt like a small fish.  (I have a brief contribution to the Nature Podcast from Sci Foo; text, mp3.)  There were a great many big, new ideas floating around during the weekend.  Alas, because the meeting was held under the Chatham House Rule, I cannot share all the cool conversations I had.

However, at the airport on the way to San Jose I bumped into Greg Bear, who also attended Sci Foo, and our chat reminded me of an idea I've been meaning to write about.

In an essay published last year, Synthetic Biology 1.0, I touched briefly on the economic costs of disease as a motivation for developing cheaper drugs.  Building synthetic biological systems to produce those drugs is an excellent example of the potential rewards of improved biological technologies.

But a drug is a response to disease, whereas vaccines are far and away recognized as "the most effective medical intervention" for preventing disease and reducing the cost and impacts of pathogens.  While an inexpensive drug for a disease like malaria would, of course, be a boon to affected countries, drugs do not provide lasting protection.  In contrast, immunization requires less contact with the population to suppress a disease.  Inexpensive and effective vaccines, therefore, would provide even greater human and economic benefit.

How much benefit?  It is extremely hard to measure this sort of thing, because to calculate the economic effect of a disease on any given country you have to find a similar country free of the disease to use as a control.  A report released in 2000 by Harvard and the WHO found that, "malaria slows economic growth in Africa by up to 1.3% each year."  The cumulative effect of that hit to GDP growth is mind-blowing:

...Sub-Saharan Africa's GDP would be up to 32% greater this year if malaria had been eliminated 35 years ago. This would represent up to $100 billion added to sub-Saharan Africa's current GDP of $300 billion. This extra $100 billion would be, by comparison, nearly five times greater than all development aid provided to Africa last year.

The last sentence tells us all we need to know about the value of a malaria vaccine; it could advance the state of the population and economy so far as to swamp the effects of existing foreign aid.  And it would provide a lasting improvement to be built upon by future generations of healthy children.
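
To get a feel for why a "mere" 1.3 percentage points compounds into such a large number, here is a crude back-of-the-envelope calculation. It is not the Harvard/WHO methodology, which is more careful and arrives at a smaller headline figure:

```python
drag = 0.013   # "up to 1.3% each year" shaved off growth, per the Harvard/WHO report
years = 35

# Naive compounding of the full 1.3-point drag every year for 35 years:
naive_gap = (1 + drag) ** years - 1
print(f"Naive compounded GDP shortfall after {years} years: ~{naive_gap:.0%}")

# The report's headline figure of "up to 32% greater" is smaller than this naive
# number because 1.3% is an upper bound and the underlying model is more careful,
# but the lesson is the same: small annual losses compound into enormous ones.
```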

The economic valuation of vaccines is fraught with uncertainty, but Rappuoli, et al., suggest in Science that if, "policymakers were to include in the calculation the appropriate factors for avoiding disease altogether, the value currently attributed to vaccines would be seen to underestimate their contribution by a factor of 10 to 100."  This is, admittedly, a big uncertainty, but it all lies on the side of underestimation.  And the point is that some $20 Billion is spent annually on aid, a fraction of which might be better directed towards western vaccine manufacturers to produce long term solutions.

Vaccine incentives are usually discussed in terms of guaranteeing a certain purchase volume (PDF warning for a long paper here discussing the relevant economics).  But I wonder if we shouldn't re-think government sponsored prizes.  This strategy was recently used in the private sector to great effect and publicity with the X-Prize, and its success has led to consideration of other applications of the prize incentive structure.

Alas, this isn't generally considered the best way to incentivize vaccine manufacturers.  The Wikipedia entry for "Vaccine" makes only passing reference to prizes for vaccine development.  A 2001 paper in the Bulletin of the World Health Organization, for which a number of experts and pharmaceutical companies were interviewed about ways to improve AIDS vaccine development, concluded, "It was felt that a prize for the development of an AIDS vaccine would have little impact. Pharmaceutical firms were in business to develop and sell products, not to win prizes."

But perhaps the problem is not that prizes are the wrong way to entice Big Pharma, but rather that Big Pharma may not be the right way to develop vaccines.  Perhaps we should find a way to encourage a business model that aims to produce a working, safe vaccine at a cost that maximizes profit given the prize value.

So how much would developing a vaccine cost?  According to a recent short article in Nature, funds devoted to developing a malaria vaccine amounted to a whopping (read: measly) $65 million in 2003.  The authors go on to note that, "At current levels, however, if a candidate in phase II clinical trials demonstrated sufficient efficacy, there would be insufficient funding available to proceed to phase III trials."

It may be that The Gates Foundation, a major funder of the malaria work, would step in to provide sufficient funds, but this dependency doesn't strike me as a viable long-term strategy for developing vaccines.  (The Gates Foundation may not be around forever, but we can be certain that infectious disease will.)  Instead, governments, and perhaps large foundations like The Gates, should set aside funds to be paid as a prize.  What size prize?  Of the ~$1-1.5 Billion it supposedly costs to develop a new drug, ~$250 million goes to marketing.  Eliminating the need for marketing with a prize value of $1.5 Billion would provide a reasonable one-time windfall, with continued sales providing more profit down the road.

Setting aside as much as $200 million a year would be a small fraction of the U.S. foreign aid budget and would rapidly accumulate into a large cash payout.  Alternatively, we could set it up as a yearly payment to the winning organization.  Spread the $200 million over multiple governments (Europe, Japan, perhaps China), and suddenly it doesn't look so expensive.  In any event, we're talking about a big payoff in both saving lives and improving general quality of life, so a sizable prize is warranted.  I expect $2 Billion is probably the minimum to get international collaborations to seriously compete for the prize.
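
The funding arithmetic is simple enough to do in a few lines. The four-way split is my own illustrative assumption about who might contribute:

```python
annual_pool_usd = 200e6      # total yearly set-aside suggested above
contributors = 4             # assumed split: US, Europe, Japan, perhaps China
target_prize_usd = 2e9       # the ~$2 Billion minimum suggested above

years_to_fund = target_prize_usd / annual_pool_usd
share_per_contributor_usd = annual_pool_usd / contributors

print(f"Years to accumulate a $2B prize at $200M per year: {years_to_fund:.0f}")
print(f"Annual cost per contributor, split {contributors} ways: ${share_per_contributor_usd/1e6:.0f}M")
```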

The foreign policy aspects of this strategy fit perfectly with the goals of the U.S. Department of State to improve national security by reducing poverty abroad.  Here is Gen. Colin Powell, reprinted from Foreign Policy Magazine in 2005 ("No Country Left Behind"):

We see development, democracy, and security as inextricably linked. We recognize that poverty alleviation cannot succeed without sustained economic growth, which requires that policymakers take seriously the challenge of good governance. At the same time, new and often fragile democracies cannot be reliably sustained, and democratic values cannot be spread further, unless we work hard and wisely at economic development. And no nation, no matter how powerful, can assure the safety of its people as long as economic desperation and injustice can mingle with tyranny and fanaticism.

Development is not a "soft" policy issue, but a core national security issue. [emphasis added]  Although we see a link between terrorism and poverty, we do not believe that poverty directly causes terrorism. Few terrorists are poor. The leaders of the September 11 group were all well-educated men, far from the bottom rungs of their societies. Poverty breeds frustration and resentment, which ideological entrepreneurs can turn into support for--or acquiescence to--terrorism, particularly in those countries in which poverty is coupled with a lack of political rights and basic freedoms.

Dr. Condoleezza Rice, in opening remarks to the Senate Foreign Relations Committee (PDF warning) during her confirmation hearings, plainly stated, "...We will strengthen the community of democracies to fight the threats to our common security and alleviate the hopelessness that feeds terror."

Over any time period you might care to examine, it will probably cost vastly less to produce a working malaria vaccine than to continue dribbling out foreign aid.  Even just promoting the prize would bolster the U.S. image abroad in exactly those countries where we are hurting the most, and successful development would have profound consequences for national security through the elimination of human suffering.  Seems like a good bargain.  The longer we wait, the worse it gets.

Here Comes China

The NatureJobs section in this week's Nature has a short news piece on science funding, education, and investment in China:

The US National Science Foundation's Science and Engineering Indicators 2006 could perhaps be renamed 'Here Comes China'. The biennial report shows an increasingly international science and technology workforce, with China showing large gains in internal investment in R&D, investment by multinational corporations, and numbers of Chinese nationals earning science and engineering doctorates in the United States.

China has increased its R&D investment 24% per year over the past five years, compared with 4–5% for the United States. This growth, from US$12.4 billion in 1991 to $84.6 billion in 2003, puts the country behind only Japan and the United States. Meanwhile, investment by US-based multinationals into Asian markets outside Japan has more than doubled, from $1.5 billion in 1994 to $3.5 billion in 2002, with more than $1 billion going into China alone. Finally, Chinese students earn more US science and engineering PhDs than those of any other foreign nation.

These statistics are impressive, but they tell only one side of the story. What do they mean in terms of jobs and who will get them? The United States, Europe and Japan still produce many PhDs and create a host of jobs. But China is coming on strong. One wild card is whether Chinese PhDs will stay in the United States or return home. While China's PhD production in the United States has increased, PhDs by US white males has dropped from its peak of about 8,900 in 1994 to just over 7,000 in 2003.

It would be premature to say this marks the end of US dominance in science and engineering employment, but it does show that the United States is producing less of its own scientists and may have more difficulty recruiting from abroad as other nations, particularly China, ramp up funding and infrastructure. As the report says, these trends point to a "potentially diminished US success in the increasing international competition for foreign scientists and engineers".

On the Threat of the 1918 Flu

What do you do when a vanquished but still quite deadly foe reappears?  To further complicate the situation, what if the only way to combat not just that particular foe, but also fearsome cousins who show up every once in a while, is to invite them into your house so as to get to know them better?  Chat.  Suss out their strengths and weaknesses.  Sort out the best way to survive an inevitable onslaught.  This is our situation with the 1918 influenza virus and its contemporary avian relatives.

Over the last couple of weeks, several academic papers have been published containing the genomic sequence of the 1918 "Spanish" Flu.  These reports also contained some description of the mechanism behind that flu's remarkable pathogenicity.  (Here is the 1918 Influenza Pandemic focus site at Nature, and here is the Tumpey, et al., paper in Science.)  In response, several high visibility editorials and Op-Ed pieces have questioned the wisdom of releasing the sequence into the public domain.

Notably, Charles Krauthammer's 14 October column in The Washington Post, entitled "A Flu Hope, Or Horror?", suggests:

Biological knowledge is far easier to acquire for Osama bin Laden and friends than nuclear knowledge. And if you can't make this stuff yourself, you can simply order up DNA sequences from commercial laboratories around the world that will make it and ship it to you on demand. Taubenberger himself admits that "the technology is available."

I certainly won't debate the point that biological skills and knowledge are highly distributed (PDF), nor that access to DNA fabrication is widely distributed.  However, while I am sure that Dr. Taubenberger is familiar with the ubiquity of DNA synthesis, I seriously doubt he suggested to anyone that it is easy to take synthetic DNA and from it create live, infectious negative strand RNA viruses such as influenza.  I've written to him, and others, for clarification, just to make sure I've got that part of the story correct.

Krauthammer also asserts that, "Anybody, bad guys included, can now create it," and that, "We might have just given it to our enemies."  These statements border on being inflammatory.  They are certainly inaccurate.  The technology to manipulate flu viruses in the lab has been around for quite a few years, but not many research groups have managed to pull it off, which suggests there is considerable technical expertise required.  (I will clarify this point in my blog as I hear back from those involved in the work.)

The other commentary of note appeared in the 17 October New York Times, "Recipe for Destruction", an Op-Ed written by Ray Kurzweil and Bill Joy.  They call publication of the sequence "extremely foolish":

The genome is essentially the design of a weapon of mass destruction. No responsible scientist would advocate publishing precise designs for an atomic bomb, and in two ways revealing the sequence for the flu virus is even more dangerous.

First, it would be easier to create and release this highly destructive virus from the genetic data than it would be to build and detonate an atomic bomb given only its design, as you don't need rare raw materials like plutonium or enriched uranium. Synthesizing the virus from scratch would be difficult, but far from impossible. An easier approach would be to modify a conventional flu virus with the eight unique and now published genes of the 1918 killer virus.

Second, release of the virus would be far worse than an atomic bomb. Analyses have shown that the detonation of an atomic bomb in an American city could kill as many as one million people. Release of a highly communicable and deadly biological virus could kill tens of millions, with some estimates in the hundreds of millions.

These passages are rife with technical misunderstanding and overheated rhetoric.  My response to Joy and Kurzweil arrived late at the Times, but on the same day a number of other letters made points similar to mine.  For the record, here is my letter:

The Op-Ed by Ray Kurzweil and Bill Joy, celebrated inventors and commentators, is misleading and alarmist.
    The authors overstate the ease of producing a live RNA virus, such as influenza, based on genomic information.  Moreover, their assertion that publishing the viral genome is potentially more dangerous than publishing instructions to build nuclear weapons is simply melodramatic.
    The technology to manipulate and synthesize influenza has been in the public domain for many years.  Yet despite copious U.S. government funds available for such work, only a few highly skilled research groups have demonstrated the capability.  Restricting access to information will only impede progress towards understanding and combating the flu.  Obscuring information to achieve security makes even less sense in biology than in software development or telecommunications, fields Kurzweil and Joy are more familiar with.
    Dealing with emerging biological threats will require better communication and technical ability than we now possess.  Open discussion and research are crucial tools to create a safer world.

Dr. Rob Carlson, Senior Scientist, Department of Electrical Engineering, University of Washington, and Senior Associate, Bio-Economic Research Associates

I was, of course, tempted to go on, but alas the Times limits letters to 150 words.  ("Alas" or "fortunately", depending on your perspective.  Of course, I've no such restriction here.)  Kurzweil and Joy commit the same error as Krauthammer, confounding access to DNA synthesis with the ability to produce live RNA virus in the lab.  Fundamentally, however, both opinion pieces are confused about the threat from a modern release of the 1918 Flu virus.  In a Special Report, Nature described the work by Terrence Tumpey at the CDC to recreate and test the virus:

[Terrence Tumpey] adds that even if the virus did escape, it wouldn't have the same consequences as the 1918 pandemic. Most people now have some immunity to the 1918 virus because subsequent human flu viruses are in part derived from it. And, in mice, regular flu vaccines and drugs are at least partly effective against an infection with reconstructed viruses that contain some of the genes from 1918 flu.

Thus, without minimizing any illness that would inevitably result from release of the original flu virus, the suggestion that any such event would be as deadly as the first go round is inaccurate.  To further clarify the threat, I asked Brad Smith, at the Center for Biosecurity and the University of Pittsburgh Medical Center for some assistance.  He returned, via email, with a story less comforting than that in Nature:

Rob,
       
After speaking with my colleagues DA Henderson and Eric Toner, here are my thoughts on this:
       
The 1918 flu was an H1N1 strain.  The most prevalent seasonal flu strain for the last several decades has been based on H3N2.  Note that there are many flavors of any given H and N type, the hemagglutinin and neuraminidase are constantly mutating and each has a series of antigenic sites.  For example, while the recent predominant seasonal flu has been H3N2, each season it is a slightly different H3N2.  We do retain some residual immunity from last year's H3N2, so we do get sick, but only the weakest that are infected die.  This is the difference between common antigenic drift, and the less common antigenic shift to an entirely new H and N that results in a new pandemic flu strain. (You already know this, but I'm just trying to lay it all out.)
   
H1N1 variants had been major annual strains until the 1957 H2N2 pandemic strain emerged, and has continued as a minor annual strain.  (The H3N2 strain emerged as the 1968 pandemic strain.)  It is accurate that a version of H1N1 is a component of the annual trivalent flu vaccine that we use today and some of the internal proteins of H3N2 strains are derived from H1N1 through reassortment.
       
However, most people in the US born after 1957 have never been exposed to H1N1 in the "wild" and most people do not get flu shots either (in the US or worldwide) - so they would not have been exposed to the H1N1 variant in the vaccine.
       
So, I am not completely sanguine that a reintroduction of the 1918 flu virus into today's relatively naive population would be tempered by some degree of residual immunity.  If there is residual immunity, or some effectiveness of today's vaccine and anti-virals, what would that translate into with respect to a decrease in the numbers of people sick and dying?  1918 flu caused 500,000 deaths in the US and perhaps 50 million deaths worldwide over an amazingly short 18 months.  So, even if only a few percent (relative to what happened in 1918) of the people who are infected by an escaped 1918 flu virus died, the toll would be in the millions.
   
This does not mean that the cost/benefit of studying 1918 flu means it shouldn't be studied, but it certainly isn't as de-fanged as one might hope.

-Brad

Truth be told, the diversity of opinions amongst people well educated on the details means we can't really estimate what would happen if the original virus were released.  So what do we do about this and other threats?  One answer is to spin up a well-funded effort to improve our technical capabilities.

Echoing Senate Majority Leader Bill Frist, Joy and Kurzweil go on to call in their Op-Ed for "a new Manhattan Project to develop specific defenses against new biological viral threats, natural or human made."  This is fine and all, but the Manhattan Project is decidedly the wrong model for an effort to increase biological security.  Far better as a metaphor is the Apollo Program: massive and effective, but relatively open to public scrutiny.  Quoting briefly from my 2003 paper on how to improve security amidst the proliferation of biological technologies:

Previous governmental efforts to rapidly develop technology, such as the Manhattan and Apollo Projects, were predominantly closed, arguably with good reason at the time. But we live in a different era and should consider an open effort that takes advantage of preexisting research and development networks. This strategy may result in more robust, sustainable, distributed security and economic benefits.  Note also that though both were closed and centrally coordinated, the Manhattan and Apollo Projects were very different in structure. The Apollo Project took place in the public eye, with failures plainly writ in smoke and debris in the sky. The Manhattan Project, on the other hand, took place behind barbed wire and was so secret that very few people within the US government and military knew of its existence. This is not the ideal model for research that is explicitly aimed at understanding how to modify biological systems. Above all else, let us insist that this work happens in the light, subject to the scrutiny of all who choose to examine it.

Which, I think, is quite enough said on this issue (for now).

It's the end of the world as we know it!

The editorial in Science last week (8 April '05), "Twilight for the Enlightenment" (subscription required), laments the challenge to our "confidence in science and in rational methods of thought" by a trend for "some school boards [to] eliminate the teaching of evolution or require that religious versions of creation be represented as 'scientific' alternatives".  Donald Kennedy, the Editor in Chief, is also perturbed that, "In several school districts, geology materials are being rewritten because their dates for Earth's age are inconsistent with scripture (too old)."  This challenge obviously comes from the right.

Interestingly, there is a commentary in last week's Nature by Dick Taverne, a member of the UK House of Lords, entitled, "The new fundamentalism" (again, subscription required).  Lord(?) Taverne (never thought I would type that particular appellation) is concerned about, "The growing influence of 'green' activists who approach environmental issues with a semi-religious zeal and seemingly little regard for evidence".  Taverne suggests that these viewpoints, "Imperil not only the future of the biotech enterprise but also the health of society as a whole".  Thus science is getting it from the left, too.

So it seems the problem is a bit more general than "evangelical Christianity" pushing to smudge the boundary between church and state (Kennedy), or Greens engaging in "scaremongering" that might "allow new technology to be summarily dismissed on the basis of unsubstantiated claims...technologies on which our future health and wealth depends" (Taverne).  Across a broad swath of the political and social landscape, from the right and the left, there is a fundamental turn away from the mindset that has brought us profound increases in our standard of living and profound decreases in human suffering.  Though we have lots of work to do on both points on a global scale, we aren't going to get there by turning away from the Enlightenment.

Then again, perhaps it is just our turn to watch the empire pass.  India, China, Islam in the middle ages; these cultures all had their day in the sun and made choices that let the mantle pass to others.  But with the passion for science and technology throughout Central and East Asia, coupled with education and an excellent work ethic, the wheel may be coming back around.  At least progress will happen somewhere.  We get to choose whether they have all the fun.

Nanobacteria in the News

When I showed Sydney Brenner the first paper claiming a physiological role for nanobacteria ("Nanobacteria: an alternative mechanism for pathogenic intra- and extracellular calcification and stone formation", Kajander and Ciftcioglu, PNAS, 95 (8274-8279), 1998), he just chuckled.  And rightly so, given the expansive claims of that and succeeding papers.  Early claims that 30 nanometer particles visible in electron microscopy experiments contained DNA were challenged when the resulting sequences were shown to be identical to those found in common bacterial laboratory contaminants.  That is, while the original work pointed to some interesting evidence, there wasn't enough meat on the bone to convince people who have been watching biology since its modern beginnings.

Work has continued, however, and now careful studies have demonstrated nano sized objects at the core of structures from human bodies, where those nano objects definitely contain DNA.  In "Evidence of nanobacterial-like structures in calcified human arteries and cardiac valves" (Am J Physiol Heart Circ Physiol 287: H1115-H1124, 2004), Miller et al examine a variety of human tissues removed during surgery and conclude that "nanometer-scale particles similar to those described as nanobacteria isolated from geological specimens and human kidney stones can be visualized in and cultured from calcified human cardiovascular tissue."

The paper describes using light and scanning electron microscopy, immunostaining, and DNA staining to characterize objects 30-150 nm in size that appear in physiological samples.  Interestingly, there is already a commercial antibody available, "8D10", that appears to recognize a ~50-kDa protein only found in tissues and cultures that were observed to contain the nanobacteria.  Moreover, simultaneous immunostaining using 8D10 and DNA staining using PicoGreen revealed that structures cultured from filtered homogenates of human aneurysm contained both protein and DNA.  The most compelling evidence from a traditional biology perspective is that the nanobacteria can be propagated in culture media.  That is, the structures are self-replicating.  Decalcified particles contained structures that appear akin to cell membranes.

By way of acknowledging alternative explanations for their data, the authors note that:

Although a unique nucleic acid sequence remains to be identified from the nanosized particles identified within human arterial tissue in the present report, it is possible that these structures may represent either a variant form of microorganisms or an unrecognized bacterial growth stage such as L-forms, cell wall-deficient bacteria, and/or defective bacteria that have been hypothesized to represent either pleuropneumonic-like organisms or Mycoplasma species, which have been detected in serum of patients with long histories of chronic diseases. They may also represent an Archaea symbiont that requires cell contact or lipids from other cells for growth.

They go on to observe that their data are consistent with nanobacteria as a cause of disease:

Nanobacteria derived from bovine serum are internalized by human cells and appear to be cytotoxic. Similar internalization of nanolike particles in arterial smooth muscle would be consistent with induction of apoptosis, formation of matrix vesicles, and the inflammatory basis of atherogenesis. An infectious etiology of arterial calcification is consistent with increased lesion formation in experimental models of atherosclerosis.

Note that this text implies nanobacteria may be infectious agents.  Miller et al lay out the test of this hypothesis:

...A definitive cause and effect relationship needs to be established between these nanoparticles and [pathogenesis]. For example, it will be necessary to evaluate severity of calcification and disease progression in the absence, presence and titer of nanoparticles in humans. In the experimental setting, it will require infection of a naïve animal with cultured nanoparticles and subsequent identification of the particles within arterial calcification. Definitive characterization of these unique particles will require isolation and sequencing of genetic material (DNA or RNA).

No doubt the debate over nanobacteria will continue until the above criteria are met, but the Miller paper definitely contributes significantly to the discussion.

In the end, this sort of report illustrates how naive we are about what organisms inhabit the human ecosystem.  We haven't even isolated all the viruses and "normal" bacteria that live in and on humans.  And then something strange like nanobacteria comes along.  We have lots of work to do.