The LavaAmp (prototype) is Alive!

This week Biodesic shipped an engineering prototype of the LavaAmp PCR thermocycler to Gahaga Biosciences.  Joseph Jackson and Guido Nunez-Mujica will be showing it off on a road trip through California this week, starting this weekend at BilPil.  The intended initial customers are hobbyists and schools.  The price point for new LavaAmps should be well below the several thousand dollars charged for educational thermocyclers that use heater blocks powered by Peltier chips.

The LavaAmp is based on the convective PCR thermocycler demonstrated by Agrawal et al., which has been licensed from Texas A&M University to Gahaga.  Under contract from Gahaga, Biodesic reduced the material costs and power consumption of the device.  We started by switching from the aluminum block heaters in the original device (expensive) to thin film heaters printed on plastic.  A photo of the engineering prototype is below (inset shows a cell phone for scale).  PCR reagents, as in the original demonstration, are contained in a PTFE loop slid over the heater core.  Only one loop is shown for demonstration purposes, though clearly the capacity is much larger.

[Image: lavaamp.png -- the LavaAmp engineering prototype, with a cell phone inset for scale]

The existing prototype has three independently controllable heating zones that can reach 100C.  The device can be powered either by a USB connection or an AC adapter (or batteries, if desired).  The USB connection is primarily used for power, but is also used to program the temperature setpoints for each zone.  The design is intended to accommodate additional measurement capability such as real-time fluorescence monitoring.
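
I won't document the full programming interface here, but as a rough sketch of what setting the three zone temperatures over the USB serial link could look like, here is a minimal Python example.  The port name, the "SET <zone> <temp>" command format, and the pyserial dependency are stand-ins invented for illustration, not the actual LavaAmp firmware protocol.

    # Hypothetical sketch only: program three heating-zone setpoints over USB serial.
    # The command strings, port name, and reply format are invented for illustration.
    import serial  # pyserial

    # Illustrative PCR temperatures (denature / anneal / extend), in degrees C
    SETPOINTS = {1: 95.0, 2: 55.0, 3: 72.0}

    def program_setpoints(port="/dev/ttyUSB0", baud=9600):
        with serial.Serial(port, baud, timeout=2) as dev:
            for zone, temp in SETPOINTS.items():
                dev.write(f"SET {zone} {temp:.1f}\n".encode())
                reply = dev.readline().decode().strip()
                print(f"zone {zone} -> {temp} C: {reply}")

    if __name__ == "__main__":
        program_setpoints()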

We searched hard for the right materials to form the heaters and thin film conductive inks are a definite win.  They heat very quickly and have almost zero thermal mass.  The prototype, for example, uses approximately 2W whereas the battery-operated device in the original publication used around 6W.

What we have produced is an engineering prototype to demonstrate materials and controls -- the form factor will certainly be different in production.  It may look something like a soda can, though I think we could probably fit the whole thing inside a 100ml centrifuge tube.

The prototype necessarily looks a bit rough around the edges as some parts were worked by hand where they would normally be done by machine (I never have liked working with polycarbonate).  We have worked hard to make sure that the LavaAmp can be transitioned relatively seamlessly from prototype quantities to small-lot production to high-volume production.  The electronic hardware is designed to easily transition to fabrication as a single IC, all the plastic bits can be injection molded, and the heater core can be printed using a variety of high-throughput electronics fabrication methods.

Next up will be field trials with a selected group of labs, as well as more work on refining the loading of the loops.

The Economist Debate on the Fuel of the Future for Cars

Last week The Economist ran an online debate considering the motion "Biofuels, not electricity, will power the car of the future".  I was privileged to be invited as a guest contributor along with Tim Searchinger of Princeton University.  The two primary "speakers" were Alan Shaw of Codexis and Sidney Goodman of Automotive Alliances.  Here is my contribution to the debate, in which I basically rejected the false dichotomy of the motion (the first two 'graphs follow):

The future of transportation power sources will not be restricted to "either/or". Rather, over the coming decades, the nature of transportation fuel will be characterised by a growing diversity. The power sources for the cars of the future will be determined by the needs those cars address.

Those needs will be set for the market by a wide range of factors. Political and economic pressures are likely to require reducing greenhouse gas emissions and overall energy use per trip. Individuals behind the wheel will seek to minimise costs. But there is no single fuel that simultaneously satisfies the requirements of carbon neutrality, rapid refuelling, high-energy density for medium- to long-range driving and low cost.

I find it interesting that the voting came down so heavily in favor of electricity as the "fuel" of the future.  I suppose the feasibility of widespread electric cars depends on what you mean by "future".  Two substantial technology shifts will have to occur before electric cars displace those running on liquid fuels, both of which will require decades and trillions of dollars.

First, for the next several decades, no country, including the US, is likely to have sufficient electricity generating resources and power distribution infrastructure to convert large numbers of automobiles to electric power.  We need to install all kinds of new transmission lines around the country to pull this off.  And if we want the electricity to be carbon neutral, we need to install vast amounts of wind and solar generating capacity.  I know Stewart Brand is now arguing for nuclear power as "clean energy", but that still doesn't make sense to me for basic economic reasons.  (Aside: at a party a few months ago, I got Lowell Wood to admit that nuclear power can't be economically viable unless the original funders go bankrupt and you can buy the physical plant on the cheap after all the initial investment has been wiped out.  Sweet business model.)

Second, the energy density of batteries is far below that of liquid hydrocarbons.  (See the Ragone chart included in my contribution to The Economist debate.)  Batteries are likely to close the gap over the coming years, but long distance driving will be the domain of liquid fuels for many years to come.  Yes, battery changing stations are an interesting option (as demonstrated by Better Place), but it will take vast investment to build a network of such stations sufficient to replace (or even compete with) liquid fuels.  Plugging in to the existing grid will require many hours to charge the batteries, if only because running sufficient current through most existing wires (and the cars themselves) to recharge car batteries rapidly would melt those wires.  Yes, yes -- nanothis and nanothat promise to enable rapid recharging of batteries.  Someday.  'Til then, don't bother me with science fiction.  And even if those batteries do show up in the proverbial "3 to 5 year" time frame, charging them rapidly would still melt most household power systems.
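
To put rough numbers on that last point, here is a back-of-the-envelope sketch.  The figures (gasoline energy density, tank size, drivetrain efficiencies, charging time) are round numbers I've chosen for illustration, not measurements from any particular vehicle.

    # Back-of-the-envelope: why recharging as fast as a gasoline fill-up
    # strains ordinary wiring.  All figures are round illustrative numbers.
    GASOLINE_MJ_PER_L = 34.0   # approximate energy density of gasoline
    TANK_L = 40.0              # one fill-up, roughly 10.5 gallons
    ENGINE_EFF = 0.25          # fraction of fuel energy that reaches the wheels
    EV_EFF = 0.85              # battery-to-wheels efficiency

    # Battery energy delivering the same useful work as one tank of gasoline
    useful_mj = GASOLINE_MJ_PER_L * TANK_L * ENGINE_EFF
    battery_kwh = useful_mj / EV_EFF / 3.6      # 1 kWh = 3.6 MJ

    fill_minutes = 5.0
    charge_power_kw = battery_kwh / (fill_minutes / 60.0)

    household_kw = 0.240 * 40  # a 240 V, 40 A household circuit

    print(f"equivalent battery:             ~{battery_kwh:.0f} kWh")
    print(f"power to charge it in 5 min:    ~{charge_power_kw:.0f} kW")
    print(f"240 V / 40 A household circuit:  {household_kw:.1f} kW")

Charging at gasoline-pump speed works out to something on the order of a megawatt per car, versus roughly ten kilowatts from a beefy household circuit -- a gap of two orders of magnitude.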

In the long run, I expect that electric cars will eventually replace those powered by liquid fuels.  But in the meantime, liquid fuels will continue to dominate our economy.

The Origin of Moore's Law and What it May (Not) Teach Us About Biological Technologies

While writing a proposal for a new project, I've had occasion to dig back into Moore's Law and its origins.  I wonder, now, whether I peeled back enough of the layers of the phenomenon in my book.  We so often hear about how more powerful computers are changing everything.  Usually the progress demonstrated by the semiconductor industry (and now, more generally, IT) is described as the result of some sort of technological determinism instead of as the result of a bunch of choices -- by people -- that produce the world we live in.  This is on my mind as I continue to ponder the recent failure of Codon Devices as a commercial enterprise.  In any event, here are a few notes and resources that I found compelling as I went back to reexamine Moore's Law.

What is Moore's Law?

First up is a 2003 article from Ars Technica that does a very nice job of explaining the whys and wherefores: "Understanding Moore's Law".  The crispest statement within the original 1965 paper is "The number of transistors per chip that yields the minimum cost per transistor has increased at a rate of roughly a factor of two per year."  At its very origin, Moore's Law emerged from a statement about cost, and economics, rather than strictly about technology.

I like this summary from the Ars Technica piece quite a lot:

Ultimately, the number of transistors per chip that makes up the low point of any year's curve is a combination of a few major factors (in order of decreasing impact):

  1. The maximum number of transistors per square inch (or, alternately put, the size of the smallest transistor that our equipment can etch),
  2. The size of the wafer,
  3. The average number of defects per square inch,
  4. The costs associated with producing multiple components (i.e. packaging costs, the costs of integrating multiple components onto a PCB, etc.)

In other words, it's complicated.  Notably, the article does not touch on any market-associated factors, such as demand and the financing of new fabs.
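
As a toy illustration of how these factors combine to produce a cost-minimizing number of transistors per chip in any given year, here is a short sketch using the standard Poisson yield approximation.  Every number in it is invented purely for illustration.

    # Toy model of the cost-per-transistor curve behind Moore's 1965 observation:
    # bigger dies spread packaging costs over more transistors, but yield falls
    # off as die area grows, so cost per transistor has a minimum in between.
    # All numbers are made up for illustration.
    import math

    WAFER_COST = 1000.0       # dollars per processed wafer
    WAFER_AREA = 7000.0       # usable mm^2 per wafer
    DEFECT_DENSITY = 0.002    # defects per mm^2
    TRANSISTOR_AREA = 0.01    # mm^2 per transistor for a given process
    PACKAGE_COST = 1.0        # dollars to package and test each good die

    def cost_per_transistor(n):
        die_area = n * TRANSISTOR_AREA
        yield_fraction = math.exp(-DEFECT_DENSITY * die_area)  # Poisson yield model
        silicon_per_good_die = (WAFER_COST * die_area / WAFER_AREA) / yield_fraction
        return (silicon_per_good_die + PACKAGE_COST) / n

    for n in (1_000, 5_000, 10_000, 50_000, 100_000):
        print(f"{n:>7} transistors per die -> ${cost_per_transistor(n):.4f} each")

With these made-up numbers the curve bottoms out at several thousand transistors per die: below that, packaging overhead dominates; above it, yield losses do.  Improve the defect density or shrink the transistors and the minimum shifts toward larger, cheaper-per-transistor chips, which is the moving optimum Moore was describing.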

The Wikipedia entry on Moore's Law has some good information, but isn't very nuanced.

Next, here is an excerpt from an interview Moore did with Charlie Rose in 2005:

Charlie Rose:     ...It is said, and tell me if it's right, that this was part of the assumptions built into the way Intel made its projections. And therefore, because Intel did that, everybody else in the Silicon Valley, everybody else in the business did the same thing. So it achieved a power that was pervasive.

Gordon Moore:   That's true. It happened fairly gradually. It was generally recognized that these things were growing exponentially like that. Even the Semiconductor Industry Association put out a roadmap for the technology for the industry that took into account these exponential growths to see what research had to be done to make sure we could stay on that curve. So it's kind of become a self-fulfilling prophecy.

Semiconductor technology has the peculiar characteristic that the next generation always makes things higher performance and cheaper - both. So if you're a generation behind the leading edge technology, you have both a cost disadvantage and a performance disadvantage. So it's a very non-competitive situation. So the companies all recognize they have to stay on this curve or get a little ahead of it.

Keeping up with 'the Law' is as much about the business model of the semiconductor industry as about anything else.  Growth for the sake of growth is an axiom of western capitalism, but it is actually a fundamental requirement for chipmakers.  Because the cost per transistor is expected to fall exponentially over time, you have to produce exponentially more transistors to maintain your margins and satisfy your investors.  Therefore, Intel set growth as a primary goal early on.  Everyone else had to follow, or be left by the wayside.  The following is from the recent Briefing in The Economist on the semiconductor industry:

...Even the biggest chipmakers must keep expanding. Intel today accounts for 82% of global microprocessor revenue and has annual revenues of $37.6 billion because it understood this long ago. In the early 1980s, when Intel was a $700m company--pretty big for the time--Andy Grove, once Intel's boss, notorious for his paranoia, was not satisfied. "He would run around and tell everybody that we have to get to $1 billion," recalls Andy Bryant, the firm's chief administrative officer. "He knew that you had to have a certain size to stay in business."

Grow, grow, grow

Intel still appears to stick to this mantra, and is using the crisis to outgrow its competitors. In February Paul Otellini, its chief executive, said it would speed up plans to move many of its fabs to a new, 32-nanometre process at a cost of $7 billion over the next two years. This, he said, would preserve about 7,000 high-wage jobs in America. The investment (as well as Nehalem, Intel's new superfast chip for servers, which was released on March 30th) will also make life even harder for AMD, Intel's biggest remaining rival in the market for PC-type processors.

AMD got out of the atoms business earlier this year by selling its fab operations to a sovereign wealth fund run by Abu Dhabi.  We shall see how they fare as a bits-only design firm, having sacrificed their own ability to push (and rely on) scale.

Where is Moore's Law Taking Us?

Here are a few other tidbits I found interesting:

Re the oft-forecast end of Moore's Law, here is Michael Kanellos at CNET grinning through his prose: "In a bit of magazine performance art, Red Herring ran a cover story on the death of Moore's Law in February--and subsequently went out of business."

And here is somebody's term paper (no disrespect there -- it is actually quite good, and is archived at Microsoft Research) quoting an interview with Carver Mead:

Carver Mead (now Gordon and Betty Moore Professor of Engineering and Applied Science at Caltech) states that Moore's Law "is really about people's belief system, it's not a law of physics, it's about human belief, and when people believe in something, they'll put energy behind it to make it come to pass." Mead offers a retrospective, yet philosophical explanation of how Moore's Law has been reinforced within the semiconductor community through "living it":

After it's [Moore's Law] happened long enough, people begin to talk about it in retrospect, and in retrospect it's really a curve that goes through some points and so it looks like a physical law and people talk about it that way. But actually if you're living it, which I am, then it doesn't feel like a physical law. It's really a thing about human activity, it's about vision, it's about what you're allowed to believe. Because people are really limited by their beliefs, they limit themselves by what they allow themselves to believe what is possible. So here's an example where Gordon [Moore], when he made this observation early on, he really gave us permission to believe that it would keep going. And so some of us went off and did some calculations about it and said, 'Yes, it can keep going'. And that then gave other people permission to believe it could keep going. And [after believing it] for the last two or three generations, 'maybe I can believe it for a couple more, even though I can't see how to get there'. . . The wonderful thing about [Moore's Law] is that it is not a static law, it forces everyone to live in a dynamic, evolving world.

So the actual pace of Moore's Law is about expectations, human behavior, and, not least, economics, but has relatively little to do with the cutting edge of technology or with technological limits.  Moore's Law as encapsulated by The Economist is about the scale necessary to stay alive in the semiconductor manufacturing business.  To bring this back to biological technologies, what does Moore's Law teach us about playing with DNA and proteins?  Peeling back the veneer of technological determinism enables us (forces us?) to examine how we got where we are today. 

A Few Meandering Thoughts About Biology

Intel makes chips because customers buy chips.  According to The Economist, a new chip fab now costs north of $6 billion.  Similarly, companies make stuff out of, and using, biology because people buy that stuff.  But nothing in biology, and certainly not a manufacturing plant, costs $6 billion.

Even a blockbuster drug, which could bring revenues in the range of $50-100 billion during its commercial lifetime, costs less than $1 billion to develop.  Scale wins in drug manufacturing because drugs require lots of testing, and require verifiable quality control during manufacturing, which costs serious money.

Scale wins in farming because you need...a farm.  Okay, that one is pretty obvious.  Commodities have low margins, and unless you can hitch your wagon to "eat local" or "organic" labels, you need scale (volume) to compete and survive.

But otherwise, it isn't obvious that there are substantial barriers to participating in the bio-economy.  Recalling that this is a hypothesis rather than an assertion, I'll venture back into biofuels to make more progress here.

Scale wins in the oil business because petroleum costs serious money to extract from the ground, because the costs of transporting that oil are reduced by playing a surface-to-volume game, and because thermodynamics dictates that big refineries are more efficient refineries.  It's all about "steel in the ground", as the oil executives say -- and in the deserts of the Middle East, and in the Strait of Malacca, etc.  But here is something interesting to ponder: oil production may have maxed out at about 90 million barrels a day (see this 2007 article in the FT, "Total chief warns on oil output").  There may be lots of oil in the ground around the world, but our ability to move it to market may be limited.  Last year's report from Bio-era, "The Big Squeeze", observed that since about 2006, the petroleum market has in fact relied on biofuels to supply volumes above the ~90 million barrel per day mark.  This leads to an important consequence for distributed biofuel production that only recently penetrated my thick skull.

Below the 90 million barrel threshold, oil prices fall because supply will generally exceed demand (modulo games played by OPEC, Hugo Chavez, and speculators).  In that environment, biofuels have to compete against the scale of the petroleum markets, and margins on biofuels get squeezed as the price of oil falls.  However, above the 90 million barrel per day threshold, prices start to rise rapidly (perhaps contributing to the recent spike, in addition to the actions of speculators).  In that environment, biofuels are competing not with petroleum, but with other biofuels.  What I mean is that large-scale biofuels operations may have an advantage when oil prices are low because large-scale producers -- particularly those making first-generation biofuels, like corn-based ethanol, that require lots of energy input -- can eke out a bit more margin through surface-to-volume issues and thermodynamics.  But as prices rise, both the energy to make those fuels and the energy to move those fuels to market get more expensive.  When the price of oil is high, smaller-scale producers -- particularly those with lower capital requirements, as might come with direct production of fuels in microbes -- gain an advantage because they can be more flexible and have lower transportation costs (being closer to the consumer).  In this price-volume regime, petroleum production is maxed out and small-scale biofuels producers are competing against other biofuels producers since they are the only source of additional supply (for materials, as well as fuels).

This is getting a bit far from Moore's Law -- the section heading does contain the phrase "meandering thoughts" -- I'll try to bring it back.  Whatever the origin of the trends, biological technologies appear to be the same sort of exponential driver for the economy as are semiconductors.  Chips, software, DNA sequencing and synthesis: all are infrastructure that contribute to increases in productivity and capability further along the value chain in the economy.  The cost of production for chips (especially the capital required for a fab) is rising.  The cost of production for biology is falling (even if that progress is uneven, as I observed in the post about Codon Devices).  It is generally becoming harder to participate in the chip business, and it is generally becoming easier to participate in the biology business.  Paraphrasing Carver Mead, Moore's Law became an organizing principle of an industry, and a driver of our economy, through human behavior rather than through technological predestination.  Biology, too, will only become a truly powerful and influential technology through human choices to develop and deploy that technology.  But access to both design tools and working systems will be much more distributed in biology than in hardware.  It is another matter whether we can learn to use synthetic biological systems to improve the human condition to the extent we have through relying on Moore's Law.

On the Demise of Codon Devices

Nature is carrying a short news piece by Erica Check and Heidi Ledford on the end of Codon Devices, "The Constructive Biology Company".  I am briefly quoted in the discussion of what might have gone wrong.  I would add here that I don't think it means much of anything for the field as a whole.  It was just one company.  Here is last week's initial reporting by Todd Wallack at the Boston Globe.

I've been pondering this a bit more, and the following analogy occurred to me after I was interviewed for the Nature piece.  Codon, as described to me by various people directly involved, was imagined as a full-service engineering firm -- synthetic genes and genomes, design services, the elusive "bio-fab" that would enable one-stop conversion of design information into functional molecules and living systems.  Essentially, it seems to me that the founders wanted to spin up an HP of biology, except that they tried to jump into the fully developed HP of 1980 or 1990 rather than the garage HP of 1939.  Codon was founded with on the order of $50 million, with no actual products ready to go.  HP was founded with ~$500 (albeit 1939 dollars) and immediately started selling a single product, an audio oscillator, for which there was a large and growing market.  HP then grew, along with its customers, organically over decades.  Moreover, the company was started within the context of an already large market for electronics.

The synthetic biology market -- the ecology of companies that produce and consume products and services related to building genes and genomes -- still isn't very big.  A very generous estimate would put that market at $100 million.  This means the revenues for any given firm are (very optimistically) probably no more than a few tens of millions.  (The market around "old style" recombinant DNA is, of course, orders of magnitude larger.)  Labor, rather than reagents and materials, is still likely to be the biggest cost for most companies in the field.  And even when they do produce an organism, or a genetic circuit, with value, companies are likely to try to capture all the value of the learning that went into the design and engineering process. 

This leads to an important question that I am not sure is asked often enough by those who hope to make a living off of emerging biological technologies: Where is the value?  Is it in the design (bits), or in the objects (atoms)?  The answer is a bit complicated.

Given that the maximum possible profit margin on synthetic genes is falling exponentially, it would seem that finding value in those particular atoms is going to get harder and harder.  DNA is cheap, and getting cheaper; the design of genetic circuits (resulting in bits) definitely costs more (in labor, etc.) than obtaining the physical sequence by FedEx. That is the market that Codon leapt into.  If all of the value is in the design process, and in the learning associated with producing a new design, not many companies are going to outsource that value creation to a contractor.  If Codon had a particular design expertise, they could have made a go with that as a business model, as do electronics firms that have niche businesses in power electronics or ASICs.  There are certainly very large firms that design, but do not build, electronics (the new AMD, for example), but they didn't get that way overnight.  They have emerged after a very long (and brutal) process of competition that has resulted in the separation of design and manufacturing.  Intel is the only integrated firm left standing, in part because they set their sights on maintaining scale from day one (see the recent Economist article on the semiconductor industry for a nice summary of where the market is, and where it may be headed). 

In another area of synthetic biology, I can testify with an uncomfortably high degree of expertise that costs in the market for proteins (a very different beast than DNA) are much higher for atoms than for bits.  It is relatively easy for me to design (update: perhaps better phraseology would be "specify the sequence of") a new protein for Biodesic and have Blue Heron synthesize the corresponding gene.  It is rather less easy for me to get the actual protein made at scale by a third party (and it would be even harder to do it myself).  Whereas gene synthesis appears to be a commodity business, contract protein manufacturing is definitely not.  Expression and purification require knowledge (art).  Even if a company has loads of expertise in protein expression, in my experience they will only offer an estimate of the likelihood of success for any given job.  And even if they can make a particular protein, without a fairly large investment of time and money they may not be able to make very much of the protein or ship it at a sufficiently high purity.  Unlike silicon processing and chip manufacturing, it isn't clear that anyone can (yet) be a generalist in protein expression.  Once you get a protein manufacturing process sorted out, the costs quickly fall and the margins are excellent.  Until then: ouch.

So, for DNA, bits are expensive and atoms are cheap.  For proteins, bits are cheap and atoms are initially very expensive.  Who knows how much of this was clear to the founders of Codon several years ago; I have only been able to articulate these ideas myself relatively recently.  It is still very early in the development of synthetic biology as a market, and as a sector of the economy.

"The New Biofactories"

(Update: McKinsey seems to have pulled the whole issue from the web, which is too bad because there was a lot of good stuff in it.  The text of my contribution can be found below.)

I have a short essay in a special edition of the McKinsey Quarterly, What Matters.  My piece is waaaay back at the end of the printed volume, and all the preceding articles are well worth a look.  Other essayists include Steven Chu, Hal Varian, Nicholas Stern, Kim Stanley Robinson, Yochai Benkler, Vinod Khosla, Arianna Huffington, Joseph Nye, and many more.  Good company.

Here is the essay: "The New Biofactories" (PDF), Robert Carlson, What Matters, McKinsey & Company, 2009.

New Microfabrication Methods

As my time at the University of Washington draws to a close, the students and post-docs I have been fortunate to work with are beginning to publish our work together.  I'll soon revise my main web-site, www.synthesis.cc, with links to all the publications related to Microscale Plasma Activated Templating (µPLAT), a new way to fabricate integrated electromechanical circuits in PDMS at room temperature, mostly outside the clean room.  I have spent way too much of my life in the clean room.

The point of all this work is to fabricate capable, yet inexpensive, MEMS and microfluidic devices for handling cells and reagents.  The job is by no means finished, but I'm pretty satisfied with the results so far.  We have managed to create in PDMS: single-cell traps using nickel magnetic relay elements; wires made of graphite, gold, nickel, palladium, silver, and, almost, constantan; electrostatic actuators and valves; thermopneumatic valves and peristaltic pumps; thermocouples made from combinations of patterned metals; plus ways of fabricating easy electrical connections between all these components, and between the circuits and the outside world.

Joseph Chao, now at the Biodesign Institute at ASU,  just sent a link to one of the papers, and I post it here so people can get an idea of what we've been up to for the last several years: "Rapid fabrication of microchannels using microscale plasma activated templating (µPLAT) generated water molds", is now online at Lab on a Chip.

Geoplasma, Plasma Reformation, and Nearly Perfect Recycling

So much for trash.  Plasma conversion is finally coming to the US, according to a story in USA Today.  Why is this worth noting?  Plasma conversion is as close to perfect recycling as we are going to get, at least for the time being.

I looked into this topic extensively a few years ago while working for a consulting firm.  One of our clients was a major auto manufacturer -- to remain nameless -- and I tried to convince the company that their future business model was not exclusively in producing autos, but rather, because of the complexity of introducing new technology and new fuels, in providing "transportation solutions", including hydrogen fuel.  They preferred to keep building petro-powered SUVs.  Perhaps it's time to reconsider that decision.

There's no magic in plasma conversion -- municipal garbage is obviously high in energy.  It is burned rather than stored in many locations.  But plasma reformation is much cleaner than simple incineration.  Trash goes in, and, depending on its composition and energy content, electricity, refined metals, and purified gases come out.  There's no snake oil here; the physics and chemistry work.  The only waste product from reformation consists of silicates, which so far can only be used for building roads and as abrasives for grinding wheels.  The volume of waste, including CO2,  is also much smaller compared to incineration since all the good stuff is reused.

As far as I can tell from the USA Today story, with its limited technical information and references to plasma conversion facilities up and running in Japan, Geoplasma is licensing technology from Startech Environmental Corp for a plant to be built in St. Lucie County, Florida.  Just guessing, though.  Presently, the Geoplasma website consists only of a long video clip that I didn't bother to watch.

(UPDATE - 22 Sept 06 :: Crinu Baila, a Senior VP at Geoplasma, wrote to tell me the following:

Westinghouse Plasma Corporation (WPC) plasma arc technology will be utilized in the Florida project.

WPC’s plasma arc units are reliable, rugged and have amassed close to 500,000 hours of operation in industrial environments.

In addition WPC has coupled the plasma arc units with a robust Plasma Gasification Vessel (PGV) that has the proven capability to process a wide variety of waste materials.

The combined plasma units/PGVs have been used in three commercial applications in Japan.

I don't know if the recycling capabilities I mention in this post are as easy with the WPC units as with the Startech plasma converters, but getting this technology into the market is progress, nonetheless.)

The economics of plasma conversion are compelling.  Getting rid of trash is expensive.  New York City spends somewhere in the neighborhood of $500 million a year exporting its garbage, depending on how you count it up.  The combination of plasma conversion and hydrogen production is especially interesting if you consider its application to distributed hydrogen production for fueling vehicles.  Here are tidbits from a report I wrote some years ago:

Hydrogen fuel cell powered automobiles are expected to enter production by 2010.  While engineering and production issues associated with the new technology will by definition be solved by the date of introduction, hydrogen fuel itself may not be easy to come by, perhaps limiting sales.  Development of a centralized hydrogen production and distribution capability analogous to today’s petroleum infrastructure would no doubt be extraordinarily expensive, but this investment may not be necessary.   Hydrogen locked up in municipal waste streams can be locally harvested in a distributed system for both stationary and automotive fuel cell use.

A Plasma Converter and gas purifier system from Startech Environmental can produce ~43 liters of hydrogen for each kilogram of municipal trash with a net surplus of energy.  New York City exports ~5.5 million kg (12,000 tons) of trash a day at an annual cost approaching $500 million.  Three years' worth of this export cost could be used to purchase sufficient plasma conversion infrastructure to fuel several hundred thousand cars per day from NYC’s trash.  Introduction of this technology could be aided by focusing on fleet operations such as taxicabs, police vehicles, buses, or the military.  Similar opportunities are present in other metropolitan areas, and markets, beyond NYC and will provide a shortcut to providing hydrogen for fuel cell powered automobiles.

Yeah, yeah -- I know, switching over to the hydrogen economy is going to be expensive and take forever.  But not if you pick your battles:

There is a popular argument among detractors of hydrogen as a fuel that the expense of developing infrastructure for the hydrogen economy is prohibitive.  They insist that because hydrogen production and pumping stations will cost many billions of dollars to build, whatever the actual need, the realization of a hydrogen economy is far in the future.  So far in the future, so the argument goes, that we need not plan for such an eventuality at all.

The most significant error in this argument is its root premise, that a hydrogen economy is somehow foreign, unfamiliar, and ultimately too expensive.  Quite the contrary, we do not need to develop a hydrogen economy because we already have one.  The challenge is not to build a hydrogen infrastructure from scratch but to better harvest widely distributed energy and hydrogen that we now treat as waste.

A majority of industrial processes in the current economy work by shuttling hydrogen atoms amongst other molecules.  The most obvious of these processes is the burning of hydrocarbons, either for transportation or for the manufacture of other goods, where energy stored in the hydrocarbons is essentially transferred to the finished article or substance.  As a result, many manufactured products contain high energy chemical bonds, and many of those products are thrown out as whole objects.  The stored energy is thus also thrown away.  This trash is highly distributed and its conversion from valued good to waste is most concentrated near population centers.  Considerable further resources are then expended in transporting the waste elsewhere.

According to the New York City municipal budget, for example, the City spends ~$300 million per year to transport ~12,500 tons (5.7 million kg) of municipal waste a day to distant sites (this is in addition to the cost of local waste collection and transfer).  The City’s businesses generate an equivalent daily amount, which is collected by private companies.  This brings the total daily trash output of NYC to approximately 25,000 tons.  The City spends another ~$20 million a year for local “landfill monitoring and leachate control.”  The Economist estimates the total cost of exporting the City’s trash at closer to half a billion dollars a year.  There is clearly an economic opportunity if alternative disposal means can be found.

Even if you ignore the sale of recycled metals and gases, there is significant opportunity in providing fleet vehicles and hydrogen fuel for those vehicles:

“Plasma Conversion” is a process developed by Startech Environmental, of Wilton CT, in which plasma at 30,000° F is used to degrade waste, chemical weapons, etc.  The plasma provides an excess of electrons that chemically reduce complex compounds to their constituent elements.  In effect, a Plasma Converter runs backward the chemical reactions that produced the material in the waste.

Municipal waste is sufficiently energy dense to produce more chemical and electrical energy than is used to “convert” the waste.  Thus some of the recovered energy can be used to run the Plasma Converter.  More relevant for the purposes of this report, Startech has refined the process, with the aid of a ceramic filter, to produce ~7 ft3 of hydrogen gas at 99.999% purity from each pound of garbage (~43 L of hydrogen for each kg of trash).  The volume of trash produced by NYC public services could thus be processed to provide ~235 million L of hydrogen a day.  Adding the privately collected waste would double this amount.  Processing municipal waste from other metropolitan areas could reasonably be expected to produce hydrogen volumes in proportion to their population.

Startech is currently advertising units that process between 5 and 100 tons per day, which cost between $2.5 million and $12.5 million respectively.  Thus for the cost of 4 years worth of trash export fees, ~$1.3 billion, the infrastructure could be assembled to process all of New York City’s municipal trash into raw materials.  Pure hydrogen could be separated for use in fuel cells, and other materials sold to industry.  Trash is currently trucked from local pick-up points to several waste transfer stations.  Trash is then packed in sealed trucks for export.  The export step could be eliminated by locating plasma converters at waste transfer stations.  The one time infrastructure cost could be paid up front or amortized, and the operational costs would certainly be less than continuing trash export fees and would be offset by sales of hydrogen and raw materials.  A single technician can run a plasma converter, and with so many units in one place automation could enable one person to shepherd several units.

The utility of this recovered hydrogen can be estimated by calculating how many vehicles it can power.  The 2002 Hydrogen Fuel Cell powered Ford Focus test bed runs at ~100km/hr for approximately 400 km on 1244 L of hydrogen.  Assuming slightly larger average vehicles and consequent lower efficiency, a very conservative estimate is that the daily trash output of NYC could fuel more than 300,000 vehicles a day traveling several hundred kilometers each.  For example, all of the City’s taxicabs and Police cruisers could be run on each day’s supply of hydrogen produced from municipal trash.
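
For anyone who wants to check the arithmetic in the excerpt above, here is a short sketch that simply reproduces it from the figures quoted (the ~43 L/kg hydrogen yield, ~5.5 million kg per day of exported municipal trash, a doubling for privately collected waste, and the 1,244 L per ~400 km figure for the fuel cell Focus); the rounding, and the decision to leave out the efficiency penalty for larger vehicles, are mine.

    # Reproduces the hydrogen-from-trash arithmetic quoted in the report excerpts.
    # Input figures are taken from the text above; rounding choices are mine.
    H2_L_PER_KG_TRASH = 43.0            # Startech figure quoted above
    PUBLIC_TRASH_KG_PER_DAY = 5.5e6     # NYC municipal (exported) trash
    TOTAL_TRASH_KG_PER_DAY = 2 * PUBLIC_TRASH_KG_PER_DAY  # add private collection

    public_h2_l = H2_L_PER_KG_TRASH * PUBLIC_TRASH_KG_PER_DAY
    total_h2_l = H2_L_PER_KG_TRASH * TOTAL_TRASH_KG_PER_DAY

    # 2002 hydrogen fuel cell Ford Focus test bed: ~400 km on 1,244 L of hydrogen
    FOCUS_L_PER_FILL = 1244.0
    fills_per_day = total_h2_l / FOCUS_L_PER_FILL

    print(f"public trash alone: ~{public_h2_l / 1e6:.0f} million L of hydrogen per day")
    print(f"public + private:   ~{total_h2_l / 1e6:.0f} million L of hydrogen per day")
    print(f"~{fills_per_day:,.0f} vehicle fills per day, at several hundred km each")

The public trash alone gives roughly the ~235 million L per day quoted above, and the combined public and private streams support a few hundred thousand fills per day before discounting for larger, less efficient vehicles -- consistent with the "more than 300,000 vehicles" estimate.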

If the numbers are so compelling, even just for arbitraging the inefficiency of exporting and caching trash, why isn't this technology popping up all over the U.S.?  Back in 2003 or so, I had a chat with the CEO of Startech, and their biggest problem was investment in existing infrastructure.  That is, waste management companies, cities, and counties in the U.S. all have huge capital investments in garbage gathering, distribution, and disposal, and most of it has yet to be completely amortized.  In order to get into the market, you have to wait for the investment cycle to tick around to the point that equipment and facilities are being replaced.

So, in the end, a battle lost for me.  But only temporarily.  We'll all be mining garbage dumps relatively soon.

Inventing Throughout Life

Technology Review has a short article by Ed Tenner on the productivity of inventors and scientists as they age, "Megascope: Live Long and Tinker".  The article seems to take seriously the myth that mathematicians and physicists do all their best work before the age of 40.  But most of the experimental scientists and engineers I know, including the majority of biologists I've run into, just get better with age.  It takes quite a while to learn all the tricks of the trade and to accumulate enough knowledge to start putting together pieces that don't obviously fit.  The same, I find, is true of inventing.  The more you know, and the more skills you acquire, the more you are able to produce.

This doesn't mean the process of inventing gets any faster, unfortunately...