Here are my written comments for the recent NASEM workshop “Artificial Intelligence and Automated Laboratories for Biotechnology: Leveraging Opportunities and Mitigating Risks”, convened in April 2024 at the request of the Congressionally chartered National Security Commission on Emerging Biotechnology (NSCEB).
The document is composed of two parts: 1) remarks delivered during the Workshop in response to prompts from NASEM and the NSCEB, and 2) remarks prepared in response to comments arising during the Workshop.
These comments extend and document my thoughts on the reemergent hallucination that restricting access to DNA synthesis will improve security, and that such regulation will do anything other than create perverse incentives that produce insecurity. DNA synthesis, and biotechnology more broadly, are examples of a particular kind of distributed and democratized technology. In large markets served by distributed and accessible production technologies, restrictions on access to those markets and technologies incentivize piracy and create insecurity. There are no data suggesting that regulation of such technologies improves security, and here I document numerous examples of counterproductive regulation, including the perverse incentives already created by the 2010 DNA Synthesis Screening Guidelines.
Let’s not repeat this mistake.
Here are a few excerpts:
Biology is a General Purpose Technology. I didn't hear anyone at this meeting use that phrase, but all of our discussions about what we might manufacture using biology, and the range of applications, make clear that we are talking about just such a thing. The Wikipedia entry on GPTs has a pretty good definition: “General-purpose technologies (GPTs) are technologies that can affect an entire economy (usually at a national or global level). GPTs have the potential to drastically alter societies through their impact on pre-existing economic and social structures.” This definitely describes biology. We are already seeing significant economic impacts from biotechnology in the U.S., and we are only just getting started.
My latest estimate is that biotechnology contributed at least $550B to the U.S. economy in 2021, a total that has steadily grown since 1980 at about 10% annually, much faster than the rest of the economy. Moreover, participants in this workshop outlined a future in which various other technologies—hardware, software, and automation, each of which is also recognized as a General Purpose Technology, and each of which contributes significantly to the economy—will be used to enhance our ability to design and manufacture pathways and organisms that will then themselves be used to manufacture other objects.
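As a rough check on that compounding (my back-of-the-envelope arithmetic, not a figure presented at the Workshop): growth at about 10% annually over the 41 years from 1980 to 2021 implies a multiplier of roughly 1.1^41 ≈ 50, so the $550B figure for 2021 corresponds to a contribution on the order of $11B in 1980, in comparable dollars.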
The U.S. invests in many fields with the recognition that they inform the development of General Purpose Technologies; we expect that photolithography, or control theory, or indeed machine learning, will each have broad impact across the entire economy and social fabric, and so they have. However, in the U.S., investment in biology has been scattershot and application specific, and its output has been poorly monitored. I do have some hope that the recent focus on the bioeconomy, and the creation of various Congressional and Executive Branch bodies directed to study and secure the bioeconomy, will help. Yet I am on my third White House trying to get the economic impact of biotechnology measured as well as we measure virtually everything else in our economy, and so far the conversation is still stuck on how hard this would be, and on deciding how we might even go about it.
If we in the U.S. were the only ones playing this game, with no outside pressure, perhaps we could take our time and continue fiddling about as we have for the last forty or fifty years. But the global context today is one of multiple stresses from many sources. We must have better biological engineering and manufacturing in order to deal with threats to, and from, nature, whether these are zoonotic pathogens, invasive species, or ecosystems in need of resuscitation, or even rebooting. We face the real threat of engineered organisms or toxins used as weapons by human adversaries. And some of our competitors, countries with a very different perspective than ours on how the state and political parties should interact with the populace, have made very clear that they intend to use biology as a significant, and perhaps the most important, tool in their efforts to dominate the global economy and the politics of the 21st century. So if we want to compete, we need to do better.
…
In summary, before implementing restrictions on access to DNA synthesis, or lab automation, or machine learning, we must ask what perverse incentives we will create for adaptation and innovation to escape those restrictions. And we must evaluate how those perverse incentives may increase risks.
The call to action here is not to do nothing, but rather to be thoughtful about proposed regulation and to consider carefully the implications of taking action. I am concerned that we all too frequently embrace the hypothetical security and safety improvements promised by regulation or proscription without considering that we might recapitulate the very real, historically documented costs of regulation and proscription. Moreover, given the overwhelming historical evidence, those proposing and promoting regulation should explain how this time it will be different, how this time regulation will improve security rather than create insecurity.
Here I will throw down the nitrile gauntlet: would-be regulators frequently get their thinking backwards on regulatory policy. More than once I have heard the proposition “if you don't propose an alternative, we will regulate this”. But, given prior experience, it is the regulators who must explain how their actions will improve the world, and will increase security, rather than achieve the opposite.1 Put very plainly, it is the regulators' responsibility not to implement policies that make things worse.
1 In conversations in Washington, DC, I also frequently hear “But Rob, we must do something”. To which I respond: must we? What if every action we contemplate has a greater chance of worsening security than improving it? Dissatisfaction with the status quo is a poor rationale for taking actions that are reasonably expected to be counterproductive. Engaging in security theater that obscures a problem for which we have yet to identify a path forward is no security at all.