Asterisk: Can you tell me a little bit about yourself and your lab and the work that you do there?
Kevin: I am an associate professor at the MIT Media Lab, which is a place for people whose work does not fit in any single discipline. At my lab, called the Sculpting Evolution Group, we are interested in advancing biotech safely. We study the evolution of molecular systems over time and ways of applying selective pressure to make them do what we want and keep doing what we want.
I also have a bit of a security mindset. In cybersecurity there’s a saying: any system vulnerable to accidents is helpless against deliberate attack. Wherever it came from, SARS-CoV-2 was an accident. It was either a natural or accidental release, but it was not deliberate, because anything deliberate would be more severe. That suggests that if and when we learn how to build harmful things with pandemic-class capabilities, we’re going to be in trouble. Lots of people are going to be able to cause pandemic-class events, and the rest of us are not going to be able to do much to defend against them.
A: Why are you so concerned about this possibility?
K: COVID-19 was an accident that rolled back global development by a couple of years. This is a virus that is less than 1% lethal. Imagine what would happen if you raised the lethality rate by a factor of 10 or 20 or 50. It’s debatable how quickly natural selection would favor something that is less lethal, but suffice it to say, it would not happen fast enough for humanity’s liking.
Right now we don’t know of any viruses that would cause new pandemics if released. But it’s also true that at least 30,000 people can assemble an influenza virus from scratch. If people identify a new influenza virus that they think can cause a pandemic and share that information with the world, and if that pandemic could kill more than several million people (like COVID has), then you just gave 30,000 people access to an agent that is of nuclear-equivalent lethality.
A: My understanding is that the U.S. Government is currently funding research programs to identify new potential pandemic-level viruses.
K: Unfortunately, yes. The U.S. government thinks we need to learn about these viruses so we can build defenses — in this case vaccines and antivirals. Of course, vaccines are what have gotten us out of COVID, more or less. Certainly they’ve saved a ton of lives. And antivirals like Paxlovid are helping. So people naturally think, that’s the answer, right?
But it’s not. In the first place, learning whether a virus is pandemic capable does not help you develop a vaccine against it in any way, nor does it help create antivirals. Second, knowing about a pandemic-capable virus in advance doesn’t speed up research on vaccines or antivirals. You can’t run a clinical trial in humans on a new virus of unknown lethality, especially one that has never infected a human — and might never. And given that we can design vaccines in a day, you don’t save much time by knowing what the threat is in advance.
The problem is there are around three to four pandemics per century that cause a million or more deaths, just judging from the last ones — 1889, 1918, 1957, 1968 and 2019. There are probably at least 100 times as many pandemic-capable viruses in nature — it’s just that most of them never get exposed to humans, and if they do, they don’t infect another human soon enough to spread. They just get extinguished.
What that means is if you identify one pandemic-capable virus, even if you can perfectly prevent it from spilling over and there’s zero risk of accidents, you’ve prevented 1/100 of a pandemic. But if there’s a 1% chance per year that someone will assemble that virus and release it, then you’ve caused one full pandemic in expectation. In other words, you’ve just killed more than 100 times as many people as you saved.
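Kevin's arithmetic here can be made concrete. A quick sketch using his stated numbers (the 100-to-1 ratio of pandemic-capable viruses to pandemics and the 1% annual release probability are his assumptions; the 100-year horizon is added here for illustration):

```python
# Back-of-the-envelope version of the argument above.
# Assumptions from the interview: ~100 pandemic-capable viruses in nature
# per actual pandemic, and a 1% annual chance that an identified virus is
# assembled and released. The 100-year horizon is illustrative.

prevented = 1 / 100            # fraction of a natural pandemic averted by
                               # perfectly containing one identified virus
annual_release_prob = 0.01     # chance per year someone assembles and releases it
years = 100

expected_releases = annual_release_prob * years  # expected pandemics caused
ratio = expected_releases / prevented

print(f"Pandemics prevented: {prevented:.2f}")
print(f"Pandemics caused in expectation: {expected_releases:.2f}")
print(f"Caused vs. prevented: {ratio:.0f}x")
```

Under these assumptions, identifying and publicizing one such virus prevents 0.01 pandemics but causes one in expectation, which is where the "100 times as many people" figure comes from.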
A: Is 1% your actual best guess of the chance that a newly identified zoonotic virus would be released with current technology?
K: If identified as pandemic capable and from one of the families where virus assembly works, which is most of them, our current estimates range between 0.5% and 3% per year. It’s hard to judge because we know of only one historical example of a person who, if active today, definitely would do it, given access to the knowledge. That’s Seiichi Endo of Aum Shinrikyo, who was a graduate-level virologist out of Kyoto University. Aum wanted to obtain Ebola for use against civilians. Any graduate-trained virologist at Kyoto University today could assemble pretty much any of these viruses. Endo would have had the skills — and the cult's budget certainly would have provided the resources. But honestly, it’s so cheap these days that pretty much anyone with the relevant skill set makes enough money in their personal salary to afford the relevant reagents.
In addition to identifying pandemic-capable viruses, the other form of dangerous research is so-called gain of function, which is probably better termed virus transmissibility enhancement research. This is where scientists take viruses that are bad at transmitting human to human, but really good at killing you if they infect you, then try to engineer and evolve them to be more transmissible.
A: My understanding is that we cannot point to a lot of concrete benefits to this kind of research. Is that correct?
K: I cannot think of a single benefit from any kind of virus enhancement research or pandemic virus ID research. It’s not important for developing vaccines and it has not been relevant to developing any antivirals. Nor has it focused attention or effort on the development of particularly effective countermeasures. Unless you have 100 million vaccine doses ready to go, something that spreads rapidly through the air traffic network is going to be too fast for us to get control of.
A: That leads me to another question I had: a lot of your threat model seems to be about deliberate multisite release — someone releasing a virus in a bunch of airports, right?
K: That’s right. You could argue that that idea itself is an info hazard, but I struggle to believe that anyone capable of correctly assembling a virus would not think about releasing it in some place like an airport, presumably more than one airport.
I’m cynical enough to think that there are people like Seiichi Endo out there and that they’re not just restricted to apocalyptic cultists. Certainly there are people like the Unabomber, who wanted to bring down the industrial system, which necessarily involves billions of people dying. This is someone who was good enough to become a mathematics professor at Berkeley. Would a modern day Ted Kaczynski study virology to learn how to manufacture a pandemic himself? Maybe.
A: This seems to require a high level of logistical competence on the part of terrorists. If this is so feasible with current tech, why hasn’t it happened yet? And why haven’t we seen more than one credible attempt?
K: The reason we haven’t seen any credible attempts with pandemic-capable viruses is that we haven’t had any pandemic-capable viruses to use. We still don’t know of any. The logistics of a “normal biological” attack — think of anthrax, botulism, tularemia — are difficult because you need to make a lot of it, purify it and disperse it over a large area. But that means it’s more like a chemical weapon — it doesn’t take advantage of biology’s strength, which is self-replication.
So why hasn’t it happened? The capability, thankfully, isn’t there yet, but at some point it will be. And that’s the hardest part: everything we do to try to keep this knowledge locked away is a matter of buying time. All we can do is delay. There are too many advances happening in too many different areas of biology to lock away that capability indefinitely. We’re going to have to deal with a world where there are instructions for making pandemic agents that are accessible to researchers who can acquire the necessary DNA comprising the genome of that agent.
A: Let’s talk about delay, then. How do you think we can delay scientists from discovering these pandemic-capable agents?
K: There are two ways that are the most promising.
Number one: we can find a way to make people liable for causing catastrophe. We can set the bar very high, say, something like 10 million deaths worldwide — direct or indirect — caused by some event for which you were clearly responsible. For example, if a scientist conducts research that is then used as a blueprint by somebody else, that would certainly qualify. Accidental releases would qualify too. Then you combine that with some sort of requirement for insurance, or even require general liability insurance to cover this. If institutions had to have insurance that factored in the potential negative externality cost of doing research on, say, viruses that could cause pandemics, their insurance premiums would be way higher than they are now. Then that means that governments, if they wanted to fund this kind of research, would have to throw a lot more money at it.
A: So one approach is to make dangerous research more expensive. What’s the second?
K: The other form is more radical. The international community has agreed that nuclear weapons must never fall into the hands of non-state actors or terrorist groups. Pandemic viruses can kill more people than any nuclear weapon. Therefore, the same logic demands that we keep them out of the hands of terrorists.
If anyone credibly identifies a pandemic-capable agent, then they just handed it to tens of thousands of people. That’s far worse than any degree of nuclear proliferation. Therefore, we can’t allow that. We could enact a pandemic test ban treaty that specifically bans the laboratory experiments required to increase our confidence a given virus could cause a pandemic.
A: You use this language of nuclear risk a lot — a test ban treaty, nuclear equivalent threats. I’m curious what you think about the extent that the lessons we’ve learned from nuclear threats are relevant to pandemic risk.
K: So here’s the really funny thing. The Asilomar Conference on Recombinant DNA was held primarily for two reasons. Number one, the general public was afraid that recombinant DNA would lead to the next atom bomb. Number two, scientists could not be certain that recombinant DNA wouldn’t create a fitness advantage that would allow an engineered organism to spread in the wild and cause harm, particularly if applied to viruses that could cause pandemics. That caused molecular biologists to declare a moratorium on their own field until Asilomar, at which they concluded that, as best we can tell, we don’t know how to create something that is fitness positive in the wild.
A: It’s an interesting contrast.
K: That has held ever since then. But 30 years later, you have the editor in chief of Science essentially saying that the only way they weren’t going to publish the genome of the 1918 influenza, which killed 50 million people, was if the federal government classified it.
A: I’m thinking also of the 2014 gain-of-function moratorium and all of the pushback against that. It seems to me, from the outside, that there has been something of a culture shift in biology since the ’70s. I’m wondering if you have thoughts on what caused that and what tools are available to help us shift back to a more security-conscious place.
K: Honestly, I think it’s going to be too late. I don’t think the norms can change quickly enough. Even if they could, there are too many advances, and it’s not often immediately clear how an advance can be misused. It’s hard to turn down plausible new ways of saving people from cancer, heart disease or aging just because there is a chance that it might lead to another way to make pandemic class agents.
Historically, it’s been difficult to say, “We need to change the rules now,” because in the past science was always net positive. But now the risk of catastrophe is so high that we just can’t afford to keep playing in the sandbox. Still, saying, “It’s too dangerous and we need to stop” — that’s a hard sell, especially to scientists, for whom the primary driving trait is curiosity. But there is a subset of threats where it does not matter how much you learn about them; you cannot counter them. And, unfortunately, pandemic-class agents appear to be in this category.
Learning more about the details of how these things work on a molecular level might well help us develop vaccines and antivirals. But vaccines and antivirals cannot help us contain a deliberately released pandemic. It doesn’t matter if you can invent a perfect vaccine that is super easy to make. You cannot manufacture and distribute faster than the pandemic is going to spread. There is just no way that biotech can help defend against catastrophic, deliberate pandemics, other than in diagnostics, figuring out where it is in order to try to tamp it down. Early warning is all you can do. What that means is that fighting pandemics and preparing for future pandemics and ensuring that that kind of event is something that we can reliably defend against is a job for physical scientists and engineers. It’s a job for protective equipment. It’s a job for germicidal light. It’s a job for cryptographic methods of telling people how much risk they’re at based on their connection network.
Right now most scientists are really not going to notice if the thing that they’re working on happens to provide the key that will allow individuals to murder millions. USAID’s DEEP VZN program had never considered the possibility that the viruses that they discover and post on their rank-ordered list by threat level would be misused.
A: On that point — we’ve talked a lot about delaying pandemics, but we haven’t spoken about Secure DNA yet.
K: The basic idea of Secure DNA is this: the reagents required to assemble a virus are commonly available and cheap. There’s no possible way of controlling them because they’re required for basically all biomedical research, with one exception: in order to make a virus, you need the DNA encoding its genome. If we can prevent people from getting the DNA corresponding to particularly nasty viruses, then we can at least ensure that the risk from non-state actors is pretty minimal.
Now, this has been recognized for some time. A lot of folks rang the alarm bell on this way back in 2007. The leading companies then took it really seriously. It’s really one of those shining examples of industry doing the right thing. The five leading gene synthesis providers at the time came together and decided they would screen orders for hazards and screen customers to make sure that they’re legitimate people doing legitimate research. They formed what’s called the International Gene Synthesis Consortium, and they claim that they screen about 80% of global synthetic DNA. They do it even though it costs them significant amounts of money to do that screening — as it requires expert biologists to take a look at all the false alarms that the screening algorithms throw up because a lot of biology is very similar to other biology.
The problem is that 80% is not 100%. So in terms of its effectiveness at actually preventing access to hazardous DNA, it definitely leaves something to be desired.
So we thought there had to be a better way to do screening than similarity search. How about we figure out the signature of a hazard in terms of exact sequences, calculate functional variants (that is, other sequences that could substitute for that particular signature), compare all of those to everything else ever sequenced, and throw out the sequences that match something unrelated to a hazard? Then all we need to do is check incoming orders for whether they match any of the fragments that define all of the things that we think are hazardous. That’s way more computationally efficient and it doesn’t raise these false alarm problems. That way we can screen orders without knowing what’s in them, and we can also screen for hazards without knowing what they are.
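A toy sketch of that exact-fragment idea in Python. Everything here is invented for illustration (the window length, the sequences, the single hash), and the real SecureDNA system is far more sophisticated, with functional variants, decoys and cryptographic protocols. But it shows the shape of the idea: orders are checked against hashed hazard fragments, so neither the order contents nor the hazard list needs to be exposed in plain form.

```python
import hashlib

WINDOW = 42  # illustrative fragment length, not the real system's parameter


def fingerprint(seq: str) -> str:
    """One-way hash of a DNA fragment, so the database never stores raw hazard sequences."""
    return hashlib.sha256(seq.encode()).hexdigest()


def build_hazard_db(hazard_seqs):
    """Hash every window of every hazard sequence.

    A real system would also hash functional variants and mix in decoys.
    """
    db = set()
    for seq in hazard_seqs:
        for i in range(len(seq) - WINDOW + 1):
            db.add(fingerprint(seq[i:i + WINDOW]))
    return db


def screen_order(order: str, db) -> bool:
    """Return True if any window of the order matches a hashed hazard fragment."""
    return any(
        fingerprint(order[i:i + WINDOW]) in db
        for i in range(len(order) - WINDOW + 1)
    )
```

Exact matching against hashed windows is also why false alarms largely disappear: a benign gene that is merely similar to a hazard will not hash to the same value.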
That means that in principle, humanity could crowdsource threat identification. Instead of warning the world about a new threat, scientists who are very concerned about a particular way that biology could be used to cause harm could contact a curator of the secure DNA system and say, “I’m really worried about this.” If the curator agrees, they can add it to the database along with a suitable number of decoys. Then synthesizers around the world would refuse to make that thing unless the ordering scientist had permission from their biosafety committee.
A: And nobody ever has the full list of which viruses are dangerous.
K: And no one learns other than the person who came up with the threat and the curator.
A: Let’s say we’ve delayed as long as possible and something gets through. Our best bet is to be able to detect it early, which I know is something else that you’re working on.
K: Suppose someone does find something nasty. Or suppose there’s not something publicly known — some state biological weapons program that comes up with something nasty. What if it’s like HIV? It’s not obvious that it’s spreading. You’re not necessarily going to see things in the clinic any time soon. So how could we have detected something like HIV? Well, we know that it spread worldwide through the air traffic network. So what you want to do is monitor the air traffic network. The problem is you don’t necessarily know what that hazard is going to look like.
So you have to look for some trait that is universal to threats. And when it comes to biology, all serious biological threats must be able to spread on their own, typically in an exponential growth pattern. So can you look for the pattern of exponential growth? The answer here is yes, you can, if you sequence all the nucleic acids that are present, and then look for unique sequence fragments that are newly appearing and increasing in frequency, ideally across multiple monitoring sites. Then you can pull out those reads and say, “What is that? Does it look like it’s a threat? Do we need to take action?” This will provide us with reliable early warning of any biological threat that’s spreading human to human.
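The statistical core of that idea is simple enough to sketch. This is a toy illustration, not how a real observatory would work (the growth threshold and the data are invented, and real read counts are far noisier), but it shows how one could flag fragments whose counts rise exponentially across sampling days:

```python
import math


def exponential_growth_score(counts):
    """Fit log(count) vs. day by least squares; return the slope.

    counts: read counts of one unique fragment on consecutive sampling days.
    A positive slope means the fragment's abundance is growing exponentially.
    """
    days = range(len(counts))
    logs = [math.log(c) for c in counts]
    n = len(counts)
    mean_x = sum(days) / n
    mean_y = sum(logs) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(days, logs))
    var = sum((x - mean_x) ** 2 for x in days)
    return cov / var


def flag_growing_fragments(fragment_counts, min_rate=0.5):
    """Return fragments whose estimated daily growth rate exceeds min_rate.

    min_rate is an arbitrary illustrative threshold (0.69 means doubling daily).
    """
    return [
        frag for frag, counts in fragment_counts.items()
        if min(counts) > 0 and exponential_growth_score(counts) > min_rate
    ]
```

A fragment doubling every day has a log-slope of about 0.69 and gets flagged, while stable background flora sits near zero, regardless of what the pathogen actually is.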
You can extend that approach to sequencing rivers. All the DNA in a watershed washes down into a river. That would allow you to detect things like gene drives in the environment as well. So the combination of air traffic network, untargeted metagenomic sequencing and environmental untargeted metagenomic sequencing would provide us with reliable early warning of anything threatening. It does not matter what it is. It does not matter if it’s an extraordinarily competent adversary. It doesn’t matter if it’s a superhuman adversary. We will still be able to see it.
A: All right, so we’ve detected it. And then what do we do then? How do we protect ourselves?
K: You need to figure out how far it’s spread, because exponential growth detection is not as sensitive as looking for a particular thing. But the CDC, for example, has finally gotten on the ball enough to have this wastewater-monitoring network for COVID strains across most American cities of any decent size. Many other nations have similar sorts of networks. The next step, if we see something in the observatory, is to alert bodies like the DOD and CDC. The CDC can then tell all of its monitoring sites, “Look for this new thing. Here are some primers that you can use to amplify it and detect it.” Then you figure out where it is in every town above, I don’t know, 100,000 people. Then you drill down in the town where it’s present, develop diagnostics and figure out who has it — assuming it’s in people rather than something in the environment. Then you need to limit spread using standard anti-pandemic containment measures. Once we know the sequence of the hazard, which is what the observatory tells us, and we can design versions of these diagnostics that can sense it, we need the manufacturing capacity to scale those up really fast. In the meantime, we might need to do a lockdown in the cities that have it.
Your next question: what do we do to stop it? I’m speaking here from the perspective of an agent that could conceivably cause civilizational collapse. Suppose COVID had a 90% mortality rate. I can imagine people refusing to go out. That’s good for curtailing the spread of the pathogen, but people need food and water and power at a minimum. We probably need law enforcement too — some kind of order.
Society could still function without health care in an extreme emergency. Many people would die without the health care system, yes, but we can do without it. But the people who are responsible for producing and distributing food, water and power absolutely must be willing to keep doing their jobs. That means that we have to give them good enough protective equipment.
So we need 30 million suits of protective equipment that require zero training, that can be delivered to all of those people within days, and that will reliably keep the wearer from getting infected with anything we think is nasty enough to warrant this kind of response.
A: When you’re imagining that next-generation totally reliable PPE, what does that look like?
K: There are a couple of ways of doing it. The simplest version is a headpiece that ideally has clear plastic all the way across the front so you can see the face, covers the back of the head, and has some sort of clasp around the neck. It doesn’t need to be very tight because you’re creating positive pressure by pumping air through a HEPA filter into the inside. We can probably improve it by adding, say, LEDs that emit ultraviolet light to help sterilize the air going through. It needs to be comfortable. Ideally it needs to be stylish — you want as many people to be willing to wear it as possible, certainly in the early days. And it needs to be possible to take it off without self-contaminating and then infecting yourself. There also needs to be some way of sterilizing the equipment so that you can wear it again the next day — germicidal light is our best bet.
And that is our other best defense. Low-wavelength light between 200 and 230 nanometers is germicidal. It destroys viruses and bacteria, but it doesn’t appear to hurt multicellular organisms because it’s absorbed by proteins. Preliminary studies suggest even high exposures to this kind of light are safe. If we were to install these low-wavelength lights indoors, continuously and at a background level, under the current safety guidelines, it would reduce the amount of aerosolized pathogen in the air by 99% inside of five minutes. It could basically eliminate most aerosol- and contact-based transmission. What it wouldn’t do is intercept the aerosols and respiratory droplets passing directly between people at close range.
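For context, the 99%-in-five-minutes figure can be converted into the units ventilation engineers use. This back-of-the-envelope conversion is added here for illustration, not taken from the interview, and assumes simple first-order decay:

```python
import math

reduction = 0.99   # 99% of aerosolized pathogen inactivated...
minutes = 5.0      # ...within five minutes, per the figure above

# First-order decay: N(t) = N0 * exp(-k * t), so k = -ln(1 - reduction) / t
k_per_min = -math.log(1 - reduction) / minutes
equivalent_ach = k_per_min * 60  # "equivalent air changes per hour" (eACH)

print(f"Decay rate: {k_per_min:.2f} per minute")
print(f"Roughly {equivalent_ach:.0f} equivalent air changes per hour")
# For comparison, a well-ventilated office typically manages only a few ACH.
```

That works out to more than 50 equivalent air changes per hour, roughly an order of magnitude beyond what mechanical ventilation normally delivers.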
We also don’t have generation mechanisms that could flick on and give us a higher dose. But if we can make LEDs that can do this, we could listen for two different voices in a room. When they’re talking, the light switches to higher intensity. That could, in theory, inactivate the viruses before they move between two people in close conversation. We’re not sure yet that this works, and we need to run comprehensive safety studies.
But it’s incredibly promising because anything sufficient to prevent a serious future pandemic could probably also prevent the vast majority of the pathogens that infect us day-to-day. If we can actually harden our spaces to make them immune to transmission of pandemic viruses, then we’ve also just eliminated virtually all infectious disease. U.S. employers lose $300 billion a year to lost productivity from illness, specifically from infectious agents. That’s $300 billion a year that could be saved.
Bluntly, we’re not going to spend a lot of money on pandemic preparedness. No country in the world has. Maybe that will change if there’s a deliberate attack first; people psychologically respond more strongly to attacks from other humans than to natural catastrophes. I think the U.S. government has been unusually incompetent by failing to invest in pandemic preparedness, but that’s the state of affairs for basically every nation in the world, with very few exceptions. But if we can address annual ongoing economic losses from standard infectious agents, that could convince people to install these things everywhere. Then we would be ready for the next pandemic.
A: Since you mentioned the US government, I’m curious what you see as the biggest obstacles to implementing any of the ideas you’ve proposed. I’m also interested in how the response to the COVID pandemic has changed your views on what a response would realistically look like.
K: The COVID pandemic has shown how difficult pandemic response will be if people don’t believe there is a real threat. Imagine that the pathogen was something like HIV, which can circulate widely before anyone becomes symptomatic. Experts tell everyone about this new virus spreading across the globe that is like HIV and needs to be stopped. But no one has gotten sick yet. I think a lot of people would decline to believe the experts in that scenario.
The other lesson is that American institutions flat-out failed. I don’t mean politically. I mean the CDC and the FDA themselves have arguably made the situation worse.
A: Just to clarify, this is about speed to approve tests and vaccines?
K: Yes. The CDC and the FDA ensured that tests developed in many different universities simultaneously could not be used. There was the mess over mask guidance. The vaccines could have been approved faster if we had run challenge trials. I’m not confident in that assessment, but there is a significant probability that more Americans would be alive today if we had suspended the CDC and the FDA at the onset of the pandemic. On the whole, I think the FDA does a reasonable job of balancing benefits and risks for standard things, but in an emergency situation like a pandemic when you have to move fast — because every day you delay many thousands of people are dying — you just can’t afford to have the same people governing the response. I don’t think there’s a human psyche on the planet that could manage that rapid flip. I would really like to see a separate system where power is formally transferred once an emergency is declared to people whose job it is to wait around and plan for emergencies.
A: And on that cheerful note, is there anything else you’d like to say?
K: I think the overall message has to be one of optimism. We still don’t know of any pandemic-capable agents. And it looks a lot like we can build technologies and launch them in plausible ways that don’t necessarily require governments to respond or governments to take action using taxpayer dollars. It’s possible that the bulk of the problem could be solved philanthropically, at least if you can get a few tens of billions of dollars. That’s never been done before but it might be feasible in the wake of COVID, and if the tech can be proven to work.
We can build a world in which we don’t have to fear the catastrophic misuse of biotechnology. And we have a road map for doing it. There are probably other technologies I’ve missed, or advances still to come. But we actually know that we have a problem and there is a clear and concrete set of definable potential solutions to that problem. There are multiple things that could solve the problem. Even if some of them don’t work out, we’ll still be OK. That’s tremendously encouraging.