We don't need more viruses
For decades, gain-of-function (GoF) research was a quiet, technical term within virology referring to the modification of organisms to better understand biological processes. In recent years, however, the phrase has exploded into global consciousness, sparking controversy, political division and scientific introspection.
At the forefront of this conversation is Professor Simon Wain-Hobson, emeritus professor of virology at the Institut Pasteur, best known for sequencing the first full HIV genome and, more recently, for his outspoken criticism of dangerous gain-of-function (GoF 2.0) research. He talks candidly to Zoe Rutherford about the failures of oversight, the psychology of risky science, and why the research community must draw clear lines.
ZR: How do you define the difference between GoF 1.0 and GoF 2.0? Is it an entirely fatuous concept, this tool being what you make it, or are there useful delineations between contributions to public health, such as in vaccine development or understanding pathogen behaviour and threats to humanity?
SWH: That’s an essential starting point. GoF 1.0 has been around for decades. It’s just shorthand for experiments that help us understand how genes and organisms work. You add a function, see what happens. That’s benign. GoF 2.0, however, is entirely different – it refers to the deliberate engineering of viruses to make them more transmissible, more pathogenic, or both. That crosses a line. I call this dangerous GoF. It became a real concern after work in the mid 2000s and again in 2011, when Fouchier and Kawaoka published research on making avian flu strains transmissible between mammals – ferrets, to be specific. These are not hypothetical risks; they made viruses that don’t exist in nature. That’s not theoretical. It’s creating potential pandemic pathogens. The term may sound technical, but the reality is chilling. It’s why the 2019 US executive order used the term dangerous GoF. I think that’s exactly right. People understand the word dangerous; they don’t need a PhD to grasp the implications.
ZR: Do you think rebranding it as GoF softened its image?
SWH: Absolutely. Before, it was called dual use research of concern. Not exactly a friendly phrase. Then came the rebranding. Gain-of-function sounds almost positive, doesn’t it? As if we’re making something better. But language matters. This wasn’t a scientific necessity – it was a public relations move. It made something controversial sound palatable.
ZR: Is there a difference between the experimental methodologies used in the US and those used in other countries? Is there a growing appreciation of safer methodologies?
SWH: No. Techniques are largely the same. What varies is the oversight and political climate. Fouchier and Kawaoka were working with US funding, and Chinese labs followed with similar experiments. So, this isn’t a geographical problem, it’s a systemic one. The science doesn’t vary dramatically by country. The variable is the political will to regulate it – and that’s a serious issue.
ZR: Has there been any successful GoF 2.0 research that’s helped us prevent a pandemic?
SWH: None. Zero. That’s the big myth. GoF 2.0 has produced theoretical models and a lot of headlines, but no actual breakthroughs in predicting or stopping pandemics. It hasn’t contributed in any meaningful way to emergency preparedness, resilience and response. Scientists will say, “this work might help us understand future viruses”, but what they’re really doing is justifying their grant applications. It’s institutional inertia – and it’s dangerous.
ZR: If it’s not helping, why does it continue?
SWH: Because it’s easy. It’s easily funded, easily published, and great for careers. Labs are like small businesses – they need grants to survive. If you pitch your work as pandemic relevant, you’re more likely to be funded. There’s also a subtle prestige in saying, “I’m working on something that could save humanity”. But it’s lobbying disguised as science.
ZR: Are we now seeing a two-speed ethical system – stringent oversight in some countries, less in others?
SWH: To some extent, yes. But it’s not just about east v west. Even in countries with oversight structures, like the UK or US, enforcement is often weak. Bodies like institutional biosafety committees sometimes judge their colleagues’ work, that of people they know, people whose labs rely on funding. That’s a massive conflict of interest. We also have the Biological and Toxin Weapons Convention, which should play a role, but is essentially gridlocked. Some countries aren’t particularly inclined to follow western bioethics frameworks, so you end up with a fragmented, inconsistent system. Some people call it dual standards. I call it dangerous.
ZR: Do you think there’s any global framework being developed to assess GoF risks versus benefits? Is there growing momentum for safer approaches?
SWH: At the National Institutes of Health (NIH), yes; internationally, no. There’s no consistent mechanism across borders. That’s why I say the burden falls on a handful of countries to take leadership. If the US and European powers adopt strong, enforceable standards, others may eventually follow, but right now, the vacuum is being filled by institutional convenience and scientific ambition. There’s more talk about safer approaches, but that doesn’t mean more action. We still have labs tinkering with pathogens in BSL-3 facilities, which are not fail-safe. Look at Covid: after five years, we still haven’t definitively ruled out a lab origin. That’s not a good sign. Then there’s this flawed belief that lab models can mimic what happens in nature. They can’t. You may show that a cat virus infects human cells in a dish, but that doesn’t imply anything meaningful for public health policy. It’s neither predictive nor preventive. It’s a scientific cul-de-sac.
ZR: What about Trump’s recent US executive order – will that help?
SWH: Actually, that wasn’t driven by Trump himself, but by the new NIH director, who saw the danger and wanted real change. The key is that it’s intended to become law, which is significant. Under Obama, we had a moratorium, but it was reversed with a stroke of the pen. Law, on the other hand, requires a congressional vote to repeal. Unfortunately, because it’s associated with the Trump administration, some progressives might try to reverse it for political reasons. That would be a mistake. Pathogens don’t care who’s in office.
ZR: Are scientists themselves comfortable with oversight?
SWH: Not really. Many find it threatening. There’s this ingrained belief in academic freedom, “don’t tell me what I can or can’t research”. But dangerous GoF isn’t just an academic issue, it’s a public safety matter. We regulate nuclear material. We regulate toxins. Why not this? I’ve also found that scientists dislike transparency when it works against them. They love a good press release to announce a breakthrough, but if you ask about safety lapses or lab leaks, suddenly it’s, “no comment – speak to our media team”. That’s not how trust is built.
ZR: So how do we rebuild trust between scientists and the public?
SWH: Very slowly, and with much humility. We need more scientists who communicate like Isaac Asimov or Richard Feynman, people who can explain not just what they’re doing, but why it matters. Too often, we talk down to people or drown them in jargon. That’s not transparency, that’s evasion. We also need scientists to admit uncertainty and error. Covid proved that information evolves, but too many experts got entrenched. They stopped listening and made pronouncements instead of holding conversations. That created a void, which is where conspiracy theories and misinformation rushed in.
ZR: Do you think the public understands the difference between normal GoF and dangerous GoF?
SWH: If you explain it clearly, yes. GoF 1.0, which is about genetic tweaks to help understand viruses or build vaccines, is fine. That’s science. But GoF 2.0, where you soup up a virus or invent a new one to see what happens, is not fine. Tell the public, “we’re not going to make new pathogens. We’re not going to soup up the ones we already have.” They’ll say, “good.” It’s that simple.
ZR: What do you say to the argument that stopping this kind of research would set science back?
SWH: That’s a scare tactic. Most scientific progress doesn’t come from manufacturing high-risk pathogens. It results from basic research, good data and clear thinking. We didn’t decode the HIV genome by making it more infectious. It was done through methodical sequencing and analysis. Saying we need to build a superbug to stop a superbug is like saying we need to start a fire to understand arson. It’s flawed logic, based on ego more than evidence.
ZR: What’s the psychological drive behind this kind of research? Why make pathogens more dangerous?
SWH: Partly curiosity, partly ego. There’s something about being the one who understands the next big threat that appeals to people. Then there’s the adrenaline of working on something risky. Scientists are human, they get excited. But without guardrails, that excitement becomes recklessness. There’s also a cultural issue. Scientists don’t like being questioned, but if they’re working on something that could affect millions, they must expect scrutiny. That’s not censorship, it’s accountability.
ZR: Is there any instance where GoF 2.0 has helped us prepare for a pandemic?
SWH: None. Zero. It’s a pipe dream that’s been peddled for 30 years. Scientists keep pushing it because it’s easy to fund, easy to publish and sounds dramatic. You say your work might prevent the next pandemic, and boom – funding approved. But it’s a dangerous illusion. Labs need funding to stay alive, and this is an effective sales pitch, but that doesn’t make it real.
ZR: Have you and your colleagues at Biosafety Now faced pushback for being outspoken on this topic?
SWH: Sure. Some of it’s subtle, like being ignored or not invited to panels. Some is direct, but we’re used to it. We don’t do this for popularity; it’s because the stakes are too high to stay silent. We highlight things they’d prefer stayed in the shadows. Scientists love press coverage when it flatters them, but when you ask uncomfortable questions – about lab leaks, or accountability – you get stonewalled. When people call you ‘alarmist’, it usually means you’re touching a nerve.
ZR: What kind of leadership do we need to fix this?
SWH: We need people at the top of agencies like the NIH or the Wellcome Trust to say, “enough. This is the line. Cross it, and you’re out.” We also need leaders to speak plainly. Don’t sugar-coat. Don’t spin. Just say, “we’re prioritising safety. We’re prioritising the public.” That kind of leadership creates clarity, and clarity builds trust. We’ve had too much hedging, too many half measures. It’s time for a decisive, global standard.
ZR: If there’s one takeaway for readers, what would you want it to be?
SWH: Dangerous GoF isn’t a hypothetical. It’s happening now, and if oversight fails, the next pandemic may be human made, not natural. That’s not science fiction. It’s a real, avoidable threat. This isn’t about fear mongering. It’s about responsibility. We can debate the benefits all day, but if the cost is global catastrophe, then we need to rethink our approach. It’s not enough to ask, “can we do this?” We need to ask, “should we?”.
Source: CBRNe World, June issue, at www.cbrneworld.com
Author: Gwyn Winfield
Translator: Vũ Quang Đại.