
S.A. Norwood | 15 April 2025
As with many emerging technologies, Engineering Biology included, rapid developments in Artificial Intelligence have provoked a great deal of discussion about the potential for harm, and about how best to manage it. The dual-use nature of biotechnologies has been discussed before,1 but the ‘uplift’ potential of AI has raised significant additional concerns. Indeed, misuse at the convergence of AI and biotechnology has frequently and prominently been cited as one of the most evocative examples of potential harm.2 This has been recognised in several fora, with scholars from diverse disciplines and backgrounds seeking to evaluate and manage the risk. Governments, too, have drawn attention to it in speeches and strategy documents, and have, in some cases, invested significant capital and time in advancing knowledge and governance options in the area.3 However, there is considerable debate about the nature and scale of the risks presented: what is it that has provoked such concern, and are we prepared?
Lowering the barriers to biological weapons?
“First of all, it’s necessary to clarify what kind of AI it is we’re talking about, not just for the sake of precision, but to move away from the tendency to imbue AI with near-magical qualities.”
Dr Lalitha Sundaram
The most worrying prospect would be for a malicious actor to use AI to aid in the development of biological weapons.4 While this kind of scenario obviously captures the imagination (terrifyingly!), it needs careful unpacking across a number of dimensions, technical and otherwise.
First of all, it’s necessary to clarify what kind of AI it is we’re talking about, not just for the sake of precision, but to move away from the tendency to imbue AI with near-magical qualities. Even for a single type of tool, assessments can vary. On the purely technical level, assessments of Large Language Models, for example, have ranged from raising significant alarm5 to finding their utility roughly comparable to a baseline of unaugmented internet searching.6 Aside from LLMs, there has also been a proliferation of Narrow Biodesign Tools with relevance for Engineering Biology.7 Some are more capable than others, and each has its strengths and limitations, both in terms of utility to Engineering Biology and utility to a would-be bioweapons developer.
Equally, some stages of biological weapons development may be more amenable to “barrier lowering” through AI than others. As previous attempts by state and non-state actors to develop biological weapons have shown, the process is a complex one, and the utility of AI will depend on the stage, the actor(s) involved, their existing capabilities, and their absorptive capacity. Moreover, while many assessments have focused on the ‘design’ stage (and this is where AI could most plausibly have an impact), there is no getting away from the need for iterative testing ‘in the real world’, and that transition to the physical world is a significant pinch point.
Though it has been written about for decades,8 the importance of tacit knowledge in biological weapons production has often been overlooked. Having a jailbroken LLM spit out instructions is unlikely to be sufficient to create a usable biological weapon. How might new and more interactive modes of instruction change this, though? There is a physicality to life science practice that is hard to automate. But we are trying, for very good reasons of standardisation, reproducibility, efficiency and safety. So, thinking beyond the crude picture of a chatbot producing a list of instructions, we need to consider developments in AI-assisted training across the digital-physical barrier, as well as developments in Engineering Biology around standardisation and automation, and how these might erode traditional tacit knowledge barriers.9
What does the research say?
All this said, the impact that AI has had, and will continue to have, on the biosciences is undeniable, and its many facets need teasing out. Several organisations are doing just that, applying their often interdisciplinary lenses to the issue.
Given the often amorphous and confusing nature of what is actually meant by ‘AIxBio risk’, researchers at the Harvard Sussex Program have sought to contextualise this space by turning what they term “AI-Anxieties” into a usable framework for guiding and directing inquiry. They have developed useful classifications in terms of both capabilities (what can AI do?) and challenges (how might AI pose a risk?).10
On the more technical side, a recent RAND report used structured red-teaming to explore how LLMs might enable actors without deep expertise to bridge crucial knowledge gaps. The exercises demonstrated that LLMs did not explicitly generate detailed weaponisation instructions, but that they did provide guidance and context in critical areas such as agent selection, delivery methods, and operational planning. These findings indicate that although AI-driven risks might not be transformative, they could erode traditional barriers of tacit knowledge and absorptive capacity, especially as biotechnology becomes increasingly automated and standardised.6 More recent work at RAND and the Centre for Long-Term Resilience is seeking to develop a ‘risk register’ to monitor AI-powered biological tools.11
Figure: How could AI uplift design and development capabilities for molecular bioweapons? Adapted from the NASEM report “The Age of AI in the Life Sciences: Benefits and Biosecurity Considerations”.7
A March 2025 report by the National Academies of Sciences, Engineering, and Medicine (NASEM), commissioned by the US Department of Defense under President Biden’s Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, delves into three specific areas: the design of biomolecules (including toxins); the modification of known pathogens to increase the potential for harm; and de novo virus design. The report’s view of the technology landscape was that the first scenario was possible using current or near-term AI-enabled tools; overall, AI augmentation was thought to be most useful at the design stage. The modification or creation of pathogens, however, was deemed beyond the capability of current tools, and even the datasets needed to train such models were not known to exist. What the NASEM report also highlights is that, however advanced, “AI-enabled biological tools do not necessarily reduce the bottlenecks and barriers to crossing the digital–physical divide…”.7
Broader context: the conventions
The main international agreement dealing with the weaponisation of the life sciences is the Biological and Toxin Weapons Convention (BTWC). However, given the many applications of Engineering Biology in the production of chemicals, the Chemical Weapons Convention (CWC) also applies, especially when we consider the place of toxins. In fact, one of the earliest papers to ‘raise the alarm’ about the incorporation of AI tools into biotechnology concerned their potential misuse in the development of chemical weapons.12 While there is debate about the real-life utility of such models (plenty of toxins are already well known, and the tool did not address issues such as stability), the fact that the model successfully generated these predictions rightly draws attention to the capabilities afforded by this kind of technology convergence.
Do the BTWC and CWC “apply” to the convergence of AI and the life sciences? Both are clear: they are about purpose, not about specific agents or technologies (this is often referred to as the ‘general purpose criterion’). Thus, the use or not of AI is immaterial to whether the Conventions “apply”: whatever the means, what the BTWC proscribes is for biological weapons to be “develop[ed], produce[d], stockpile[d] or otherwise acquire[d] or retain[ed]”.13 However, the need for the Conventions to incorporate processes and mechanisms to keep pace with emerging technologies has been widely recognised. The CWC has a Technical Secretariat that is examining the impact of AI,14 and the Working Group of the BTWC is considering a science and technology review mechanism.15
On the flip side, AI tools may well assist in the implementation of the Conventions, in surveillance, detection, verification, or attribution. They may also hold promise for other aspects of dealing with biological risk, such as designing decontamination tools or, in ways that intersect with developments in health and infectious disease risk management, developing medical countermeasures.
Open questions
So far we have considered primarily the technical aspects of whether AI could in fact lower barriers to deliberately causing harm, but this all needs to be overlaid onto a better understanding of motivations and incentives: even if all the tools were available, who might want to cause this kind of harm, and why? While there has been work to try and understand this, especially in the context of past state-backed weapons programmes,16 we should also recognise that AI itself might modulate those motivations. We know that AI can be, and already is being, used to shape narratives,17 especially when it comes to health and health security. There is a chance that this capability might be used to influence narratives around the perceived utility or necessity of biological weapons, potentially eroding the norm against them.
Learn more
At CSER, we have organised a set of seminars intended to create a shared understanding of the realistic risks of, and appropriate safeguards for, the rapid evolution of AI capabilities as applied to the life sciences. Our seminars have explored technical questions, national perspectives and evaluations, and options for international governance.
Sign up to the CSER newsletter to hear more about this and our other work on biosecurity.
Join our community mailing list for more regular updates on seminars and publications.
References and Links
1. Biosecurity, Biosafety, and Dual Use: Will Humanity Minimise Potential Harms in the Age of Biotechnology? The Era of Global Risk. NATO Science for Peace and Security Programme (2023)
2. An Overview of Catastrophic AI Risks. Centre for AI Safety (2023); Catastrophic AI Scenarios. Future of Life Institute (2024)
3. Sens. Markey, Budd Announce Legislation to Assess Health Security Risks of AI. Senator Ed Markey (2023); Advanced AI evaluations at AISI: May update. AI Security Institute (2024); UK Biological Security Strategy. UK Government (2023)
4. Ex-Google boss fears for AI 'Bin Laden scenario'. BBC News (2025)
5. Can large language models democratize access to dual-use biotechnology? arXiv (2023)
6. The Operational Risks of AI in Large-Scale Biological Attacks: Results of a Red-Team Study. RAND Corporation (2024)
7. The Age of AI in the Life Sciences: Benefits and Biosecurity Considerations. National Academies of Sciences, Engineering, and Medicine (2025)
8. Barriers to Bioweapons: The Challenges of Expertise and Organization for Weapons Development. Cornell University Press (2014)
9. Tacit knowledge and the biological weapons regime. Science and Public Policy (2013)
10. Impacts of Artificial Intelligence on CBW Prohibition. The Harvard Sussex Program (2024)
11. A new risk index to monitor AI-powered biological tools. RAND Corporation (2025)
12. Dual use of artificial-intelligence-powered drug discovery. Nature Machine Intelligence (2022)
13. Convention on the Prohibition of the Development, Production and Stockpiling of Bacteriological (Biological) and Toxin Weapons and on their Destruction. United Nations General Assembly (1971)
14. Joint Press Release by the OPCW and Morocco’s National Authority for the CWC on Global Conference on AI in CWC Implementation. Organisation for the Prohibition of Chemical Weapons (2024)
15. A return to scientific and technological developments: setting the scene. Bioweapons Prevention Project (2024)
16. Understanding the Threat of Biological Weapons in a World with COVID-19. The Nolan Centre (2022)
17. Countering WMD Disinformation. Global Partnership Against the Spread of Weapons and Materials of Mass Destruction (2024); The Role of AI in The Information War: From Analysis to Shaping Narratives. Dr Dren Gerguri (2024)