OpenAI’s next artificial intelligence models may carry a higher risk than its current ones of being misused to help create biological weapons. The company acknowledges that an AI model powerful enough to drive beneficial scientific or medical discoveries is also powerful enough to be turned to harm.
“The same underlying capabilities driving progress, such as reasoning over biological data, predicting chemical reactions, or guiding lab experiments, could also potentially be misused to help people with minimal expertise to recreate biological threats or assist highly skilled actors in creating bioweapons,” OpenAI writes in a new blog post.
OpenAI expects its upcoming AI models to reach the “high” level of capability in biology as measured by its own Preparedness Framework, meaning a model at that level could provide “meaningful assistance to novice actors with basic relevant training, enabling them to create biological or chemical threats.”
It seems notable that OpenAI’s post about bioweapons was published one day after the company accepted a $200 million contract with the US Department of Defense.
How OpenAI is working to mitigate these risks
OpenAI says it will not release an AI model until it is satisfied that the risks have been mitigated. To that end, it is working with biosecurity experts, academic researchers, red teamers experienced in probing AI vulnerabilities, and government entities, including the US Center for AI Standards and Innovation and the UK AI Security Institute, to shape and execute the testing process and prepare mitigations.
Such mitigations include:
- Training models not to respond, or to respond only safely, to harmful or dangerous prompts.
- Deploying detect-and-block systems that flag suspicious bio-related activity.
- Using AI systems and human reviewers to enforce usage policies, suspend violating accounts, and involve law enforcement when appropriate.
- Applying security controls, such as access control and infrastructure hardening.
It’s not just OpenAI’s models that are raising biosecurity risks
Concern about AI models and bioweapons isn’t new. In May 2024, for instance, 26 nations agreed to collaborate on developing risk thresholds for AI systems that could be used to create biological weapons.
In February 2025, this risk was highlighted in the UK’s International AI Safety Report.
In April 2025, researchers tested leading AI models against the Virology Capabilities Test, a benchmark for expert-level knowledge in virology and lab protocols. They found that models like OpenAI’s GPT-4o outperformed most human virologists, raising concerns about bioweapon risks.
In May 2025, Anthropic confirmed it had put security measures in place for Claude Opus 4 to prevent the model from being used to build biological and nuclear weapons, given its higher risk. Still, just a month before the model’s release, Anthropic’s CEO admitted that, because we don’t yet truly understand how AI works, we can’t confidently design systems that reliably prevent this kind of harmful behavior.
This week, a group of AI experts published a report stating that evidence linking AI models to biological weapons risks has grown notably since March, as developers have reported “capability jumps” in these areas.
Legislation could lower this AI risk, but it’s proving hard to pass
AI experts are largely in agreement that a strong, coordinated response is essential to protect the public from the risks of people using AI to develop biological weapons. In February, former Google CEO Eric Schmidt warned that rogue states like North Korea, Iran, and Russia could seize this capability if such a response is not in place.
The following month, Anthropic sent a letter to the White House Office of Science and Technology Policy urging immediate action on AI security, after its own testing revealed alarming improvements in Claude 3.7 Sonnet’s ability to assist with aspects of bioweapons development.
Unfortunately, passing AI legislation has proven difficult thus far, largely because of disagreements between safety advocates on one side and tech companies and some politicians on the other, who worry that guardrails could hinder innovation and limit the economic benefits AI brings.
California’s AI regulation bill SB-1047, which aimed to prevent AI models from causing large-scale harm to humanity, such as through bioweapons, would have been the strongest generative AI regulation in the US. However, Governor Gavin Newsom vetoed the bill in September 2024, arguing it was too rigid.
Now, a Republican-backed budget bill making its way through Congress would bar US states and localities from regulating AI for the next decade; its sponsors argue that a unified national framework would be better than a fragmented patchwork of state laws. Critics counter that eliminating state compliance requirements would leave harmful AI unaccountable and would mainly attract companies looking to avoid regulatory delays.
Read more about why experts say AI legislation is falling behind the pace of technological advances in our analysis of Governor Newsom’s veto of California’s AI bill.