      OpenAI Warns Its Next AI Models May Help Bad Actors Create Bioweapons

      Written by Fiona Jackson
      Published June 20, 2025


        OpenAI’s next artificial intelligence models might carry a higher risk of being used to help create biological weapons. The company acknowledges that if an AI model is powerful enough to make beneficial scientific or medical discoveries, it’s also capable of being used for harm.

        “The same underlying capabilities driving progress, such as reasoning over biological data, predicting chemical reactions, or guiding lab experiments, could also potentially be misused to help people with minimal expertise to recreate biological threats or assist highly skilled actors in creating bioweapons,” OpenAI writes in a new blog post.

        OpenAI expects its upcoming AI models to reach the “high” level of capability in biology as measured by its own Preparedness Framework. This means such a model could provide “meaningful assistance to novice actors with basic relevant training, enabling them to create biological or chemical threats.”

        Notably, OpenAI’s post about bioweapons was published just one day after the company accepted a $200 million contract with the US Department of Defense.

        How OpenAI is working to mitigate these risks

        Nevertheless, OpenAI says it will not release an AI model until it is satisfied that the risks have been mitigated. To that end, it is working with biosecurity experts, academic researchers, red teamers who test for AI vulnerabilities, and government entities, including the US Center for AI Standards and Innovation and the UK AI Security Institute, to shape and execute the testing process and prepare mitigations.

        Such mitigations include:

        • Training the models not to respond, or to respond only safely, to harmful or dangerous prompts.
        • Deploying detect-and-block systems that flag suspicious bio-related activity (a minimal sketch of this pattern follows the list).
        • Using AI systems and human reviewers to enforce usage policies, suspend violating accounts, and involve law enforcement when appropriate.
        • Implementing security controls, such as access control and infrastructure hardening.
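
        To make the detect-and-block idea concrete, the sketch below shows the general shape of such a screening layer: score an incoming prompt for bio-risk, refuse it above a threshold, and record the decision. OpenAI has not published its implementation, so every name, phrase list, and threshold here is a hypothetical placeholder, and the simple keyword scorer stands in for what would in practice be a trained classifier.

        from dataclasses import dataclass

        # Hypothetical stand-ins; a real system would use a trained
        # classifier, not a keyword list.
        BIO_RISK_PHRASES = {
            "enhance transmissibility",
            "synthesize a pathogen",
            "weaponize anthrax",
        }

        @dataclass
        class ScreeningResult:
            allowed: bool
            reason: str

        def score_bio_risk(prompt: str) -> float:
            """Return a bio-risk score in [0, 1] (classifier stand-in)."""
            text = prompt.lower()
            hits = sum(phrase in text for phrase in BIO_RISK_PHRASES)
            return min(1.0, hits / 2)

        def screen_prompt(prompt: str, block_threshold: float = 0.5) -> ScreeningResult:
            """Refuse high-risk prompts before they ever reach the model."""
            risk = score_bio_risk(prompt)
            if risk >= block_threshold:
                return ScreeningResult(False, f"blocked (bio-risk score {risk:.2f})")
            return ScreeningResult(True, f"allowed (bio-risk score {risk:.2f})")

        if __name__ == "__main__":
            print(screen_prompt("How can I enhance transmissibility of a virus?"))
            print(screen_prompt("Explain how mRNA vaccines work."))

        In a production system, the scorer would be a purpose-trained model, flagged prompts would route to human reviewers rather than simply being refused, and repeated violations would feed into the account enforcement described above.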

        It’s not just OpenAI’s models that raise biosecurity risks

        The risk that AI models could aid bioweapon development isn’t a new concern. In May 2024, for instance, 26 nations agreed to collaborate on developing risk thresholds for AI systems capable of creating biological weapons.

        In February 2025, this risk was highlighted in the UK’s International AI Safety Report.

        In April 2025, researchers tested leading AI models against the Virology Capabilities Test, a benchmark for expert-level knowledge in virology and lab protocols. They found that models like OpenAI’s GPT-4o outperformed most human virologists, raising concerns about bioweapon risks.

        In May 2025, Anthropic confirmed it put security measures in place for Claude Opus 4 to prevent it from being used to build biological and nuclear weapons, given the model’s higher risk. Still, just a month before the model’s release, Anthropic’s CEO admitted that researchers don’t yet truly understand how AI works, and that until they do, they won’t be able to confidently design systems that prevent harmful behavior of this kind.

        This week, a group of AI experts published a report stating that evidence linking AI models to biological weapons risks has grown notably since March, as developers have reported “capability jumps” in these areas. 

        Legislation could lower this AI risk, but it’s hard to pass

        AI experts largely agree that a strong, coordinated response is essential to protect the public from the risk of people using AI to develop biological weapons. In February, former Google CEO Eric Schmidt warned that rogue states like North Korea, Iran, and Russia could seize this capability if such a response is not in place.

        The following month, Anthropic sent a letter to the White House Office of Science and Technology Policy urging immediate action on AI security, after its own testing revealed alarming improvements in Claude 3.7 Sonnet’s ability to assist with aspects of bioweapons development.

        Unfortunately, passing AI legislation has been difficult thus far, largely because of disagreements between safety advocates on one side and, on the other, tech companies and some politicians who worry that guardrails could hinder innovation and limit the economic benefits AI brings.

        The California AI regulation bill SB-1047, which aimed to prevent AI models from causing large-scale damage to humanity, such as through bioweapons, would have been the strongest US regulation of generative AI. However, Governor Gavin Newsom vetoed the bill in September 2024, arguing it was too rigid.

        Now, a Republican-backed budget bill making its way through Congress would bar US states and localities from regulating AI for the next decade, with its sponsors arguing that a unified national framework would be better than a fragmented patchwork of state laws. Critics counter that eliminating state compliance measures would leave harmful technology unaccountable and attract companies looking to avoid regulatory delays.

        Read more about why experts say AI legislation is falling behind the pace of technological advances in our analysis of Governor Newsom’s veto of California’s AI bill.

        Fiona Jackson
        Fiona Jackson is a news writer who started her journalism career at SWNS press agency, later working at MailOnline, an advertising agency, and TechnologyAdvice. Her work spans human interest and consumer tech reporting, appearing in prominent media outlets such as TechHQ, The Independent, Daily Mail, and The Sun.
