      California Needs AI Law to Protect from ‘Irreversible Harms,’ Experts Say

Written by Fiona Jackson
Published June 19, 2025


        California hosts the headquarters of many tech giants and could potentially set a precedent for AI regulation.

The California Report on Frontier AI Policy, published on Tuesday, outlines new plans for AI guardrails that could be implemented as state law. It was drawn up by experts after California Governor Gavin Newsom vetoed the controversial AI regulation bill SB 1047 — the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act — in September 2024.

        Since March, when an initial draft of the report was published, there has been more evidence that AI models contribute to “chemical, biological, radiological, and nuclear (CBRN) weapons risks,” with developers reporting “capability jumps” in these areas, the authors wrote. They said AI could cause “irreversible harm” if appropriate safeguards are not implemented.

        The authors also emphasised that regulations must be “commensurate with the associated risk” and “not so burdensome as to curb innovation.”

        Report’s authors disparage the ‘big, beautiful bill’ that would ban state-level AI laws

        The report comes as a Republican-backed budget bill makes its way through Congress that would bar US states and localities from regulating AI for the next 10 years. Its sponsors argue that a unified national framework would be better than a fragmented patchwork of state laws. Critics note, however, that eliminating state compliance measures designed to prevent bias and protect AI consumers would also attract financially lucrative tech companies that want to avoid regulatory delays.

        So far, California has largely refrained from implementing broad or specific AI regulations, opting instead for voluntary agreements.

        “Carefully targeted policy in California can both recognize the importance of aligning standards across jurisdictions to reduce compliance burdens on developers and avoid a patchwork approach while fulfilling states’ fundamental obligation to their citizens to keep them safe,” the authors wrote. 

        Third-party risk evaluations are key to regulating AI models, the researchers say

        The crux of Newsom’s problem with SB 1047 was that it simply targeted all large models, regardless of risk profile, instead of focusing on high-risk models of all sizes. The report’s authors agreed, saying that other factors must be taken into account when categorising models for regulation besides size, such as “evaluations of risk” and their “downstream footprint.”

        Mandating third-party risk evaluations is especially important because AI developers do not always voluntarily share information about “data acquisition, safety and security practices, pre-deployment testing, and downstream impacts.” Many operate as black boxes. The CEO of Anthropic has even admitted his company doesn’t fully understand how its AI system works, raising doubts about whether tech executives can accurately assess the risks involved. 

External evaluations, often conducted by groups with greater demographic diversity than California’s tech companies, allow for broader representation of the communities most vulnerable to AI bias, the authors noted. Making such audits mandatory could also increase transparency across the board; companies would be incentivised to improve their safety practices to avoid scrutiny, which would also reduce their liability exposure.

        Nevertheless, carrying out such evaluations would require access to company data that developers may be reluctant to share. In its 2024 evaluation, METR, an independent AI safety research firm, said it was only provided “limited access” to information about OpenAI’s o3 model, making it difficult to fully interpret the results and assess potential safety risks.

        How the report links to SB 1047

After vetoing SB 1047, Newsom outlined several new initiatives related to generative AI, in part to appease his pro-safety critics. One of these was that the state would convene a group of AI experts and academics, including Stanford University professor and AI “godmother” Fei-Fei Li, to “help California develop workable guardrails.” The plan for those guardrails is outlined in the report.

SB 1047 would have been the country’s strongest regulation of generative AI. It aimed to prevent AI models from causing large-scale harm to humanity — such as through nuclear war, bioweapons, or massive financial losses — by placing strict safety requirements on developers.

        These requirements offered protections for industry whistleblowers, mandated that large AI developers be able to fully shut down their models, and held major AI companies accountable for strict safety and security protocols. The report says that the whistleblower protections should remain, but does not mention a kill switch.

        Much of Silicon Valley — including OpenAI, Meta, Google, and Hugging Face — publicly disparaged SB 1047, calling it a threat to innovation and criticising its “technically infeasible requirements.” The bill did gain support from Elon Musk at the time, while two former OpenAI employees also wrote a letter to Newsom criticising their former employer for opposing SB 1047.

        Editor’s note: This content was originally published on our sister site TechRepublic on June 18, 2025.

Fiona Jackson
        Fiona Jackson is a news writer who started her journalism career at SWNS press agency, later working at MailOnline, an advertising agency, and TechnologyAdvice. Her work spans human interest and consumer tech reporting, appearing in prominent media outlets such as TechHQ, The Independent, Daily Mail, and The Sun.
