
Potential California AI law divides Silicon Valley


Photo: Paul Sancya, Associated Press. The California Assembly is expected to vote on the proposal before the end of the month, and if it passes, it will be Gov. Gavin Newsom's turn to decide.

Glenn Chapman and Julie Jammot – Agence France-Presse, in San Francisco

Published at 16:23


A bill to regulate powerful generative artificial intelligence (AI) models is moving forward in California, despite opposition from businesses and elected officials who fear such regulation would stifle the nascent technology.

Leading the way, OpenAI, the creator of ChatGPT, came out against SB 1047, saying it risked driving innovators away from the US state and its famous Silicon Valley at a time when “the AI revolution is only just beginning.”

In a letter this week to the Democratic state senator sponsoring the bill, Scott Wiener, OpenAI added that it believes national legislation is preferable to a patchwork of state regulations.

The California Assembly is expected to vote on the bill before the end of the month, and if it passes, it will be Governor Gavin Newsom’s turn to decide whether to sign it into law.

He has not yet taken a public position, but the Democratic camp is not united on this question.

“Many of us in Congress believe that SB 1047 is well-intentioned but misinformed,” Nancy Pelosi, one of the party’s most influential voices, said in a statement.

“We want California to be at the forefront of AI in a way that protects consumers, data, intellectual property and more. […] SB 1047 does more harm than good to that end,” the congresswoman added.

“Foreseeable Risks”

Known as the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” the bill aims to prevent large models from causing major disasters, such as mass casualties or significant cybersecurity incidents.

Scott Wiener has nonetheless softened the original text, notably following advice from OpenAI competitor Anthropic, a startup also based in San Francisco.


The current version gives California authorities less power than initially intended to hold AI companies accountable or to prosecute them.

Developers of large AI models will have to test their systems and simulate cyberattacks, under penalty of fines, but without the threat of criminal consequences.

The plan to create a new regulatory agency was dropped, but the law would still establish a council to set standards for the most advanced models.

“With Congress deadlocked on regulating AI, California must act to anticipate the foreseeable risks presented by rapid advances in AI while encouraging innovation,” Wiener said in a statement.

Generative AI currently produces high-quality content (text, images, etc.) from a simple prompt in everyday language. But according to the engineers behind it, the technology has the potential to go much further, solving important problems but also causing them.

“Unrealistic”

The amended bill is “significantly improved, to the point that we believe its benefits likely outweigh its costs,” Anthropic said in a letter to Gavin Newsom on Wednesday.

“Powerful AI systems hold incredible promise, but they also present very real risks that must be taken very seriously,” computer scientist Geoffrey Hinton, considered the “godfather of AI,” said in an op-ed in Fortune magazine.

“SB 1047 represents a very reasonable approach to balancing these concerns,” he continued, arguing that California is the ideal place to start regulating the technology.

But organizations representing Google and Meta (Facebook, Instagram), leading researchers such as Fei-Fei Li of Stanford University, and professors and students at the California Institute of Technology (Caltech) have voiced opposition to the bill.

It “imposes burdensome and unrealistic regulations on the development of AI” and therefore poses “a significant threat to our ability to advance research,” Caltech professor Anima Anandkumar said on X (formerly Twitter).

She believes that it is difficult, if not impossible, to foresee all the possible harmful uses of an AI model, particularly so-called “open source” versions, whose code can be modified by users.

“The bill neglects the importance of focusing on applications developed downstream and does not take into account current research on issues inherent to AI models such as bias and hallucinations.”
