California Governor Vetoes Controversial Artificial Intelligence Safety Bill
Author: Tom Mitchelhill, CoinTelegraph; Translation: Taozhu, Golden Finance
California Governor Gavin Newsom has vetoed a controversial artificial intelligence (AI) bill, arguing that it would hinder innovation while failing to protect the public from the “real” threats posed by the technology.
**On September 30, Newsom vetoed SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, a bill that had drawn strong opposition from Silicon Valley.**
The proposal would have mandated safety testing for AI models and other guardrails, which tech companies feared would stifle innovation.
In a statement on September 29, Newsom said the bill focused too narrowly on regulating the existing top AI companies and did not protect the public from the “real” threats posed by the new technology.
“Instead, the bill applies stringent standards to even the most basic functions, so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”
SB 1047, drafted by Scott Wiener, a Democratic state senator from San Francisco, would also have required California AI developers, including big names such as ChatGPT maker OpenAI, Meta, and Google, to build a “kill switch” into their AI models and to publish plans for mitigating extreme risks.
Had the bill become law, AI developers could also have faced lawsuits from the state attorney general if their models posed an ongoing threat, such as an AI takeover of the power grid.
Newsom said he had asked the world’s leading AI safety experts to help California “develop workable guardrails,” with a focus on creating a “science-based trajectory analysis.” He added that he had ordered state agencies to expand their assessments of the catastrophic risks that AI development could pose.
Despite vetoing SB 1047, Newsom said adequate safety protocols for AI must be established, adding that regulators should not “wait for a major catastrophe to occur before taking action to protect the public.”
Newsom added that his administration had signed more than 18 bills related to AI regulation over the past 30 days.
**Politicians and Big Tech opposed the AI safety bill**
Before Newsom made his decision, the bill had proven unpopular among lawmakers, advisers, and major tech companies.
Former House Speaker Nancy Pelosi and companies such as OpenAI said it would significantly hamper the development of AI.
Neil Chilson, head of AI policy at the Abundance Institute, warned that although the bill primarily targeted large, costly models (those costing more than $100 million), its scope could easily be expanded to hit smaller developers as well.
Some, however, were receptive to the bill. Billionaire Elon Musk, who is developing his own AI model called “Grok,” was one of the few tech leaders to back the bill and broader AI regulation.
In an August 26 post on X, Musk said that “California should probably pass the SB 1047 AI safety bill,” though he admitted that supporting it was a “tough call.”