The AI legislation is opposed by many tech companies.
Legislation in California that aims to reduce the risks of artificial intelligence (AI) passed a vote in the state’s lower legislative house on Wednesday.
The proposal would require companies to test their models and publicly disclose their safety protocols.
The aim is to prevent the models from being manipulated into, for example, wiping out the state’s electric grid or helping build chemical weapons, scenarios that experts say could become possible given the industry’s rapid advancement.
The first-in-the-nation AI safety legislation could pave the way for US regulation of the technology.
The measure squeaked by in the California Assembly on Wednesday and requires a final Senate vote before reaching the governor’s desk.
California Governor Gavin Newsom would then have until the end of September to sign the legislation into law, veto it, or allow it to become law without his signature; it is among hundreds of bills being voted on this week.
Supporters said it would set some of the first much-needed safety ground rules for large-scale AI models in the US.
The bill targets systems that cost more than $100 million (€90 million) to train. No current AI models have hit that threshold.
OpenAI, Google and Meta have opposed the legislation, but the AI company Anthropic has said the legislation’s “benefits likely outweigh its costs”.
California is home to 35 of the world’s top 50 AI companies and could soon deploy generative AI tools to address highway congestion and road safety, among other things.
Newsom, who declined to weigh in on the measure earlier this summer, had warned against AI overregulation.