A group of prominent professors, including Yoshua Bengio, Geoffrey Hinton, Lawrence Lessig, and Stuart Russell, has written a letter urging key California lawmakers to support an AI safety bill as it reaches the final stages of the legislative process. The bill, titled the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was introduced by State Senator Scott Wiener earlier this year and aims to regulate the development of advanced AI systems to mitigate potential risks. The professors argue that without proper oversight, the next generation of AI systems could pose severe risks, and that the bill is essential for regulating the technology effectively.
In their letter, the professors highlight the absence of meaningful regulation on AI systems that could have catastrophic consequences, arguing that frontier AI development currently faces less oversight than sandwich shops or hairdressers. They emphasize the importance of rigorous safety testing for potentially dangerous capabilities and of comprehensive safeguards against the risks of AI development. The bill has already passed the California Senate and is set to face a vote in the State Assembly later this month.
The letter's authors see California, the world's fifth-largest economy and home to many leading AI developers, as playing a crucial role in regulating AI. With Congress deadlocked on the issue and Biden's AI executive order at risk of being rescinded if Republicans win in November, they regard passage of the bill in California as essential for setting a standard for AI regulation. If enacted, the bill would apply to covered companies operating in the state, requiring that AI development be conducted in a safe and secure manner.
Despite polls showing that a majority of Californians support the bill, industry groups and tech companies have voiced opposition to the proposed legislation. The letter's authors urge lawmakers to weigh the potential dangers of advanced AI systems against this opposition. The bill would establish a framework for safety testing and risk mitigation in AI development, addressing concerns about the lack of oversight in a rapidly advancing field.
The professors address their letter to key figures in the legislative process, including Senate President pro Tempore Mike McGuire, Assembly Speaker Robert Rivas, and Governor Gavin Newsom, calling on them to support the bill and ensure its passage into law. They emphasize the importance of California taking a leading role in regulating AI, setting a standard for other states and countries to follow. Given the potential for catastrophic harm from unchecked AI development, the professors describe the bill as the bare minimum needed to regulate the technology effectively.