Former OpenAI safety researcher William Saunders, who resigned in February, is risking millions of dollars in vested equity by speaking out about his concerns regarding the company. Although OpenAI has indicated that it does not intend to enforce the non-disparagement agreement he signed upon leaving, Saunders believes a public conversation about what is happening inside AGI companies is essential. That sentiment is shared by 13 current and former employees of OpenAI and Google DeepMind, who recently published an open letter calling for stronger whistleblower protections so that employees can voice safety concerns about advanced AI technology.
The letter reflects a growing fear that the rapid advancement of AI could lead to dangerous consequences if not properly regulated and monitored. The departures of key figures from OpenAI, such as chief scientist Ilya Sutskever and senior safety researcher Jan Leike, have raised questions about how the company prioritizes safety. Sutskever's exit was reportedly linked to his role in the board's short-lived removal of CEO Sam Altman, while Leike said publicly that safety culture at OpenAI had taken a backseat to product releases.
Daniel Kokotajlo, another former OpenAI employee who contributed to the open letter, resigned in April over doubts that the lab would act responsibly if it succeeded in creating artificial general intelligence (AGI). AGI, a still-speculative technology, is being pursued by leading labs including OpenAI and Google DeepMind, and the potential risks and ethical implications of its development have prompted calls for greater transparency and accountability within these organizations.
The employees' plea for a "right to warn" regulators, board members, and the public about safety concerns reflects a broader push for more stringent oversight of AI research and development. As the field progresses, there is growing recognition that ethical guidelines and safeguards are needed to prevent the misuse or unintended consequences of AI technologies. The willingness of former employees such as Saunders, Leike, and Kokotajlo to speak out signals a shift toward greater accountability and responsibility within the AI industry.
Ultimately, the concerns raised by departing employees and the call for stronger whistleblower protections underscore the challenges of developing advanced AI responsibly. As the industry grapples with questions of safety, ethics, and regulation, organizations like OpenAI and Google DeepMind will need to prioritize transparency, accountability, and public dialogue to ensure that AI is developed and deployed responsibly. Saunders's decision to take a stand despite the personal financial risk highlights how much a culture of openness within the AI community depends on individuals willing to speak up.