OpenAI forms safety and security committee amid internal turmoil

OpenAI has announced the creation of a safety and security committee to advise the full board on critical safety and security decisions for its projects and operations. The move comes amid ongoing debate over AI safety at the company, following the resignation of researcher Jan Leike, who criticized OpenAI for prioritizing product development over safety. OpenAI co-founder and chief scientist Ilya Sutskever also resigned, and the company disbanded the AI-risk team the two jointly led. Leike has since joined rival AI company Anthropic to continue working on AI superalignment.

In addition to establishing the safety committee, OpenAI has begun training a new AI model to succeed the GPT-4 system that powers its ChatGPT chatbot. The company says its AI models lead the industry in both capability and safety, though it did not directly address the controversy surrounding its safety practices. AI models such as OpenAI's are prediction systems trained on vast datasets to generate text, images, video, and human-like conversation; "frontier models" refers to the most advanced AI systems currently available.

The safety committee consists largely of company insiders, including OpenAI CEO Sam Altman, board chair Bret Taylor, and four technical and policy experts from OpenAI. It also includes board members Adam D'Angelo, CEO of Quora, and Nicole Seligman, former Sony general counsel. The committee's primary task is to evaluate and strengthen OpenAI's existing processes and safeguards and to make recommendations to the board within 90 days. The company has committed to publicly releasing the recommendations it adopts, in a manner consistent with safety and security.

OpenAI’s decision to establish a safety and security committee reflects a growing recognition within the AI industry of the importance of addressing safety and ethical concerns related to AI development. As AI systems become increasingly powerful and pervasive, ensuring that they are developed and deployed responsibly is crucial. The controversy surrounding OpenAI’s safety practices highlights the challenges that companies face in balancing innovation with ethical considerations, and the establishment of the safety committee signals a commitment to addressing these issues moving forward.

Overall, the announcement of the committee and the training of a new model demonstrate OpenAI's stated effort to prioritize safety and security in its development process. By engaging internal and external experts to evaluate and enhance its safeguards, the company is responding to concerns raised by researchers and stakeholders. As AI technologies continue to advance, it remains essential for companies like OpenAI to prioritize safety and ethics so that AI benefits society as a whole.
