A new study conducted by the French nonprofit SaferAI has found that some of the world’s top AI labs, including Elon Musk’s xAI, lack adequate safety measures. The study evaluated the risk-management practices of these companies in order to develop a clear standard for how AI companies handle risk as the technology continues to grow in power and usage. AI systems have already demonstrated the ability to autonomously hack websites and assist in the development of bioweapons, highlighting the importance of strong risk-management practices.
Governments have been slow to implement regulatory frameworks for the AI industry, with a recent California bill aimed at regulating AI being vetoed by Governor Gavin Newsom. SaferAI’s founder, Siméon Campos, notes that AI risk management is not keeping pace with the rapid advancements in AI technology. The nonprofit’s ratings aim to fill a gap in the assessment of AI companies’ risk management practices until governments take action.
Researchers for SaferAI assessed the “red teaming” of models and the companies’ strategies for modeling threats and mitigating risk in order to grade each company. xAI, Meta, and Mistral AI were identified as having “very weak” risk-management practices, while OpenAI and Google DeepMind received “weak” ratings. Anthropic led the pack with a “moderate” score of 2.2 out of 5. xAI received the lowest possible score, 0 out of 5, because it has published no information on risk management.
Campos hopes that the ratings will pressure companies like xAI to improve their internal risk-management practices, particularly as their models become increasingly competitive in the AI space. He suggests that xAI may publish information on risk management in the future, which could lead to an updated grade for the company. The study highlights the need for AI companies to prioritize risk management as AI technology continues to advance and become more integrated into various sectors.
Overall, the study conducted by SaferAI sheds light on the inadequacies of safety measures at some of the top AI labs around the world. With AI technology rapidly evolving and demonstrating potential risks, the importance of strong risk-management practices cannot be overstated. The ratings provided by SaferAI serve as a call to action for AI companies to prioritize risk management and work toward robust strategies to mitigate the potential threats associated with AI technology.