In May 2023, the CEOs of three prominent artificial intelligence companies, Sam Altman of OpenAI, Demis Hassabis of Google DeepMind, and Dario Amodei of Anthropic, met with U.K. Prime Minister Rishi Sunak at No. 10 Downing Street in London. The meeting was prompted by the release of ChatGPT six months earlier and was meant to cover the opportunities and risks AI posed for the U.K. economy. Although the conversation initially focused on AI's benefits, Sunak surprised the CEOs by raising concerns about the technology's risks. He invited them to attend the world's first AI Safety Summit, which the U.K. planned to host later that year, and requested prerelease access to their latest AI models for testing by a newly established task force.
The U.K. thus became the first country to reach an agreement with leading AI labs granting it access to their advanced models for evaluation of potential risks. Following the initial meeting, Sunak formalized the task force as the AI Safety Institute (AISI), which has since become the most advanced program of its kind within any government for assessing AI risks. Backed by £100 million in public funding, the AISI is dedicated to evaluating the dangers posed by AI systems and ensuring they can be deployed safely. The institute operates much as the country's COVID-19 vaccine task force did, focusing on testing and mitigating the potential risks of AI advances.
The establishment of the AISI marks a significant step in addressing growing concerns about AI's impact on society. By collaborating with leading AI companies and testing their latest models before release, the U.K. government aims to identify and address risks before they become widespread, rather than reacting after harm has occurred.
The initiative also illustrates what tackling AI risk at a national level can look like. By convening industry leaders, creating a dedicated safety institute, and investing public funds in the research, testing, and evaluation of advanced models, the U.K. has set a precedent for other countries seeking to navigate the challenges and opportunities of rapid AI development.