AI poses extinction-level risk, according to state-funded report

A report commissioned by the U.S. government warns of substantial national security risks stemming from artificial intelligence (AI), which could pose an “extinction-level threat to the human species.” The rise of advanced AI and artificial general intelligence (AGI) has the potential to destabilize global security in ways similar to the introduction of nuclear weapons. While AGI systems do not currently exist, leading AI labs are actively working toward their development, with many expecting AGI to arrive within the next five years.

The authors of the report conducted research for over a year, speaking with more than 200 government employees, experts, and workers at frontier AI companies such as OpenAI, Google DeepMind, Anthropic, and Meta. Accounts from these conversations suggest that many AI safety workers within cutting-edge labs are concerned about perverse incentives driving decision-making by company executives. There is a growing fear that safety considerations are being overlooked in favor of advancing AI technology.

The report, titled “An Action Plan to Increase the Safety and Security of Advanced AI,” proposes sweeping policy actions that, if enacted, would significantly disrupt the AI industry. One recommendation is to make it illegal to train AI models using more than a certain level of computing power. The threshold would be determined by a new federal AI agency, with the report suggesting it be set just above the levels of computing power used to train today’s cutting-edge models. This recommendation is aimed at preventing the development of overly powerful AI systems that could pose significant risks to national security.

The report also suggests the creation of a National Research Cloud to provide secure and regulated computing resources for AI research. This would help prevent the concentration of AI resources in the hands of a few powerful companies, reducing the potential for misuse or abuse of AI technologies. Additionally, the report calls for increased transparency and accountability in AI research and development, including regular audits of AI systems to ensure compliance with safety and security standards.

Overall, the report emphasizes the urgent need for the U.S. government to take decisive action to address the national security risks posed by advanced AI technologies. With the potential for AGI to arrive within the next five years, the authors stress the importance of implementing proactive policies to safeguard against potential threats. By enacting these recommendations, the U.S. government can help ensure the safe and responsible development of AI technologies while protecting national security interests.
