OpenAI, tech giants must notify US government of new AI projects

The US government is preparing to use the Defense Production Act to require tech companies to notify it whenever they train an AI model using a significant amount of computing power. The move follows OpenAI’s ChatGPT catching much of Washington by surprise last year. The requirement will give the government access to sensitive information about AI projects inside companies such as OpenAI, Google, and Amazon, which will also have to report on the safety testing being performed on their AI systems.

OpenAI has been secretive about its work on a successor to GPT-4, but under this new requirement the US government may be among the first to learn of any progress toward GPT-5. The exact date the rules take effect, and what the government will do with the information it receives, have yet to be announced. The requirements stem from a White House executive order issued in October 2023, which gave the Commerce Department a deadline to devise a scheme for companies to inform officials about powerful new AI models in development.

The executive order specifies that companies must provide details such as the amount of computing power used, information on data ownership, and the results of safety testing. The initial reporting threshold is set at 100 septillion (10²⁶) floating-point operations (flops) for AI models, with a lower threshold for large language models trained on DNA sequencing data. Companies like OpenAI and Google have not disclosed the computing power used to train their most powerful models, but outside estimates suggest the roughly 10²⁶ flops used to train GPT-4 would put it at or slightly beyond that threshold.
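To make the threshold concrete, here is a back-of-the-envelope sketch of how one might estimate whether a training run crosses the 10²⁶-flop reporting bar. All hardware figures below (accelerator count, per-chip throughput, utilization, duration) are illustrative assumptions, not numbers disclosed by any company or by the executive order itself:

```python
# Rough estimate of total training compute versus the executive order's
# reporting threshold of 10^26 floating-point operations.
# Hardware figures are hypothetical for illustration only.

REPORTING_THRESHOLD_FLOPS = 1e26  # "100 septillion" operations

def training_flops(num_chips: int,
                   peak_flops_per_chip: float,
                   utilization: float,
                   days: float) -> float:
    """Total floating-point operations for a hypothetical training run:
    chips x per-chip throughput x utilization x wall-clock seconds."""
    seconds = days * 24 * 3600
    return num_chips * peak_flops_per_chip * utilization * seconds

# Hypothetical run: 10,000 accelerators at 1e15 flop/s peak,
# 40% sustained utilization, for 90 days.
total = training_flops(10_000, 1e15, 0.40, 90)
print(f"total compute: {total:.2e} flops")
print("crosses reporting threshold:", total > REPORTING_THRESHOLD_FLOPS)
```

Under these assumed numbers the run lands at roughly 3 × 10²⁵ flops, below the bar; scaling up the chip count or training duration by a few times would push it over, which is the regime the reporting rule targets.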

In addition to the AI model reporting requirement, the Commerce Department will implement another part of the executive order: cloud computing providers such as Amazon, Microsoft, and Google will be required to notify the government when a foreign company uses their resources to train a large language model. That obligation kicks in at the same initial threshold of 100 septillion flops.

Overall, these new requirements aim to provide the US government with greater visibility into AI breakthroughs and projects, particularly those involving large language models. By having access to this information, the government can better understand and assess the potential risks and benefits associated with these advancements in AI technology.
