AI Controls Etched into Silicon for Doomsday Prevention

Researchers are exploring the idea of encoding rules directly into computer chips to limit the potential harm caused by artificial intelligence (AI) systems. Building AI controls into the hardware itself could prevent rogue nations or irresponsible companies from developing dangerous AI, and might prove more effective than conventional laws or treaties. The Center for a New American Security (CNAS) suggests using trusted components built into existing chips, or creating new ones, to restrict access to the computing power that AI projects require. Licenses issued by a government or international regulator, and periodically refreshed, could then control who is able to build the most powerful AI systems.
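To make the proposal concrete, here is a minimal sketch of how such a hardware-enforced license might work, assuming a regulator that signs time-limited licenses and chip firmware that verifies them before enabling full compute. Every name, field, and threshold below is hypothetical and does not describe CNAS's actual design or any real product.

```python
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Regulator side: sign a license bound to one chip, valid for 30 days.
regulator_key = Ed25519PrivateKey.generate()
license_body = json.dumps({
    "chip_id": "GPU-0001",                # identity burned into the chip
    "max_flops": 1e25,                    # compute ceiling for training runs
    "expires": time.time() + 30 * 86400,  # expiry forces periodic renewal
}).encode()
signature = regulator_key.sign(license_body)

# Chip side: firmware holds only the regulator's public key and refuses
# to enable full compute unless the license verifies and is unexpired.
def license_is_valid(body: bytes, sig: bytes, public_key) -> bool:
    try:
        public_key.verify(sig, body)  # raises if the signature is forged
    except InvalidSignature:
        return False
    return json.loads(body)["expires"] > time.time()

print(license_is_valid(license_body, signature, regulator_key.public_key()))
```

The expiry field is what makes the "periodically refreshed" part of the scheme enforceable: a regulator that stops reissuing licenses effectively revokes a chip's access to large-scale training without ever touching the hardware again.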

The concern over unruly and dangerous AI is growing, with worries that even existing AI models could be used to develop chemical or biological weapons or to automate cybercrime. The US has already imposed export controls on AI chips to limit China's access to advanced AI, but smugglers and clever engineers have found ways around these restrictions. Hard-coding restrictions into computer hardware may seem extreme, but there is precedent for building infrastructure to monitor and control important technology, such as the network of seismometers used to detect underground nuclear tests.

There are already some examples of AI controls incorporated into hardware. Nvidia's AI training chips ship with secure cryptographic modules, and researchers have demonstrated how the security module of an Intel CPU can be used to restrict unauthorized use of an AI model. These implementations are still at an early stage, but they show that the building blocks for hardware-level governance already exist.
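As a rough illustration of how a security module could restrict unauthorized use of a model, the sketch below encrypts a model's weights and releases the decryption key only to code whose hash the (here simulated) hardware recognizes. This mirrors the measurement-and-sealing idea behind trusted execution environments, but it does not reflect Intel's or Nvidia's actual APIs; every identifier is hypothetical.

```python
import hashlib

from cryptography.fernet import Fernet

# The model's weights ship encrypted; the decryption key lives in the
# (simulated) hardware key store and is released only to approved code.
weights_key = Fernet.generate_key()
encrypted_weights = Fernet(weights_key).encrypt(b"<model weights>")

# Stand-in for the security module's sealed table mapping approved
# code measurements (hashes) to keys.
APPROVED = {hashlib.sha256(b"approved_runtime_v1").digest(): weights_key}

def unseal(code_blob: bytes) -> bytes | None:
    """Release the key only if the requesting code hashes to an approved value."""
    return APPROVED.get(hashlib.sha256(code_blob).digest())

key = unseal(b"approved_runtime_v1")           # recognized runtime
assert key is not None
print(Fernet(key).decrypt(encrypted_weights))  # b'<model weights>'
print(unseal(b"tampered_runtime"))             # None: unauthorized use blocked
```

In a real trusted execution environment the measurement is taken by the hardware itself rather than trusted from the caller, which is what makes the check resistant to tampering.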

Overall, incorporating AI controls into computer chips offers a new way to limit the harm advanced AI systems could cause. By gating access to computing power with licenses and secure components, regulators might prevent rogue nations or irresponsible entities from developing dangerous AI, potentially more effectively than laws or treaties alone. The approach is still in its early stages and faces real challenges, but it is a promising avenue for managing the risks of AI development and deployment.
