Ex-OpenAI Scientist Launches New Venture

Ilya Sutskever, a co-founder and former chief scientist of OpenAI, has announced the launch of a new venture called Safe Superintelligence Inc. The lab will focus on building a safe “superintelligence,” a hypothetical AI system more intelligent than humans. Sutskever is joined by co-founders Daniel Gross, an investor and engineer who previously worked on AI at Apple, and Daniel Levy, another former OpenAI employee. The U.S.-based firm will have offices in Palo Alto, Calif., and Tel Aviv.

Sutskever co-founded OpenAI and served as chief scientist during the company’s rapid growth following the release of ChatGPT. In November 2023, he took part in the board’s attempt to remove OpenAI CEO Sam Altman, but later reversed course and supported Altman’s return. When he announced his own departure in May, Sutskever expressed confidence in OpenAI’s ability to build AGI (artificial general intelligence) under Altman’s leadership. Safe Superintelligence Inc. has stated that it will pursue only the system its name describes, in order to insulate the work from commercial pressures.

According to its founders, the new venture’s singular focus on building a safe superintelligence will allow it to avoid distraction by management overhead or product cycles. This approach contrasts with criticisms leveled at OpenAI by former employees, including accusations that the company prioritized “shiny products” over safety. Jan Leike, a senior researcher who co-led a safety team with Sutskever, resigned after raising such concerns about the company’s priorities, and six other safety-conscious employees departed as well. Altman and OpenAI’s president, Greg Brockman, acknowledged that more work remained to be done on safety measures.

Safe Superintelligence Inc. has not disclosed details of its funding or business model, leaving open questions about how the venture will operate. Its decision to focus solely on building a safe superintelligence may position it as a direct competitor to OpenAI, which has faced criticism over its approach to AI safety. Sutskever’s new venture represents a shift in focus toward ensuring that advanced AI systems are developed safely and beneficially, addressing concerns about the risks posed by superintelligent systems.
