Ilya Sutskever, former chief scientist at OpenAI, has revealed his next major project after departing the AI research company he co-founded in May.
Together with fellow OpenAI alumnus Daniel Levy and Apple’s former AI lead Daniel Gross, Sutskever has formed Safe Superintelligence Inc. (SSI), a startup focused solely on building safe superintelligent systems.
The formation of SSI follows the brief November 2023 ousting of OpenAI CEO Sam Altman, in which Sutskever played a central role and which he later said he regretted.
In a message on SSI’s website, the founders state:
“We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace.
“Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”
Sutskever’s work at SSI represents a continuation of his efforts at OpenAI, where he co-led the superalignment team tasked with designing control methods for powerful new AI systems. However, that group was disbanded following Sutskever’s high-profile departure.
According to SSI, it will pursue safe superintelligence in “a straight shot, with one focus, one goal, and one product.” This singular focus stands in contrast to the diversification seen at major AI labs like OpenAI, DeepMind, and Anthropic over recent years.
Only time will tell if Sutskever’s team can make substantive progress toward their lofty goal of safe superintelligent AI. Critics argue the challenge represents a matter of philosophy as much as engineering. However, the pedigree of SSI’s founders means their efforts will be followed with great interest.
In the meantime, expect to see a resurgence of the “What did Ilya see?” meme.