Photo Credit: Avigail Uzi

Ilya Sutskever, co-founder and former chief scientist of OpenAI, has raised $1 billion for his new AI company, Safe Superintelligence (SSI). The funding round, which included investments from prominent venture capital firms Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel, as well as NFDG, an investment partnership co-run by SSI executive Daniel Gross, underscores continued investor interest in exceptional talent focused on foundational AI research.

Sutskever, who left OpenAI in May, co-founded SSI in June with Gross, who previously led AI initiatives at Apple, and Daniel Levy, a former OpenAI researcher. The new company, which currently has 10 employees, is focused on developing safe artificial intelligence systems that can far surpass human capabilities. According to sources close to the matter, SSI is valued at $5 billion.

"We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product," Sutskever wrote on his  X (formerly known as  Twitter) in May to In a bid to  announce the new venture. This singular focus, the company says, will enable it to avoid distractions from management overhead or product cycles, ensuring that safety, security, and progress are"insulated from short-term commercial pressures."

Sutskever's departure from OpenAI followed a dramatic turn of events within the company. Last year, he was part of the board of OpenAI's non-profit parent that voted to oust CEO Sam Altman, citing a "breakdown of communications." Within days, however, Sutskever reversed course and joined nearly all of OpenAI's employees in signing a letter demanding Altman's immediate return and the board's resignation. The episode diminished Sutskever's standing at OpenAI; he was subsequently removed from the board and left the company in May.

Following Sutskever's departure, OpenAI disbanded his "Superalignment" team, which had been tasked with ensuring that AI remains aligned with human values in preparation for the day when AI exceeds human intelligence. This move was seen by some, including former OpenAI employee Jan Leike, as a shift away from OpenAI's "safety culture and processes."

In contrast to OpenAI's unorthodox corporate structure, which was implemented for AI safety reasons but ultimately enabled the CEO's temporary ouster, SSI has a regular for-profit structure. The company is currently focused on building a small, highly trusted team of researchers and engineers split between Palo Alto, California, and Tel Aviv, Israel.

As SSI embarks on its mission to develop safe superintelligent AI, the $1 billion it has raised will be used to acquire computing power and attract top talent. Sutskever, an early advocate of the scaling hypothesis that has driven recent AI advances, says he will approach scaling differently from his former employer, though he has not provided details.

With its singular focus on safe superintelligence and its commitment to insulating its work from short-term commercial pressures, SSI represents a new chapter in the ongoing quest for the responsible development of transformative artificial intelligence.
