Safe Superintelligence (SSI), the stealth-mode AI venture founded by OpenAI co-founder Ilya Sutskever, has reportedly secured an additional $2 billion in funding—pushing its valuation to a staggering $32 billion, according to The Financial Times.
This latest investment adds to an earlier $1 billion round, and comes amid increasing industry buzz that a second billion-dollar raise was already underway. Although SSI hasn’t publicly confirmed the deal, sources say the round was led by growth investor Greenoaks, known for backing ambitious tech bets.
Founded in mid-2024 by Sutskever alongside entrepreneur and former Apple AI lead Daniel Gross and AI researcher Daniel Levy, SSI has a vision that is strikingly narrow but bold: to build one thing and one thing only, a safe superintelligence.
While the concept of superintelligence has long hovered on the edges of sci-fi and speculative AI theory, Sutskever and his team are treating it as a real engineering challenge. And they want to be first. “We are not building a product suite. We are not racing to release features. But we are building a safe superintelligence. That is our one goal,” their site proclaims.
A Startup With No Product, Yet
So far, SSI has remained highly secretive. Its website contains little more than a mission statement. There’s no timeline, no demo, and no public roadmap—yet that hasn’t stopped the company from attracting billions in capital and some of the top minds in the AI field.
The funding and valuation place SSI firmly in the elite tier of AI companies, even without a product on the market. It is a bold contrast to the strategy of rivals that release incremental updates and chase commercial applications. For SSI, the bet is that one powerful breakthrough, done right, can reshape everything.
Why This Matters
Sutskever departed OpenAI in May 2024, months after the dramatic leadership crisis of November 2023, in which he was reportedly involved in efforts to remove CEO Sam Altman, an attempt that ultimately failed. His move to launch a new AI lab so soon afterward raised eyebrows, but it also signaled his continued belief in the mission of building artificial general intelligence (AGI) safely.
With global debates heating up over how AI should be regulated, controlled, and deployed, SSI is taking a purist’s approach: safety and capability must be built together, from day one.
As of now, there’s no clear launch date for SSI’s first model or even a prototype. But with $3 billion in the bank, a tight-lipped team of top-tier researchers, and a singular mission, SSI may be shaping up to become one of the most closely watched players in the next wave of AI development.