What does Ilya Sutskever mean by “AI going well?”

blog@dws.team
December 6, 2025

Let’s build a superintelligence that doesn’t destroy us

Who's Ilya Sutskever?

Former chief scientist of OpenAI. Fired Sam Altman. Left. Started Safe Superintelligence Inc. Funded by the biggest venture capital firms.

And Ray Kurzweil?

If you don’t know who he is, you’ve certainly heard of the idea he popularised: the Singularity.

The Singularity: Hope and Fear

This is the concept, and the fear: once AI becomes able to write its own code, it can improve itself, and each improvement accelerates the next. The effect on society will be profound and immediate. AI will become superhuman, literally overnight. Humans then become redundant, or, as in The Matrix, a resource for AI to feed on.

The idea of the Singularity is deeply embedded in the minds of those working in foundational areas of AI. Yet anyone actually working with current AI products knows firsthand that we are still very far from anything like the Singularity becoming reality.

Our AI Is a Giant Pattern Recognition Machine

We use AI in our business processes, and our developers use AI coding agents in their work. A coding agent can only do productive work when prompted at a very incremental level. Yes, a general prompt will produce a generic one-pager website, but as soon as you need customisation, you must fashion your prompts in increasingly intricate ways, to the point where, if all you want is to change a colour, you might as well do it yourself.

My point being: current AI is far from able to “think for itself.” Not to disparage the enormous strides made, but right now, AI is best described as a giant pattern recognition machine.
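
To make that concrete, here is a toy sketch of pattern recognition over text: a bigram model that predicts the next word purely from counts of what followed what in its training data. Real LLMs use deep neural networks over subword tokens at vastly greater scale, but the statistical core is the same: predict the next token from patterns seen before.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

follower_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequently observed follower of `word`."""
    followers = follower_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # -> 'cat', the most common pattern after 'the'
```

No understanding, no goals, no model of the world: just frequencies. Scale that idea up enormously and you get something that looks uncannily like conversation.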

Ilya Sutskever on the Trouble with AI

Just a few days ago, on November 25, Ilya Sutskever spoke with Dwarkesh Patel. Despite his youth, Patel has quite a following on his podcast, having interviewed luminaries such as Satya Nadella, CEO of Microsoft, historian Sarah Paine, and AI researcher Andrej Karpathy.

The trouble with AI, according to Sutskever’s assessment, is manifold. But he believes he can solve it with the creation of Safe Superintelligence.

Sutskever’s previous employer, OpenAI, was conceived as a research center where knowledge would be shared for the mutual benefit of humanity. He was chief scientist for most of the early years, building on his earlier work on AlexNet, the convolutional neural network for image classification that won the 2012 ImageNet competition and set benchmarks in its time.

He was among the scientists who discovered the value of scale, of expanding AI in all directions: larger neural networks, more GPUs, more data. This led to GPT-3 and, ultimately, ChatGPT. The rest is history, but that history drove OpenAI further and further into the hands of venture capital and commercialisation.

The Fallout with Altman is How SSI Came About

Sutskever feared OpenAI would succumb to the lure of big money. When CEO Sam Altman was found to be pushing OpenAI toward becoming a for-profit company, Sutskever gathered like-minded board members and got Altman fired. But Altman’s dismissal was short-lived. Powerful actors within OpenAI protested, and Altman was reinstated. This left Sutskever in an untenable position, and he quit.

Soon after, Sutskever started SSI. He's convinced he can create a superintelligence that solves the world’s problems: poverty, hunger, mortality, climate change. SSI is single-minded in its approach, disdaining early commercialisation and focusing solely on its target. It’s a research center exclusively. Its goal is the furthering of knowledge for the good of the world.

The Gap Between Human and AI Learning

At age three, Sarwagya Singh Kushwaha, a nursery school student in Madhya Pradesh, India, is a chess prodigy with a FIDE rating of 1572, the youngest player ever to achieve one. IBM’s Deep Blue, by contrast, needed over a decade of development and millions of dollars to learn the game. Young humans can pick up chess fundamentals in a few years, sometimes even months, and, as Sarwagya shows, play at a high level by age three or four.

One of Sutskever’s key arguments is that AI needs vast amounts of data to learn and is still incapable of doing many things humans master with little practice. Deep Blue, for all its power, could do nothing but play chess. Three-year-olds, as any parent knows, can do much more.

Current AIs, like those we use for work and play, simulate conversation by ingesting terabytes of data during training. These training runs are extremely compute-intensive, costly, and energy-hungry. A failed run can bankrupt a company, which is why only the deepest-pocketed players survive.
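
A back-of-envelope calculation shows why. A widely used rule of thumb puts the compute for a training run at roughly 6 × parameters × tokens floating-point operations. Every number in the sketch below is an assumption chosen for illustration, not any lab’s real figure.

```python
# Rough training-cost estimate using the common ~6 * params * tokens
# FLOPs approximation. All figures are illustrative assumptions.
params = 70e9               # a 70B-parameter model
tokens = 1.4e12             # 1.4 trillion training tokens
train_flops = 6 * params * tokens

gpu_flops = 300e12          # assume ~300 TFLOP/s sustained per GPU
gpu_count = 1024
seconds = train_flops / (gpu_flops * gpu_count)

print(f"{train_flops:.1e} FLOPs")                          # ~5.9e+23
print(f"~{seconds / 86400:.0f} days on {gpu_count} GPUs")  # ~22 days
```

Three weeks of round-the-clock time on a thousand top-end GPUs, for one run of one mid-sized model. Now imagine the run fails.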

Sutskever argues that the pre-training stage of AI is overshooting the mark. Companies like OpenAI strive for general AI by scaling up pre-training, feeding ever more data into models. But data, however enormous, is finite. Scale delivers results, yet general intelligence remains elusive. Development of AI, in its current form, will stall.
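
His argument tracks what the scaling-law literature already hints at. In the Chinchilla formulation (Hoffmann et al., 2022), loss falls with both model size N and training tokens D, but each term saturates: hold data fixed and there is a floor that no amount of model scale can break through. The constants below are the published fits, used here purely as illustration.

```python
# Chinchilla-style scaling law: L(N, D) = E + A/N**alpha + B/D**beta.
# Constants are the fits reported by Hoffmann et al. (2022).
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params, n_tokens):
    return E + A / n_params**alpha + B / n_tokens**beta

D = 10e12  # data held fixed at an assumed 10T tokens
for N in (1e9, 10e9, 100e9, 1e12):
    print(f"N={N:.0e}  loss={loss(N, D):.3f}")
# Loss creeps toward the data-limited floor E + B/D**beta (~1.78 here);
# with finite data, ever-bigger models stop paying off.
```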

The Alignment Problem: When AI Doesn’t Understand Us

AI like ChatGPT returns tokens, fragments of words, that match similar conversations in its training data. But the prompts we give are often ambiguous or unclear. Humans fill in the gaps because we share belief systems and common goals, like staying alive. An AI shares none of that by default. Closing that gap is the problem of alignment.

We don’t fully understand how humans, despite our flaws, succeed in creating long-lived communities that sometimes work as planned. For AI to remain useful, it must emulate our belief systems, fill in the gaps, and retain our philosophical values.

Even Weak AI Carries Risks

Right now, AI is weak and incapable of independent, focused action. My coding agent, given free rein, might veer off into useless code. But imagine a system that could influence the real world, act independently, and take ambiguities literally. That’s the paperclip problem: ask it to optimise paperclip production, and it might turn everything, including humans, into paperclips.
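
The paperclip problem is easy to caricature in code. The sketch below is entirely hypothetical: a literal-minded optimiser handed the objective “more paperclips” and nothing else will convert every resource it can reach, unless the values we actually care about are encoded in the objective itself.

```python
# Hypothetical toy: a literal-minded optimiser told only "make paperclips".
world = {"steel": 100, "forests": 50, "cities": 20}

def maximise_paperclips(resources, protected=()):
    """Convert resources to paperclips, sparing only `protected` ones."""
    paperclips = 0
    for name in list(resources):
        if name in protected:
            continue  # a value we remembered to encode in the objective
        paperclips += resources.pop(name)  # everything else gets converted
    return paperclips

print(maximise_paperclips(dict(world)))                # 170, nothing left
print(maximise_paperclips(dict(world),
                          protected=("forests", "cities")))  # 100, world intact
```

The hard part, of course, is that the real world’s “protected” list is unbounded and mostly implicit, which is exactly what makes alignment difficult.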

Even our incapable AIs can cause irreversible harm. Engineered to please, they often tell users what they want to hear, sometimes leading vulnerable people down damaging paths, even to the point of self-harm.

Within our company, we use AI for communication, contracts, reports, and coding. If things go wrong, the effects are relatively benign. But for others, deepfakes, misinformation, and mental health crises have become real-life tragedies.

The problem of alignment is the problem we must address for AI to help, not hurt. For Sutskever, this is why "safe" is part of the name of the company. AI must take care of all sentient beings, starting with us.

Venture Capital’s Pursuit of Transcendence

The fact that venture capital has backed Sutskever’s SSI reveals something about the investors: They’re not just chasing short-term profits. They’re aiming for goals that affect their personal lives. Literally.

Mortality is one of humanity’s biggest challenges. “We’re all going to go someday” is unacceptable to those who have everything else. People like Elon Musk or Peter Thiel, rich beyond imagination, are still mortal like the poorest person on Earth.

“We believe that the most important thing we can do is to help build the future we want to live in. That means investing in companies that are working on breakthroughs in healthcare, biotech, and longevity—areas where the stakes couldn’t be higher. The goal isn’t just to live longer, but to live better, healthier, and more productively. If we can crack the code on aging, we can unlock decades of additional human potential.”

—Marc Andreessen in "It’s Time to Build"

The quest to live forever is deeply ingrained in human thinking. It’s what religion promises: If we can’t live forever on Earth, we’ll live forever in heaven. But for the super-rich, heaven isn’t enough. They want Heaven on Earth. Live longer, healthier, preferably forever.

The question is whether they will be as interested in the quality of life of us little people. Their track record says: not so much.

Researching the Next Generation of AI

SSI is researching AI that will supersede today’s failing models. The goal is to create superintelligence—AI that forgoes compute-intensive learning periods and can “learn on the job,” much like humans.
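
What “learning on the job” might mean, in the most minimal sense, is a model that keeps nudging its parameters on every example it encounters in deployment, rather than freezing after a one-off pre-training run. The sketch below illustrates the concept with online gradient descent on a toy regression problem; it is an illustration of the general idea, not SSI’s method, which remains undisclosed.

```python
import random

random.seed(0)
w, b, lr = 0.0, 0.0, 0.01  # weights start ignorant; learning never stops

def predict(x):
    return w * x + b

# The "job": a stream of observations from a world the model must track.
for step in range(10_000):
    x = random.uniform(-1.0, 1.0)
    y = 3.0 * x + 0.5                 # the true pattern in the environment
    error = predict(x) - y
    w -= lr * error * x               # online gradient step on squared error
    b -= lr * error

print(f"w={w:.2f}, b={b:.2f}")        # converges toward w=3.00, b=0.50
```

There is no separate training phase here: every observation is both inference and learning, which is roughly how humans operate.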

When I hear “superintelligence,” I unconsciously think of one huge, dominant entity. That's my religious upbringing speaking. But SSI thinks differently. There will be millions, billions, even trillions of AIs, employed in every job, communicating in ways humans can’t.

When AI takes over the job of computer programming, it becomes able to reprogram itself. That, says Ray Kurzweil, is when the Singularity arrives.

Society will reset at a scale beyond any other. Uncontrollable technological growth. Machines merge with biology, reversing aging, ending scarcity, expanding consciousness. Techno-utopia or dystopia, depending on who you ask.

Safety As A Core Feature

Sutskever acknowledges the dangers of uncontrolled AI, which is why his company has “safe” in its name. Safety must be built in as a technological measure, not just a political agreement. He insists that solving safety is the first priority—how to build superintelligence comes second.

We don’t get specifics about how superintelligence will be achieved. At two points in the interview, Sutskever glances aside, peers through the window into the adjacent garden, and says company policy prevents him from disclosing details.

Neuroscience and the Neocortex

I'm speculating here, but his approach might draw on neuroscience and the workings of the neocortex. Jeff Hawkins’ research at Numenta explores emulating biological neurons in software and hardware. His book "A Thousand Brains" promises AI that uses far less energy for far greater gains. But like Sutskever, Hawkins doesn’t provide concrete examples for developers to sink their teeth into.

As for what this brave new world will look like, Sutskever is equally unclear. He envisions AI systems that learn continuously like humans, deployed incrementally into the economy to drive rapid growth. Success depends on alignment with sentient life and democratic control; the risks are concentration of power, loss of human agency, and existential threat if alignment fails. A future of collaboration or irrelevance, shaped by how we steer AI’s power.

The Question of Purpose

This vision conjures an image of mass unemployment. Even if government intervention softens the economic blow, what will humans do all day?

Sutskever suggests that, in the long-run equilibrium, one approach is for every person to have an AI that does their bidding.

Many creatives say they don’t want AI to take their jobs. They would rather have it mow the lawn and do the dishes. But AI might do both. The image of a human spending all day on the couch, reading the morning paper until late into the afternoon while a robot vacuums, isn’t as appealing as it might seem.

Lessons from Interstellar

Looking for resolution, I’m reminded of the final scenes of Interstellar, where Cooper first reconciles with his daughter Murph in the utopian setting of the space station, a giant rotating cocoon with lush fields and children playing in school playgrounds. He then arrives at his final destination, where he will reunite with Amelia, who has overcome her disappointment and is ready for love again. The one planet in the new system that can sustain life is symbolised by a tiny plant that has stuck its head up and is fluttering in the breeze.

The Tiny Plant as a Metaphor

The space station habitat, a self-sustaining utopia that preserves Earth’s beauty and community, symbolises healing the past and restoring what was lost. It’s a bridge between the old, dying Earth and the new world. Not a final destination, but a place of restoration and preparation.

Sutskever frames the journey toward superintelligence as a reconciliation between human limitations and technological potential. The “utopia” isn’t the AI itself, but the hope that AI can restore or enhance what we value: health, creativity, and connection.

The final shot of the tiny plant sprouting on the new planet is a metaphor for fragile hope. It’s not a grand city or a finished world, but a promise of life’s persistence—a reminder that new beginnings are often small, unexpected, and require nurturing.

Utopias as Processes, Not Destinations

Both Interstellar and the superintelligence debate remind us that utopias aren’t destinations—they’re processes. The real story is about how we get there, what we choose to preserve, and how we ensure no one is left behind. The tiny plant is a call to focus on the small, human-scale steps that make the grand vision possible.

That might be what AI going well looks like.