AI is like anything else. It’s not the invention that counts, it’s what you do with it.

blog@dws.team
January 3, 2026

We learned to control fire, then to cook with it. We learned to control our larynx, then to talk with it. What are we going to do with AI?

It seems that whenever science has identified a behaviour that differentiates humans from other animals, researchers find a species that does it too. Chimps and even ants make tools, for example. Whales produce sounds that convey meaning.

The invention itself, however, is not the point. The point is that it becomes a starting point, one that evolves into something very different. Something much more complex and marvellous.

For AI companies, reasoning is the holy grail.

The hardest thing. But what do they mean by it? Do they mean the longest jumps, connecting concepts light years apart to form something completely new?

As a teenager I was an avid reader of science fiction. There’s a book, a passage of which always comes back to me when the subject of jumps of imagination comes up.

The writer had a term for these jumps: the jaunte. Humans had found a way to teleport, to physically displace themselves outside of the realm of time. But their teleportation capabilities are limited, maybe it’s line of sight or something else, I don’t remember. The holy grail of being able to jaunte through outer space eludes them.

But our hero finds a way, and the story ends with what for me is a metaphor of leaps of imagination. Now they can jaunte anywhere: one moment they’re at the moons of Jupiter, the next instant around Alpha Centauri, then at the centre of some faraway galaxy.

We’re a small software services company that builds and manages custom applications: unique solutions to product problems. Like many companies, we’ve become accustomed to using AI throughout our business, and since we’re developers, we use coding agents.

My coding agent of choice is Warp, which lives in the terminal rather than in an IDE. Whether by accident or by design, the agent sometimes shows its reasoning process.

Reasoning in AI does not at all resemble leaps of imagination.

What does “reasoning” mean in AI? It’s framed as the ability to connect disparate concepts, to make inferential leaps, or to generate new ideas from existing knowledge. But current AI systems operate within a far more constrained space. Their “leaps” are bounded by the data they’ve been trained on and the algorithms that guide them.

We say reasoning process, but what we’re seeing is a form of step-by-step problem-solving.

Pattern matching: Recognising similarities between the current problem and problems it has encountered before.

Decomposition: Breaking down complex tasks into smaller, manageable steps.

Iteration: Refining solutions through feedback loops, often invisible to the user.
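The three steps above can be sketched in a few lines of code. This is a deliberately naive toy, not how Warp or any real coding agent works; every name and string in it is hypothetical, invented purely to illustrate the pattern-match, decompose, iterate loop.

```python
# Toy sketch of agent-style "reasoning": pattern matching, decomposition,
# and iterative refinement. Purely illustrative; real agents are vastly
# more sophisticated.

def match_pattern(task, known_patterns):
    """Pattern matching: find a previously seen problem shape in the task."""
    for pattern, plan in known_patterns.items():
        if pattern in task:
            return plan
    return None

def decompose(plan):
    """Decomposition: split a plan into smaller, manageable steps."""
    return plan.split(" then ")

def solve(task, known_patterns, check=lambda result: True, max_iterations=3):
    """Iteration: execute the steps, verify via feedback, retry on failure."""
    plan = match_pattern(task, known_patterns) or "inspect the code then fix it"
    steps = decompose(plan)
    for attempt in range(1, max_iterations + 1):
        result = [f"{step} (attempt {attempt})" for step in steps]
        if check(result):  # the feedback loop, often invisible to the user
            return result
    return []

patterns = {"failing test": "read the traceback then patch the code then rerun tests"}
print(solve("a failing test in the CI pipeline", patterns))
```

Nothing here leaps anywhere: every output is a recombination of what the system already held, which is exactly the point of the sections that follow.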

Mind you, this is genuinely impressive. In practice, it’s the difference between having your coding agent stuck in a loop trying to solve a problem a junior could fix in a second, a situation I faced regularly just months ago, and today, when such quirks are almost nonexistent.

Discontinuity is key for true creativity. Science and philosophy show how disjointed we are.

The “jaunte” in my science fiction analogy represents something beyond incremental improvement. Discontinuity.

For me, philosophy and science fiction lie very close to each other.

Let’s bring Ludwig Wittgenstein and Friedrich Nietzsche to the stage: two philosophers with very different approaches to creativity.

“The limits of my language are the limits of my world.”

Born into one of the wealthiest families of Austria, he gave away most of his fortune, a large part of it to the arts. He wrote his Tractatus Logico-Philosophicus in the trenches of WWI: a work of pure logic, mapping propositions onto the world. But it is his later critique of this, his own work, for which he is best known. Language games. Or perhaps better, a game theory of language.

Read the Wikipedia entry “Language game” and you’ll discover that Wittgenstein explains language as a fluid, context-driven set of learned structures. If you keep LLMs in the back of your mind, you can’t help noticing similarities to how an AI predicts the next token based on pattern recognition and context.
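To make the comparison concrete, here is the crudest possible version of "predict the next token from context": a bigram counter. Real LLMs use learned representations over vast contexts, but the idea that the likely continuation is recovered from observed patterns of use, rather than from fixed definitions, is the same. The corpus and function names below are invented for illustration.

```python
# Toy next-token predictor: count bigrams in a tiny corpus and predict
# the most frequent continuation. A caricature of an LLM, but the
# "pattern recognition plus context" idea is visible even here.

from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which in the corpus."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequently observed continuation of `word`, if any."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

model = train_bigrams("the game has rules the game has players the game evolves")
print(predict_next(model, "game"))  # → "has"
```

Meaning, on this picture, is use: "game" is followed by whatever the corpus most often made it mean.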

Unsurprisingly, therefore, Wittgenstein’s stance on creativity is grounded in language.

Just as a game’s rules enable play, language’s structures enable meaning. Creativity is about playing with rules. Ideas and concepts aren’t connected by rigid definitions but by overlapping similarities. Creativity involves discovering these similarities, making new connections.

Again, very close to how my coding agent does reasoning.

How different is Nietzsche.

“One must still have chaos in oneself to be able to give birth to a dancing star.”

For Nietzsche, creativity is an expression of a primal, irrational force that transcends reason and convention. He associates it with chaos, with the instinctual, the ecstatic. In contrast to order, logic, and form.

Nietzsche distrusts systematic thought and language, seeing them as constraints to the raw, vital energy of creativity.

We can’t understand Nietzsche without understanding his sources. He was an avid student of Greek mythology and philosophy, and his basic premises came from his understanding of the contrast between the Dionysian and the Apollonian, after the Greek gods Dionysus and Apollo. The first stands for the wild, ecstatic celebrations of life, with copious amounts of wine consumed; the second for thoughtful reason and rule-giving.

The Dionysian, says Nietzsche, is the true source of creativity.

Modern neuroscience is bridging instinct and language.

Adam Marblestone is an important figure in neuroscience, in particular in research into brain mapping. His work includes founding research companies that position themselves at the cutting edge between biotechnology and AI.

I want to bring him into play because of his insights into how the different parts of the brain work together: the older “reptilian” structures and the neocortex.

Marblestone builds upon decades of study into how our brains work, but more than that, upon centuries of understanding that we humans are inherently internally discontinuous, but that the very conflict is what makes us who we are.

The older parts are for instinct, survival, and automatic responses. Fast, focused on immediate needs. Think fight-or-flight reactions. The Dionysian parts.

The Apollonian is the neocortex. The seat of higher cognition, reasoning, language, and abstract thought. It’s flexible, adaptive, and capable of long-term planning and creativity.

The neocortex doesn’t replace older systems; it’s built on top of them, creating a tension between instinct and deliberation, automation and innovation. This tension is central to human creativity and decision-making.

The neocortex might generate a novel idea, but the limbic system filters it through emotion. Does this feel exciting, or threatening?

The brainstem ensures that even our most abstract thoughts are grounded in bodily needs and constraints.

Wittgenstein is all neocortex, Nietzsche promotes the reptilian. Marblestone says we’re both.

Our AI doesn’t think like us. It has no urge to think at all.

If we think of AI systems as analogous to the brain, most current models are like hyper-developed neocortices. They’re excellent at pattern recognition and logic, but lacking the “reptilian” layers that provide urgency, emotion, instinct.

The “jaunte” is a whole-brain phenomenon. It combines pattern recognition and anger, like the hero of the story I mentioned above. (It’s by Alfred Bester, original title “Tiger! Tiger!”.) Our hero evolves from simpleton to genius through a series of traumatic events, one of which is being captured by a clan that enforces full face tattoos. His is of a tiger, hence the title. After escaping, he has the tattoo removed at excruciating pain, but at every fit of high emotion it becomes visible again.

The tiger tattoo stands for his animal response to inflicted hurt. He’s driven to become highly educated and deliberative, rising from his lowly origins to become a member of high society. But in the end it’s his tiger nature that steers him towards his goals of vengeance, and to his ultimate “jauntes” to the stars.

For AI to achieve something similar, it might need systems that mimic the brain’s hierarchical integration, where “older” layers, such as reinforcement learning for survival, interact with “newer” layers, transformers for abstract reasoning.

In fact, that’s exactly what’s proposed by Ilya Sutskever, CEO of Safe Superintelligence Inc., when he discusses reward functions.

In AI, reward functions are to take the place of our inner urges.

The reptilian brain and limbic system provide basic drives like hunger, fear, curiosity, that shape behaviour. In AI, the reward function plays a similar role.

For Sutskever, reward functions are the foundation of AI alignment and capability. The challenge is to design reward functions that are robust, interpretable, and grounded in human values. Robust as in unbreakable and unhackable; interpretable as in understandable and verifiable; grounded in values such as never doing a sentient being harm. Yes, like the Three Laws of Robotics of Isaac Asimov, another science fiction writer who should be studied more closely.

Sutskever has hinted that true intelligence requires AI to develop its own subgoals and abstractions, not just follow human-specified rewards. But isn’t this exactly what we’re trying to avoid? How does Sutskever intend to counter the danger this implies?

Yet this is where the “jaunte” could emerge: an AI that doesn’t just interpolate between known solutions but is able to leap to new strategies, because its reward function encourages exploration, not just exploitation.
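The exploration-versus-exploitation trade-off has a classic textbook illustration: the epsilon-greedy bandit. This toy is not what Sutskever or any frontier lab actually builds; it only shows, in miniature, how a single knob in the reward-seeking rule decides whether an agent ever tries anything it doesn't already believe in. All values below are invented.

```python
# Epsilon-greedy action selection: with probability epsilon, explore a
# random strategy; otherwise exploit the one currently believed best.
# Epsilon = 0 means pure exploitation and no "leaps" at all.

import random

def epsilon_greedy(estimated_rewards, epsilon, rng=None):
    """Pick an arm index: explore with probability epsilon, else exploit."""
    rng = rng or random.Random(0)  # seeded for reproducibility in this demo
    if rng.random() < epsilon:
        return rng.randrange(len(estimated_rewards))  # explore: any strategy
    # exploit: the arm with the highest estimated reward
    return max(range(len(estimated_rewards)), key=estimated_rewards.__getitem__)

rewards = [0.2, 0.9, 0.5]  # hypothetical estimates for three strategies
# Pure exploitation always picks the best-looking strategy (index 1).
print(epsilon_greedy(rewards, epsilon=0.0))  # → 1
```

A positive epsilon is the crudest possible stand-in for a reward function that values discovery; the open question in the text is how to get something far richer than a random coin flip while staying aligned.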

Sutskever’s challenge is to allow the “jaunte” while keeping the AI aligned. The Napoleons, Hitlers, Putins of our world are geniuses, but their genius is evil. What we need is more Wittgenstein and Nietzsche. Einstein. We need benevolent genius. We want Nietzsche’s dancing star, not the Death Star.

If we can get AI to help us, not hurt us, then maybe, just maybe, we can solve the world’s problems.

Some of the biggest issues we now face as a species are solvable if only we could get our act together and point our noses in the same direction. But others need breakthroughs we’ve not yet been able to provide, breakthroughs that would help millions.

Fusion reactors could supply clean, localised energy, yet even after decades we still seem far off. The path followed now leads to one dead end after another, owing to the extreme demands on materials and techniques needed to contain the immense heat required to trigger fusion.

At one point, though, there was talk of “cold fusion”. A hoax, of course. But what it represents is the discontinuous “what if” that could lead us in an entirely new direction.

What if AI could “jaunte” that field of study into new paradigms?

For that, we ourselves need to “jaunte” our way into building an AI that is capable of leaps of imagination, yet aligned to our needs and purposes.

Through eons, and no small measure of serendipity, evolution built a neocortex around the reptilian, instinctual brain of our faraway ancestors. It took hundreds of millions of years before one species figured out what best to do with it.

With the AI we use today, we’ve synthesised the ring of language around our reptilian, animal urges and instincts. It turns out, though, that true intelligence is something other than language. Without our inner urges, language means nothing. And so, to become completely useful and productive, we’ll need to figure out how to provide our AI with urges and instincts of its own.