I almost forget how much AI has already changed our work.

The stock markets might be cooling, but we’re just getting warmed up.
Using AI agents to write code feels so natural. It’s as if we’ve always done it this way.
“Global stock markets fall sharply over AI bubble fears”, says The Guardian. “Jamie Dimon, the head of the US’s largest bank, JP Morgan Chase, [warned] markets would crash in the next six months to two years.”
I understand nothing about stocks, but even I realise that giving such a wide range is not a prediction. And the head of the world’s largest bank calls himself Jamie? What, are you twelve?
Point is, we don’t care. We use AI for our benefit, our business, our day-to-day.
And it just feels so natural. Some of our guys use Cursor; personally, I use Warp. Doesn’t matter. None of us actually writes code anymore.
And it’s changed us. Changed the way we perceive our trade. We can no longer eke out ounces of pride from our knowledge of syntax. How much time have we wasted, anguishing over errors? All gone, in an instant.
The role of software developer has morphed from wordsmith to product owner in the space of six months.
But it comes at a cost. In the beginning, there was the fatigue. Vibe coding (does anyone still say that? It’s sooooo 2025 Q1) was very tiring, we heard some devs say. Me, I can’t quite replicate the feeling.
Then there was the frustration. AI caught in a loop. Stuck on some minor itch even a junior could scratch. Somehow we got over that.
We. I always say we. My AI insists on saying that it will “help” me code, but then goes ahead and does it itself. So I’ve gotten into the habit of saying “we need…”.
“We” now get into different kinds of trouble. “We” always start from scratch, “we” have no recollection of what we did yesterday, heck, five minutes ago. Such a bad memory.
My AI, the AI that millions use, is a very specific approach to Artificial Intelligence. It’s also quite a superficial approach, if you think about it. In a field of science that has promised so much and failed so often, the Large Language Models that underpin ChatGPT take a roundabout route: if intelligence is expressed in language, it’s language we should be processing. If we process language just right, we should be able to produce intelligence.
But from the depths of my struggles with My AI’s catastrophic memory loss, I just don’t see it.
I recently spoke about other approaches: Jeff Hawkins’ “A Thousand Brains”, the STC-AOG proposed by Song-Chun Zhu. Approaches that are diametrically opposed to the LLM-based models. That look more promising in that they approximate human memory and intelligence far more closely than models that are basically just giant language prediction machines.
But none are mainstream. Most are barely beyond the research phase.
In the meantime, we use the tools available to our advantage. We’ve settled into the new paradigm. In a few months, we won’t be using the term “AI” anymore. It’ll be just us, building stuff.
And the stock markets? Up or down, we don’t own shares so we don’t care.