What ever happened to Jeff Hawkins’ version of AI?

The approach is fundamentally different from LLMs but still in the research phase after 20 years.
Jeff Hawkins is perhaps best known for producing the forerunners of the smartphones we all own. You might remember the name Palm Pilot.
In 2005 he and others started the company Numenta, which is researching AI systems quite different from the LLM-based AI that is now mainstream.
In his book “A Thousand Brains”, for which the evolutionary biologist Richard Dawkins wrote the foreword, Hawkins explains the approach in understandable terms. It’s inspired by the part of the human brain called the neocortex, basically the wrinkled outer layer of the brain we see in pictures.
The neocortex consists of not thousands but millions of repeating units called cortical columns, each roughly 2 mm tall and 0.5 mm wide, and each processing its own tiny bit of the world. Importantly, the columns work in parallel to produce a picture of the world as it will look a few moments from now.
It’s a little like how the pixels in a video work together to produce the sensation of movement. But while a video is prerecorded, the picture the cortical columns produce predicts the future. Each column does that by encoding multidimensional, multimodal information, including touch, sound, spatial relationships and even abstract concepts, all for its own little patch of the world.
Just as a pixel is not viewed in isolation, each column’s “picture of the world” is produced in the context of its neighbours. Together they form a dynamic network of knowledge.
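To make that voting among neighbours concrete, here is a minimal Python sketch (my own illustration, not Numenta’s code): each simulated column forms a noisy local guess about the object under its patch, and the population settles on whatever most columns agree on. The 70% accuracy and the object list are invented numbers.

```python
from collections import Counter
import random

OBJECTS = ["coffee cup", "pen", "stapler"]

def column_guess(true_object: str) -> str:
    """Each column senses only a tiny patch, so its local guess is noisy."""
    if random.random() < 0.7:          # assumed: most patches carry useful evidence
        return true_object
    return random.choice(OBJECTS)      # otherwise the column guesses blindly

# many columns work in parallel, then "vote" on a shared interpretation
guesses = [column_guess("coffee cup") for _ in range(1000)]
consensus = Counter(guesses).most_common(1)[0][0]
print(consensus)                       # almost certainly "coffee cup"
```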
Take the example of the coffee cup.
Imagine pixels that know their 3D location relative to your body and to a tiny bit of the cup, that can predict how that bit should change if you move or interact with it, that only activate when their bit of the cup enters their “field of view”, and that send signals to “motor pixels” to reach out and grab it.
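A rough sketch of one such “pixel”, under heavy assumptions: the (x, y) locations and expected sensations below are hand-written stand-ins for the reference frames a real cortical column would learn through movement and sensation.

```python
class CorticalColumnSketch:
    """A toy 'pixel': a tiny model of the cup, tied to locations in a reference frame."""

    def __init__(self):
        # assumed, hand-written associations: (x, y) location on the cup -> expected sensation
        self.model = {(0, 0): "smooth ceramic", (0, 5): "rim", (3, 2): "handle"}

    def sense(self, location, feature):
        expected = self.model.get(location)
        if expected is None:
            return "inactive"                      # this spot is outside the column's field of view
        if feature == expected:
            return f"predicted: {expected}"        # sensation confirms the prediction
        return f"surprise: expected {expected}, got {feature}"  # a mismatch would drive learning

column = CorticalColumnSketch()
print(column.sense((0, 5), "rim"))             # predicted: rim
print(column.sense((3, 2), "smooth ceramic"))  # surprise: expected handle, got smooth ceramic
print(column.sense((9, 9), "rim"))             # inactive
```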
Numenta aims to reproduce our brain’s system of cortical columns in code. Its approach is fundamentally different from the LLM-based, big-data systems that have gone mainstream in the last few years.
The AI we use today is statistical AI. It uses probabilistic methods to string together the words most likely to follow a given prompt. To produce each word, essentially the whole network of parameters learned from the training data has to be evaluated, so a single query costs a lot of energy. ChatGPT consumes as much electricity daily as 15 thousand households, and that’s not counting training.
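Stripped of all scale, that word-by-word machinery looks something like this; the candidate words and their probabilities are invented, and a real LLM produces such a distribution over tens of thousands of tokens by running its full network for every single word.

```python
import random

def sample_next_word(distribution: dict[str, float]) -> str:
    """Pick the next word according to the model's probabilities."""
    words, weights = zip(*distribution.items())
    return random.choices(words, weights=weights, k=1)[0]

# hypothetical distribution a model might assign after the prompt "The coffee cup is"
next_word_probs = {"empty": 0.35, "hot": 0.30, "on": 0.20, "blue": 0.15}
print(sample_next_word(next_word_probs))       # e.g. "hot"
```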
Numenta’s approach, however, allows for sparse computing. Just as in the human brain, only a small fraction of the synthetic cortical columns fire at any one time. The energy use is a tiny fraction of what an LLM needs: 100 to 1,000 times less.
Best of all, it can run on general-purpose CPUs, like the ones in drones.
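The sparsity idea itself fits in a few lines. This is a generic k-winners-take-all selection in the spirit of Numenta’s published work on sparse networks, not their actual implementation, and the 2% activity level is just an illustrative choice.

```python
import numpy as np

def k_winners_take_all(activations: np.ndarray, k: int) -> np.ndarray:
    """Keep only the k strongest activations; every other unit stays silent."""
    sparse = np.zeros_like(activations)
    winners = np.argsort(activations)[-k:]
    sparse[winners] = activations[winners]
    return sparse

dense = np.random.rand(10_000)                # a dense layer: every unit does work on every step
sparse = k_winners_take_all(dense, k=200)     # here only 2% of the units are allowed to fire
print(np.count_nonzero(sparse) / dense.size)  # 0.02 -- downstream computation scales with k
```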
For the first time, AI is truly useful in everyday work. In our software company it has changed our work profoundly: not only the speed at which we can produce solutions, but the very notion of what it means to be a developer.
But we also see downsides, like catastrophic forgetting and the fact that every session starts from the very beginning.
Numenta’s approach is already helping statistical AI overcome such shortcomings. In a recent project it was used to cut the cost and power consumption of the deep learning model BERT. And its Numenta Anomaly Benchmark is a widely recognised tool for anomaly detection in areas such as server monitoring and finance.
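To give a feel for the kind of task the Numenta Anomaly Benchmark scores, here is a deliberately simple streaming detector: a rolling z-score over an invented CPU-load series, nothing like the HTM algorithms NAB was actually built to evaluate. Each new value is compared with the recent past and flagged when it falls far outside it.

```python
from collections import deque
from statistics import mean, stdev

def streaming_anomalies(values, window=30, threshold=4.0):
    """Yield (index, value) for points that deviate strongly from the recent past."""
    history = deque(maxlen=window)
    for t, value in enumerate(values):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield t, value                 # far outside the recent norm
        history.append(value)

cpu_load = [0.30 + 0.01 * (i % 5) for i in range(200)]   # made-up, well-behaved server metric
cpu_load[150] = 0.95                                      # injected spike
print(list(streaming_anomalies(cpu_load)))                # [(150, 0.95)]
```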
We’re just at the very beginning of the AI revolution. What we currently have is amazing, even more so given how fast it has entered everyday life. But that’s just one approach. Other approaches, like Numenta’s, will most likely gain traction and merge with it.