How robotics, AI and evolution are intertwined.

Understanding evolution is core to understanding where we are in AI and robotics. Which insights would help our company improve its AI-powered ticketing system?
At our small software company, we use AI at scale. Agents take on tickets from our ticketing system, do them, commit and push, comment the ticket and set it to the appropriate status.
To handle tickets, we’ve set global rules, and the agents have created functions that enable them to process tickets: functions to read them, functions to add comments, etc.
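To make this concrete, here is a minimal sketch of the kind of helper functions our agents generate, shown against an in-memory store for illustration; the real versions call our ticketing system's HTTP API, and all names and fields here are made up.

```python
# Illustrative ticket helpers; the real ones talk to our ticketing
# system's API. Here an in-memory dict stands in for that system.

TICKETS = {
    "T-101": {"status": "open", "comments": []},
}

def read_ticket(ticket_id):
    """Return a ticket's current state."""
    return TICKETS[ticket_id]

def add_comment(ticket_id, text):
    """Append a comment to a ticket."""
    TICKETS[ticket_id]["comments"].append(text)

def set_status(ticket_id, status):
    """Move a ticket to a new status, e.g. after a commit is pushed."""
    TICKETS[ticket_id]["status"] = status

# The agent's typical closing moves on a finished ticket:
add_comment("T-101", "Fix pushed in commit abc123.")
set_status("T-101", "in_review")
```

The point is not the code itself but its provenance: each of these functions was written ad hoc by an agent, which is exactly why the setup is brittle.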
The problem, however, is that the system is quite brittle. It depends on a single tool (the one I use, Warp), the rules are devised ad hoc, and they're global only for me and my machine, not for my colleagues.
I’ve read about orchestration systems such as Gas Town, but that kind of all-encompassing system seems too much for our needs.
But maybe the answer is sitting right under our nose.
If you look closely at what the coding agent is doing, you might notice that when it fails, for example when it encounters an unexpected status, it falls back to a Python shell to handle the issue. These one-off sequences can get pretty complicated; I’ve seen many that exceed 50 lines of code.
You may see the remote API as an encounter with the real world and the coding agent's response as a knee-jerk reaction.
It occurred to me that this is exactly what the relationship is between robotics and AI. And it just might help us improve our system.
Let’s scroll back a bit.
The history of AI has had a lot of ups and downs: several so-called AI winters, where funding dried up because initial enthusiasm collided with reality. The same goes for robotics, which has remained merely promising since the fifties.
It’s only now that both fields are being taken seriously. AI, of course, took off after researchers realised the importance of scale: exposing neural networks to massive datasets, with learning algorithms that could produce useful, human-like responses to user questions.
Initial AI and initial robotics had this in common: they were rule-based, and crucially, the rules were written by humans. Slow, pathetically slow, exceedingly expensive, were those humans.
Now that AI can write its own rules, the game has changed. The example I gave shows that when AI encounters reality, albeit a reality on par with its own ephemeral existence as a software program, it can respond to the shape and structure of that other entity, namely the API it encounters.
For me, this seems fundamental because it looks exactly like those videos of microscopic creatures that respond to their environment.
Robotics is the art of processing sensory information in real time.
I was going to start this sentence with “Just like an AI encountering a remote API…”. But of course, robotics is ridiculously more complicated. Every attribute of an object in the real world is an API: its temperature, its moisture, its rigidity, its viscosity, its size. Every adjective you’ve ever heard of is a property or method of the “API of things”.
But here’s the thing. Were my AI coding agent to respond to the “API of things” the way it responds to our ticketing system, it would be dead in seconds. My AI is much too slow. By the time its first line is written it’ll be overwhelmed.
So how does modern robotics combine AI and sensory data to survive encounters with the real world?
A big challenge is latency. Robots use specialised hardware such as FPGAs to process sensory data locally, at the spot where the interaction with the real world takes place: when they touch, at the joints, when light enters sensors. My AI’s 50-line Python shell response would be too slow; robots need sub-millisecond reactions.
But FPGAs are static, preprogrammed sensory devices.
One of the key issues in robotics is how to handle novel occurrences. In animals, response to external (and internal) factors is regulated by systems that have developed across millions of years.
Some responses, such as retracting when coming into contact with a hot surface, can be thought of as hard-coded and unchanging, because temperature has been around for all time. Other responses change as the environment changes. You may think that FPGAs could play this role in robotics, but I understand that reprogramming them is a slow, deliberate process, much slower, indeed, than an AI coding agent adapting to a novel response from a remote API.
Modern robotics increasingly combines FPGAs (and other controllers) for hard-coded reflexes, with AI for adaptive responses:
FPGA/AI Pipeline:
FPGA pre-processes sensory data (e.g., filtering noise, extracting features).
AI interprets the processed data and makes decisions (e.g., "Is this object a threat?").
FPGA executes low-level actions (e.g., "Move arm to avoid collision").
Example: Boston Dynamics’ robots use a mix of pre-programmed balance algorithms (FPGA-like) and AI for navigation and object manipulation.
FPGAs and AI are complementary. The future of robotics lies in layered architectures that combine the speed of FPGAs with the adaptability of AI.
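The layered architecture above can be sketched in a few lines of Python. This is a toy model, not real robotics code: the threshold, signal names, and stub decision rule are all invented for illustration, and the “AI layer” is just a placeholder function where a learned model would sit.

```python
# A toy model of the layered pipeline: a fast, hard-coded reflex
# layer gets first say; the slow, adaptive layer runs only when
# no reflex fires. All values and rules are illustrative.

REFLEX_LIMIT = 80.0  # hypothetical joint temperature limit, in °C

def reflex_layer(sensor_value):
    """FPGA-like reflex: trip instantly on a known, fixed limit."""
    return "EMERGENCY_STOP" if sensor_value > REFLEX_LIMIT else None

def ai_layer(features):
    """Adaptive layer: interprets pre-processed features.
    Stands in for a learned model; here it's just a stub rule."""
    return "avoid" if features.get("obstacle") else "proceed"

def control_step(sensor_value, features):
    """One tick of the loop: reflexes first, AI only as fallback."""
    action = reflex_layer(sensor_value)
    if action is not None:
        return action
    return ai_layer(features)

print(control_step(95.0, {}))                  # reflex layer fires
print(control_step(40.0, {"obstacle": True}))  # AI layer decides
```

The design point is the ordering: the cheap, deterministic check always runs before the expensive, adaptive one, which is exactly the property we’ll want in our ticketing system.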
How the debunked triune brain model could help us think about our company’s AI-powered ticketing.
Let’s bring in a comparison between neuroscience, robotics, and AI, where “old brain” functions are fast, preprogrammed responses to environmental changes, and the neocortex fulfils the role of slow, rational thinking.
The triune model was first proposed in the 1960s by the physician and neuroscientist Paul D. MacLean. That’s a long time for a scientific model to survive criticism, and indeed it hasn’t. The model is now considered overly simplistic and of little use as a basis for further research in neuroscience.
However, for the purposes of devising a model that could help us improve our use of AI coding agents within our company it might just prove helpful.
The model might seem familiar, as it’s been widely cited in popular media. It divides the brain into these three layers:
Reptilian Brain (Basal Ganglia/Brainstem): Fast, automatic, survival-driven, fight-or-flight, balance.
Limbic System (Paleomammalian): Emotions, memory, and basic social behaviors.
Neocortex: Rational thought, planning, language, and abstraction.
Mapped to robotics/AI, we get this:
🦎 The reptilian brain maps to FPGAs and other embedded systems for hard-coded, low-latency reflexes; an example is the emergency stop in a robotic arm. But developing these systems is a slow, methodical process. For us humans it took millions of years of evolution; reprogramming an FPGA is faster, but even so, changes to these components are very time-consuming and challenging.
🦾 The limbic system maps to classic control algorithms that deliver pre-programmed logic for known scenarios, for example motor-speed controllers. It can be compared with traditional software development: changes can be made much faster than in the reptilian layer, but the results are still just functions with arguments, unable to adapt outside their predefined use case.
💭 The neocortex is the brain’s AI: adaptive, slow, and able to handle novelty and complex decision-making. An example would be AI planning a robot’s path in a dynamic warehouse. In today’s software world, this is the AI coding agent.
Our workflow as a triune system.
Now let’s improve our system based on what we’ve just learned.
The reptilian brain maps to Jira API rules and automated triggers: instant actions such as “auto-assign high-priority tickets”. These processes exist completely outside our automated ticketing system, in that we use existing features delivered by third-party applications.
The limbic system then maps to Celery tasks in our automation system, for example context-aware routing (e.g., “this ticket is from client x, use template y”). In traditional software this would be middleware, capturing a request before it reaches the application proper. Of course, we’d use AI to create this middleware, but once in place it would act just like regular software.
The neocortex is then the AI coding agent: adaptive problem-solving, doing tickets and fixing issues, generating 50-line Python fixes for edge cases.
At every step our triune system becomes more aware of the ticket's content. Crucially, moving forward, we’re using AI only for the actual work of resolving the issue, instead of losing cycles on handling intermediate steps, steps that could easily be done with outside systems (reptilian), or traditional software (limbic).
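The triune pipeline can be sketched as three plain functions chained together. Everything here is hypothetical: the rule in the reptilian layer, the client-to-template table in the limbic layer, and the stubbed-out agent all stand in for real Jira triggers, Celery tasks, and our coding agent.

```python
# A sketch of the triune ticket pipeline. Each layer does what it
# can and passes the ticket on; only the last layer involves AI.

def reptilian_rules(ticket):
    """Instant, hard-coded triggers (really: Jira automation),
    e.g. auto-assigning high-priority tickets."""
    if ticket.get("priority") == "high":
        ticket["assignee"] = "on-call"
    return ticket

def limbic_middleware(ticket):
    """Context-aware routing for known scenarios (really: Celery
    tasks); the client/template mapping is illustrative."""
    templates = {"client_x": "template_y"}
    client = ticket.get("client")
    if client in templates:
        ticket["template"] = templates[client]
    return ticket

def neocortex_agent(ticket):
    """The AI coding agent, stubbed out: invoked only for the
    actual work of resolving the issue."""
    ticket["resolution"] = f"agent resolved: {ticket['summary']}"
    ticket["status"] = "done"
    return ticket

def handle(ticket):
    """Reptilian first, limbic second, neocortex last."""
    return neocortex_agent(limbic_middleware(reptilian_rules(ticket)))

result = handle({"priority": "high", "client": "client_x",
                 "summary": "login fails"})
```

By the time the ticket reaches the agent, assignment and routing are already done, so the expensive layer spends its cycles only on the work that actually needs adaptivity.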
Small software companies like ours are breaking down barriers all over town.
For a small software company like ours, using AI coding agents is a challenge and an opportunity. So many ideas, and so little time! But the advantages are so obvious that we feel we must move forward on them.
Luckily, because of our small size, we can decide and act quickly. We don’t have the hierarchy larger companies often have, where good ideas can get swept under the rug because some manager feels threatened by them.
Structure is important, but so is discovery. That’s why we look outside of the tech paradigm for inspiration. Today it’s evolution and neuroscience, tomorrow some other subject from the humanities that might offer insights in how to improve our workflow so we can help our clients even more.
Header photo: Wolfgang Hasselmann on Unsplash