With AI coding agents on the offence and on the defence, the internet has become a battlefield.

AI-powered cybercrime is threatening to overpower cybersecurity. It’s getting more costly to defend than to build.
AI coding agents have substantially increased our company’s productivity. That’s good for us, and good for our clients.
But you know who else is profiting? Nefarious parties wanting to steal your CPU cycles. Or worse, your data.
Because it’s not only us, the “good guys”, who are using AI to code our projects. Here’s an example of how bad actors use it:
AI as a Force Multiplier
Attackers use AI to automatically generate exploit code for known vulnerabilities (CVEs), especially those with high severity ratings (CVSS 7.0 and above). Tools like AI-driven fuzzers and vulnerability scanners can now identify and weaponise weaknesses in ecosystems like Django, Python, React, Next.js, Android, and iPadOS at scale.
A hypothetical: let’s say developers have identified a weakness in Django’s django.utils.http module. They fix it and publish the patch to Django’s GitHub repository.
Before AI, an attacker would need to manually analyse the patch, reverse-engineer the vulnerability, and create an exploit, taking days or weeks.
Now, with coding agents, or even vibe coding on a platform like Replit, it takes more like minutes or hours.
Once inside your servers, bad actors might steal your data or hold your systems to ransom. Bad as that is, there’s more:
Botnet Recruitment: AI identifies and exploits vulnerable devices (IoT, servers) to expand botnets, which are then used for DDoS attacks or as proxies for other malicious activities.
Cryptojacking: AI optimises scripts to stealthily hijack CPU/GPU cycles for cryptomining, often flying under the radar of monitoring tools.
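Cryptojackers rely on going unnoticed, but a crude first line of defence is surprisingly simple: compare running processes against a short list of known miner names and flag anything that is pinning the CPU. A minimal sketch in Python; the miner list, the threshold, and the `ps` output format here are illustrative assumptions, not a complete detection system:

```python
import subprocess

# A small illustrative sample of known miner binary names
# (hypothetical list for the sketch; real lists are much longer).
KNOWN_MINERS = {"xmrig", "cgminer", "minerd"}

def flag_suspicious(ps_lines, cpu_threshold=80.0):
    """Flag processes that look like cryptominers.

    A process is flagged if its name matches a known miner, or if it
    is using more than `cpu_threshold` percent CPU. Each line in
    `ps_lines` is "<command> <%cpu>", the format produced by
    `ps -eo comm,%cpu --no-headers` on Linux.
    """
    flagged = []
    for line in ps_lines:
        parts = line.split()
        if len(parts) < 2:
            continue
        name, cpu = parts[0], float(parts[-1])
        if name in KNOWN_MINERS or cpu >= cpu_threshold:
            flagged.append((name, cpu))
    return flagged

def snapshot():
    """Take a live snapshot of running processes via Linux `ps`."""
    out = subprocess.run(
        ["ps", "-eo", "comm,%cpu", "--no-headers"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()
```

In practice you would run `flag_suspicious(snapshot())` periodically from a monitoring agent and alert on any non-empty result; it is a blunt instrument, but it would have caught xmrig.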
Oh, you might retort, what’s the chance of any of this ever happening to me?
The chance of getting hit by cybercrime is actually pretty big.
We’re a small software company specialising in the Django framework. These days we use Django for backends, React and Next.js for frontends, and build apps for Android, iOS, and iPadOS.
We’d recently launched a blog application, completely separate from our main website: Next.js with a Django backend.
Turns out we were lucky we conceived it that way.
Recently the development world scrambled to mitigate a vulnerability in one of the big frontend frameworks. It carried the identifier CVE-2025-1337, but was quickly nicknamed React2Shell by software teams.
React2Shell is a critical remote code execution (RCE) vulnerability in React’s server-side rendering (SSR) engine: it allows attackers to inject malicious payloads via specially crafted props or state objects. Exploiting the flaw enables arbitrary code execution on the server, potentially leading to full system compromise. The vulnerability stems from improper sanitisation of user-controlled input during SSR, and it affects all React versions from 18.0.0 to 18.2.0.
It’s a back door, opened at a crack, but just enough to allow malicious actors in.
Crucially, Next.js is React at its core, so it is equally exposed.
Patches were rushed out, but the ease of exploitation, especially with AI-generated exploit scripts, made it a prime target for automated attacks.
Immediately after React2Shell was discovered to be actively exploited, we started fixing the vulnerability for our clients.
I repeat, for our clients.
Not for us.
AI-powered cybercrime is very fast. And relentless.
The first sign was our monitors picking up that the blog was becoming unresponsive. On investigating, we saw applications we didn’t recognise consuming all CPU cycles.
The culprit was the crypto miner xmrig.
Now, we were running the blog frontend on a separate tiny server, so instead of trying to clean it up, we decided to quickly spin up a new server and redeploy.
Without fixing the vulnerability first.
That was dumb.
Because after the redeploy, and by that I mean literally seconds after, the application was attacked again. This time a different application had inserted itself, again consuming all the little server’s resources.
So we did it again, this time fixing first, then redeploying.
Helped by AI, malicious actors have sped up finding and exploiting vulnerabilities many times over. It is thought that 5–10 times more exploits are out in the wild than before the advent of advanced AI coding agents.
Using vibe coding, even without much training, bad actors can start experimenting with known exploits against companies. Vibe coding platforms have taken measures to avoid this, yet real-world examples like the AI-generated exploits for CVE-2024-21887 (a critical Ivanti vulnerability) show that it’s still possible.
How we are fighting back using our own AI-powered systems.
At our company, we’re taking our own measures against the onslaught. We’ve built an automated, AI-driven system that scans for new threats and automatically creates tickets for our clients’ projects.
We use a number of sources, among them the US National Vulnerability Database, the MITRE CVE List, and vendor-specific advisories, with others in the pipeline.
Mistral AI helps us to reformat the data to match our system and to compile tickets.
Our developers, helped by their coding agents, are immediately tasked with resolving the issue.
Only when the fixes for all affected projects are live on the server is the threat marked as resolved in our system.
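The pipeline itself is internal, but the core filtering and ticketing logic can be sketched in a few lines of Python. Everything below is illustrative: the record shape is a simplified stand-in for our normalised advisory data, and the threshold mirrors the “severity 7+” cut-off that attackers also focus on:

```python
CVSS_THRESHOLD = 7.0  # only high/critical severity opens a ticket

def tickets_for_advisories(advisories, affected_projects):
    """Turn a batch of vulnerability advisories into per-project tickets.

    `advisories` is a list of dicts with "id", "score", and "component"
    keys (a simplified stand-in for the normalised feed data).
    `affected_projects` maps a component name to the client projects
    that depend on it.
    """
    tickets = []
    for adv in advisories:
        if adv["score"] < CVSS_THRESHOLD:
            continue  # below the severity cut-off, no ticket
        for project in affected_projects.get(adv["component"], []):
            tickets.append({
                "project": project,
                "title": f"[{adv['id']}] Patch {adv['component']}",
                "status": "open",
            })
    return tickets

def threat_resolved(tickets):
    """A threat counts as resolved only when every ticket is closed."""
    return all(t["status"] == "closed" for t in tickets)
```

The `threat_resolved` check encodes the rule above: a threat only closes once every affected project’s fix is live.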
If it’s a battle, who’s winning?
You may well ask. The number of threats and threat actors has escalated significantly in the past couple of years. With the advent of vibe coding and AI coding agents, productivity has risen dramatically on both sides. Governments appear helpless to mitigate the rising number of those exploiting vulnerabilities, and in some cases are actively encouraging it.
A 2025 report by Mandiant found that AI-driven attacks increased by 400% in 18 months, while defense costs rose by 35% due to the need for advanced tools and skilled personnel.
But while it’s undoubtedly true that the online world is becoming ever more dangerous, companies like ours are stepping up to create their own cybersecurity systems, leading to a more diverse, capable, and agile landscape of threat mitigation than ever before.
Sadly, that also goes for the other side.
Who’s winning?
Just like in the real world, every business owner knows that they can only thrive where the rule of law has the last say. Stealing resources from others is short-term gain and long-term loss; at least, that’s what history teaches us.
We must believe that reason and justice will win in the end.