Is the European Union’s AI Act nipping EU artificial intelligence in the bud?

Trump wants to rid US AI of all restrictions while China’s state-driven AI is surging ahead. Is the EU the loser? Or THE benchmark for sustainable AI?
The federal government of the USA is trying hard to reproduce the “good old days” of free-for-all capitalism, with a recent executive order seeking to overturn laws and regulations imposed on AI by individual states within the Union.
At the turn of the nineteenth to the twentieth century it became clear that the unimpeded growth of industry was becoming increasingly problematic.
Free capitalism had poisoned waterways, poisoned the air, poisoned communities and poisoned society. Individual states within the Union, and states within Europe, took action to tighten the reins on industry. From food labels on the products you buy at your local supermarket to the Fair Debt Collection Practices Act of the United States, consumer protection is a welcome and cherished good.
How governance models are shaping the discussion about AI.
In China, the central state has been a strong force in promoting the development of domestic AI. Promising companies are given access to virtually unlimited capital. AI models such as DeepSeek have shown the world that China can compete head-to-head with the strongest players.
But how is China’s AI regulated by the government? Is there also a divide between the ambitions of the central government and the worries that local governments might have about the wellbeing of their citizens?
This is among the most pressing debates in global AI policy: the balance between innovation and regulation, and how different governance models are shaping the discussion about AI.
And the approaches are indeed quite different. The USA has a laissez-faire approach, the EU is cautious, and China’s AI is state-driven. But in some ways they’re alike.
China listens to its citizens, even though we think it doesn’t.
The Chinese government has made AI a national priority, pouring resources into research, infrastructure, and talent. Companies like DeepSeek, Alibaba, and Tencent benefit from massive state backing, but this doesn’t mean there’s no regulation.
The Chinese Data Security Law (2021) and the Personal Information Protection Law (2021) impose strict controls on data handling. Yes, especially for foreign firms. But still.
And there are restrictions on deepfakes, recommendation algorithms, and generative AI content.
Local governments are responsible for enforcing these rules. There’s less public debate or pushback compared to the EU or US, but local authorities do sometimes struggle to balance growth with social harmony.
To give an example, the aggressive rollout of facial recognition in Xinjiang sparked public backlash in other regions. In 2025, authorities were forced to intervene after hotels in Shanghai mandated facial recognition scans for all guests. The public outrage prompted the national government to pass laws defining the circumstances under which facial recognition scans may be required.
California and New York can be stricter than the European Union’s GDPR and AI Act.
Laws in US states such as California and New York can more than approximate those in the EU. California, for instance, has its own version of the European Union’s GDPR that is stricter on personal privacy. Regarding AI, both states protect their citizens in ways that sometimes go even further than comparable EU law.
The CCPA (2018) and its expansion, the CPRA (2020), give Californians rights similar to GDPR, such as the right to access, delete, and opt out of the sale of personal data. The CPRA also introduced the right to correct inaccurate data and limits on the use of sensitive personal information such as biometric data, race, or religion.
But Californian law goes further in places. The CPRA applies to businesses worldwide if they serve California residents, not just those with a physical presence in the state (the GDPR takes a similarly extraterritorial view of EU residents). It also requires opt-out links for data sales on websites, a feature not explicitly mandated by GDPR.
More specifically for AI, California requires companies to conduct annual bias audits for automated decision-making systems used in areas such as hiring, lending, and advertising. This goes beyond the EU AI Act, which focuses on high-risk systems but doesn’t mandate universal bias audits.
New York City requires employers and employment agencies to audit AI hiring tools for bias before they may use them (Local Law 144). The law also mandates public disclosure of audit results, which is more transparent than the EU’s requirements for high-risk AI systems.
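In practice, these audits boil down to a simple calculation: each group’s selection rate divided by the best-performing group’s rate, the so-called impact ratio. Here is a minimal Python sketch with entirely hypothetical data; note that the 0.8 cut-off shown is the conventional “four-fifths” rule of thumb, not a pass/fail threshold the laws themselves impose (NYC’s rule mandates disclosure of the ratios, not a cutoff):

```python
def impact_ratios(outcomes):
    """Compute each group's selection rate and impact ratio.

    `outcomes` maps a group label to (selected, total) counts. The
    impact ratio divides a group's selection rate by the highest
    group's rate; values well below 1.0 flag potential adverse
    impact (0.8 is the classic "four-fifths" line).
    """
    rates = {g: sel / tot for g, (sel, tot) in outcomes.items() if tot}
    top = max(rates.values())
    return {g: (rate, rate / top) for g, rate in rates.items()}

# Hypothetical audit data: (hires, applicants) per group.
data = {"group_a": (50, 200), "group_b": (30, 180), "group_c": (10, 120)}
for group, (rate, ratio) in impact_ratios(data).items():
    flag = "  <- review" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")
```

The numbers are easy to compute; whether a given ratio is acceptable remains a legal and contextual question, which is why the laws focus on auditing and disclosure.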
New York’s Department of Financial Services banned insurance companies from using AI models that result in unfair discrimination, even if unintentional. The EU AI Act addresses bias but doesn’t explicitly ban discriminatory outcomes in insurance.
As shown above, California and New York are sometimes stricter than the EU in specific areas, especially bias audits, hiring transparency, and biometric privacy. Their laws often serve as models for other US states and even influence global tech policy, much like GDPR did.
From our vantage point in Europe, we tend to see countries like the US and China as monolithic. But local governments, such as those of US states, are closer to citizens than central government and, with the next elections in mind or otherwise, tend to the grievances of their people to a greater degree than one might think.
The worst-case scenario is when capital gets free rein.
Let’s take a look at dystopia: a world where AI is allowed to develop unfettered, where there is no law to stop it poisoning the minds of children, and where entire generations grow up under its spell.
Here are some of the most chilling warnings from philosophers, technologists, and storytellers about the dangers of unchecked AI and capitalism.
AI-driven platforms, optimised for engagement, could use developmental psychology to rewire children’s brains, prioritising addiction, consumerism, and ideological extremism over critical thinking or empathy.
Imagine social media, games, and “educational” tools designed to maximise screen time, data extraction, and loyalty to corporate or political agendas. Sound familiar?
Children would grow up unable to distinguish between real events and synthetic propaganda, eroding trust in institutions, in science, and even in their own experience.
AI could create unique, evolving realities for each child, tailored to keep them hooked. Virtual worlds, ads, and even “friends” (AI chatbots) would be designed to maximise emotional dependence on platforms, leaving them vulnerable to manipulation by the highest bidder.
Corporations and governments could track, predict, and influence children’s behaviour from birth, creating lifelong consumer or citizen profiles. The concept of a “private thought” could become obsolete. It’s the world of Tom Cruise in the movie Minority Report.
Real-world example: China’s social credit system, but automated and applied to children, with access to education, jobs, or even social circles determined by AI assessments of their “value”.
I think we all realise that this is not the world we want for our children. Trump probably doesn’t care, but we do, as parents doing what it takes to raise our families. In the EU, we have enacted laws and regulations to ensure the safety of consumers, but they are sometimes a hindrance to the development of industry.
How the European Union tries to find a balance between consumer protection and industry innovation.
The EU’s philosophy behind balancing consumer protection and industry innovation prioritises human rights, transparency, and long-term trust over short-term corporate gains.
The EU AI Act strives to navigate the balance between safety and innovation through a system of risk categories, outlined below (with a small illustrative sketch after the list).
Minimal risk applications have no rules. Most AI applications (e.g., spam filters, video games) face no additional regulation, allowing innovation to flourish.
Limited risk applications need to be transparent about their nature. Chatbots must disclose that users are talking to a machine, and deepfakes must be labelled as AI-generated, ensuring users are not deceived.
High risk means strict rules. AI used in critical areas like hiring, law enforcement, or education must meet rigorous standards for transparency, accuracy, and human oversight. Companies must conduct risk assessments and provide clear documentation.
Unacceptable risk applications are banned. AI systems that threaten fundamental rights, such as social scoring by governments or manipulative toys that exploit children’s vulnerabilities, are outright prohibited.
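To make the tiering tangible, here is a toy Python sketch of the four categories as a simple lookup. Every name in it is illustrative; under the Act, classification is a legal assessment of a system’s intended use, not a keyword match:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations: risk assessments, documentation, oversight"
    LIMITED = "transparency duties: disclose AI involvement"
    MINIMAL = "no additional obligations"

# Illustrative mapping paraphrasing the Act's categories; the real
# classification is a legal assessment, not a lookup table.
TIER_BY_USE = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "manipulative_toys": RiskTier.UNACCEPTABLE,
    "hiring": RiskTier.HIGH,
    "law_enforcement": RiskTier.HIGH,
    "education": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "deepfake": RiskTier.LIMITED,
}

def classify(use_case: str) -> RiskTier:
    # Anything not matched falls through to minimal risk, mirroring
    # the Act's default posture for most everyday applications.
    return TIER_BY_USE.get(use_case, RiskTier.MINIMAL)

for use in ("hiring", "chatbot", "spam_filter", "social_scoring"):
    tier = classify(use)
    print(f"{use}: {tier.name} ({tier.value})")
```

The point of the structure is visible even in a toy: the default is freedom (minimal risk), and obligations scale up only as the stakes do.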
Who decides the risk category?
In most cases it’s self-assessment by providers; templates and standard forms make applying a quick process. For high-risk applications, however, the process is longer, up to months, while the relevant EU bodies scrutinise the product. And of course, unacceptable-risk applications are banned: bringing them to market within the EU carries severe punishment, with fines running into the millions or even billions of euros.
Sandboxes for innovation
Still, there must be some freedom for research into products or processes whose risk category is unknown, simply because there is no product yet to assess. For that, there’s the regulatory sandbox.
Companies can develop and refine high-risk AI systems in a real-world setting while working with regulators to ensure compliance. It works both ways: regulators gain inside knowledge of what is possible and of the risks attached.
For companies with an idea, innovation hubs offer startups resources, expertise, and funding to develop AI responsibly, ensuring smaller players aren’t left behind by compliance costs.
Funding and Support for Ethical AI
The EU also invests in AI research and development, with more than €100 billion earmarked for AI projects in healthcare, climate, and public-sector applications.
Initiatives like the AI, Data, and Robotics Partnership bring together industry, academia, and policymakers to co-develop standards and best practices.
An example is Adra-e, a public-private partnership that brings together industry leaders like Siemens, Philips, and SAP with universities and research centres such as Fraunhofer, Inria (the French National Institute for Research in Digital Science and Technology), and TU Munich.
The organisation works with European standards bodies to create technical standards for AI and robotics, ensuring interoperability and compliance with the EU AI Act.
But it also provides frameworks for responsible AI, such as tools to assess bias, transparency, and accountability in AI systems.
A real-world example is "AI for Health", which helps drive the adoption of AI in healthcare while ensuring patient safety, data privacy, and ethical use.
The project TEF-Health provides a sandbox environment where companies and researchers can test AI-driven medical devices, like robotic surgery assistants, under real-world conditions. The result: faster, safer deployment of AI in hospitals, with shared datasets, validation protocols, and regulatory guidance.
Projects like this try to balance the interests of industry, academia and policy makers. Industry gets a seat at the table so that they can direct their investments, academia ensures that scientific rigour and ethics are baked into AI development, and politicians gain real-world feedback to refine laws like the AI Act, making them more effective and less burdensome.
Short-Term Costs for Long-Term Gains
Yes, some companies argue that EU rules slow them down compared to the US or China. But the EU’s bet is that sustainable, ethical AI will ultimately win: by avoiding backlash, fostering public trust, and creating a stable environment for growth.
Critics say: The EU risks falling behind in the global AI race.
Supporters argue: The EU is building an AI ecosystem that lasts, avoiding the pitfalls of unchecked capitalism.
The EU’s AI Act could become a global benchmark (the “Brussels Effect”), much like GDPR is for data privacy.
Companies like Microsoft and Google have already adapted their global AI policies to comply with the EU AI Act, even before it’s fully enforced.
The EU bets that trustworthy AI will be a market differentiator.
By emphasising privacy, safety, and fairness, European companies can attract consumers and partners who value ethics over unchecked innovation.
The USA is on a crash course with reality on many fronts, and it looks like AI will be one of them if it is allowed to develop unfettered by ethics. And as China’s population grows more vocal, the national government will start to listen more, as it has already proved capable of doing.
The EU is a strong market of 450 million people and a force to be reckoned with. We believe the way forward is to set a benchmark that puts the good of all above that of a chosen few.