Navigating the EU AI Act: A Risk-Based Approach to AI Governance

The EU AI Act, which entered into force on August 1, 2024, is the first comprehensive framework for regulating artificial intelligence across the European Union, taking a human-centric approach that aims to mitigate risks while fostering technological advancement. Unlike the United States, which leans heavily on voluntary frameworks and a patchwork of state rules, the EU has opted for a single, binding law that sets out clear obligations for anyone building or deploying AI within its borders. The goal is to strike a balance between encouraging innovation and keeping people safe, protecting fundamental rights, and ensuring that AI reflects European values.

As of September 2025, the Act is starting to take fuller shape. One of the biggest updates is that rules for general-purpose AI models, such as large language models and versatile chatbots, began applying on August 2, 2025. To smooth the transition, the European Commission issued detailed guidelines in July 2025 explaining how these rules should work in practice. General-purpose models receive special attention because they can be adapted to so many uses, some of which carry systemic risks.

At the heart of the AI Act is a tiered system that classifies AI based on risk. It divides systems into four categories: unacceptable, high, limited, and minimal risk. Each category comes with different obligations or bans proportionate to the level of potential harm.
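To make the tier structure concrete, here is a minimal Python sketch of the four categories and the broad obligation each carries. The example systems and their tier assignments are illustrative paraphrases of the Act's categories, not legal determinations, and none of the names below come from the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict pre-market and oversight duties
    LIMITED = "limited"            # transparency duties
    MINIMAL = "minimal"            # no binding obligations

# Broad obligation attached to each tier, paraphrasing the Act.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited from the EU market.",
    RiskTier.HIGH: "Risk assessment, data governance, human oversight, conformity checks.",
    RiskTier.LIMITED: "Disclose that users are interacting with AI or AI-generated content.",
    RiskTier.MINIMAL: "Voluntary codes of conduct and best practices encouraged.",
}

# Illustrative examples only; real classification requires legal analysis of the Act's annexes.
EXAMPLES = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "CV-screening tool for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.value} risk -> {OBLIGATIONS[tier]}")
```

The point of the lookup is that obligations attach to the tier, not to the individual system: classify once, and the compliance burden follows mechanically.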

The highest tier is “unacceptable risk.” It applies to AI systems that conflict with core EU principles, such as tools that manipulate people’s behavior in harmful ways, exploit vulnerable groups like children, or enable social scoring that could lead to discrimination. These systems are banned outright, with only narrow exceptions, and the prohibitions have applied since February 2, 2025. The thinking here is to prevent AI from undermining civil liberties, even at the cost of certain business opportunities.

Just below that is “high risk,” which covers AI used in sensitive areas such as critical infrastructure, hiring, education, and law enforcement. Real-time facial recognition in publicly accessible spaces sits at the strictest edge of the framework: for law enforcement it is generally prohibited, permitted only in very narrow circumstances such as searching for victims or suspects of serious crimes, while other remote biometric identification systems are treated as high risk. Companies offering high-risk AI must conduct risk assessments, ensure their training data is high quality and unbiased, build human oversight into their systems, and pass conformity assessments before launch. This means more paperwork, audits, and testing, but the aim is reliability in contexts where mistakes could have major consequences.

The next tier is “limited risk.” These are AI systems that people interact with directly, like chatbots, emotion recognition tools, or generators of synthetic content such as deepfakes. Here the requirements are mostly about transparency. Users should know when they’re talking to or seeing something created by AI. In practice, that might mean a chatbot that says “I’m an AI” at the start of a conversation or metadata that labels a photo as AI-generated. The emphasis here is on honesty, so people are not misled.
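As a concrete illustration of what such transparency measures might look like in practice, here is a minimal Python sketch that prepends a disclosure to chatbot output and embeds a provenance tag in a generated image's PNG metadata using Pillow. The disclosure wording and the `ai_generated` metadata key are illustrative assumptions, not labels prescribed by the Act.

```python
from PIL import Image, PngImagePlugin

AI_DISCLOSURE = "I'm an AI assistant."  # illustrative wording, not mandated text

def disclose(reply: str) -> str:
    """Prefix a chatbot reply so users know they are talking to a machine."""
    return f"{AI_DISCLOSURE} {reply}"

def save_with_ai_label(img: Image.Image, path: str) -> None:
    """Embed an AI-provenance tag in PNG metadata; the key name is a hypothetical convention."""
    info = PngImagePlugin.PngInfo()
    info.add_text("ai_generated", "true")
    img.save(path, pnginfo=info)

if __name__ == "__main__":
    print(disclose("Here is the summary you asked for."))
    save_with_ai_label(Image.new("RGB", (64, 64), "gray"), "synthetic.png")
    # Confirm the label round-trips when the file is read back.
    print(Image.open("synthetic.png").text.get("ai_generated"))
```

In production, machine-readable marking of synthetic content is more often done with watermarking or standardized provenance metadata such as C2PA credentials; the PNG text chunk above simply shows the principle of attaching a durable label at generation time.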

Finally, there’s “minimal risk,” which covers the vast majority of AI we use day to day: spam filters in email, AI opponents in video games, navigation apps that suggest the quickest route. Because these applications pose little to no risk to people’s rights or safety, the Act imposes no binding obligations on their providers. Instead, it encourages voluntary codes of conduct and best practices, leaving flexibility for innovation while still promoting responsible development.

Taken together, the EU AI Act is the world’s first attempt to create a sweeping, legally binding framework for AI. It combines strict bans with flexible requirements, depending on the level of risk, and extends its reach to general-purpose AI models that sit at the core of today’s technological ecosystem. While this approach could slow innovation in certain sectors compared to looser U.S. rules, Europe is betting that setting high standards now will pay off later, both in terms of building trust and in shaping the global conversation around responsible AI.

EU AI Act (Regulation (EU) 2024/1689): https://eur-lex.europa.eu/eli/reg/2024/1689/oj
