Why AI Regulation Is Hard to Get Right
Artificial intelligence presents regulators with a genuinely novel challenge. Unlike previous technologies that were relatively stable before widespread deployment, AI systems are rapidly evolving, context-dependent, and capable of being applied across virtually every sector of society simultaneously. There is no single AI to regulate: there are thousands of applications, spanning risk levels and deployment contexts that existing legal frameworks were never designed to address.
Yet the pressure to regulate is real and growing. AI systems have already produced demonstrable harms — biased hiring algorithms, manipulative content recommendation engines, deepfake-enabled fraud, and privacy violations at scale. Governments around the world are responding, but they are doing so in very different ways.
The European Union: A Risk-Based Regulatory Framework
The EU's AI Act, which entered into force in 2024, represents the world's most comprehensive binding AI regulation to date. Its core innovation is a risk-based tiered approach:
- Unacceptable risk — certain AI applications are banned outright, including government social scoring and most forms of real-time remote biometric identification in publicly accessible spaces.
- High risk — AI systems used in critical infrastructure, education, employment, essential services, law enforcement, and border control must meet strict requirements for transparency, human oversight, and accuracy before deployment.
- Limited and minimal risk — lighter disclosure requirements apply to applications like chatbots, with most low-risk AI left largely unregulated.
The AI Act also imposes specific obligations on providers of general-purpose AI models above certain capability thresholds, including transparency about training data and adversarial testing for systemic risks. Critics argue the framework is complex and may disadvantage European AI developers relative to less-regulated competitors.
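To make the tiered structure concrete, here is a minimal, purely illustrative sketch in Python. The four tier names follow the Act; everything else, including the `EXAMPLE_CLASSIFICATIONS` mapping and the `tier_for` helper, is hypothetical, since the Act classifies systems through detailed legal criteria, not keyword lookup.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act (illustrative summary)."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict pre-deployment requirements"
    LIMITED = "disclosure/transparency obligations"
    MINIMAL = "largely unregulated"

# Hypothetical use-case-to-tier mapping, for illustration only.
EXAMPLE_CLASSIFICATIONS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "real-time public biometric identification": RiskTier.UNACCEPTABLE,
    "CV screening for hiring": RiskTier.HIGH,
    "exam scoring in education": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Look up the illustrative tier for a use case, defaulting to minimal risk."""
    return EXAMPLE_CLASSIFICATIONS.get(use_case, RiskTier.MINIMAL)

if __name__ == "__main__":
    for case, tier in EXAMPLE_CLASSIFICATIONS.items():
        print(f"{case!r:45} -> {tier.name}: {tier.value}")
```

The design point the sketch captures is that obligations attach to the use case, not to the underlying model: the same model powering a chatbot and a hiring tool would face two different sets of requirements.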
The United States: Sectoral and Voluntary Approaches
The US has taken a markedly different path — favoring sector-specific oversight through existing agencies (the FDA for AI in healthcare, the EEOC for employment applications, the FTC for consumer protection) combined with voluntary commitments from major AI developers. Executive orders have directed federal agencies to develop guidance and standards, but there is as yet no comprehensive federal AI law equivalent to the EU's Act.
Proponents of this approach argue that flexibility allows American companies to innovate without the compliance burden that prescriptive rules impose. Critics contend that voluntary commitments are unenforceable and that the patchwork of sectoral rules leaves significant gaps, particularly for general-purpose AI applications that span multiple domains.
China: Innovation Under State Direction
China has adopted a distinctive approach that combines targeted regulation of specific AI applications with strong state direction of AI development as a strategic national priority. Rules governing algorithmic recommendation systems (in force since 2022) and generative AI services (since 2023) focus particularly on content controls and the requirement that AI outputs align with "core socialist values." Foreign AI companies face significant barriers, and Chinese developers operate within a regulatory environment shaped as much by industrial policy as by consumer protection concerns.
Comparing the Frameworks
| Jurisdiction | Approach | Key Strength | Key Criticism |
|---|---|---|---|
| European Union | Comprehensive, risk-tiered law | Legal clarity and rights protection | Complexity; potential innovation drag |
| United States | Sectoral + voluntary | Flexibility; innovation-friendly | Enforcement gaps; no unified standard |
| China | Targeted rules + state direction | Rapid deployment of national priorities | Limited civil liberties protections |
| United Kingdom | Principles-based, pro-innovation | Adaptability; sector expertise | Risk of regulatory gaps |
What the Divergence Means
The lack of international alignment on AI regulation creates real-world consequences. Companies operating globally face a compliance patchwork. AI systems trained and deployed under different regulatory regimes may have different safety properties. And regulatory arbitrage — developing AI in low-regulation environments and deploying it in higher-regulation ones — remains a structural challenge that no single jurisdiction can solve alone.
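As a rough illustration of what the compliance patchwork means in practice, the hypothetical sketch below maps deployment jurisdictions to the kinds of obligations discussed above. The `OBLIGATIONS` table and `obligations_for` helper are invented for this example and simplify heavily; real compliance analysis requires legal review.

```python
# Hypothetical, heavily simplified mapping from jurisdiction to the kinds
# of obligations discussed in this article.
OBLIGATIONS = {
    "EU": ["risk-tier classification", "high-risk conformity assessment",
           "general-purpose model transparency"],
    "US": ["sector-specific agency rules (FDA/EEOC/FTC)",
           "voluntary developer commitments"],
    "CN": ["algorithmic recommendation filing", "generative-AI content controls"],
    "UK": ["principles-based guidance from sector regulators"],
}

def obligations_for(jurisdictions: list[str]) -> dict[str, list[str]]:
    """Collect the illustrative obligations a deployment would face per market."""
    return {j: OBLIGATIONS.get(j, ["no AI-specific framework modeled"])
            for j in jurisdictions}

# A company deploying in all four markets faces the union of all four regimes.
print(obligations_for(["EU", "US", "CN", "UK"]))
```

The structural takeaway is that a global deployment must satisfy the union of every applicable regime, which is why divergence raises costs even when no single framework is especially burdensome on its own.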
International coordination through bodies like the OECD, G7, and the UN's AI advisory body is advancing, but binding global standards remain distant. For now, the regulatory landscape will remain fragmented — and the choices governments make today will shape what AI looks like for decades to come.