The gap between AI rules and AI reality

Why the two biggest AI regulatory efforts are both struggling, and what that means for everyone else.

Artificial intelligence governance has no shortage of ambition. Treaties are being negotiated, national strategies published, and international summits, such as the AI Impact Summit held in India in early 2026, are occurring more frequently. However, two of the most important regulatory efforts underway right now, one in Europe and one in the United States, are both running into serious trouble. In each case, the challenge lies in the gap between writing rules and making them work in practice, a gap that has turned out to be wider than expected. For anyone watching these two models as potential templates for their own AI governance, that tension is becoming hard to ignore.

In an earlier piece, we looked at how the EU AI Act was structured and what it was supposed to do in practice: the risk tiers, the obligations on providers and deployers, and the enforcement chain. This piece picks up where that one left off, at the point where implementation was supposed to begin and things started to go wrong.


The EU: when the infrastructure isn’t ready

The EU AI Act entered into force in August 2024 and was widely seen as the most comprehensive attempt anywhere to regulate AI through binding law. Its ambition was real: a risk-based framework covering everything from prohibited applications to transparency requirements for general-purpose models, with concrete obligations on providers, deployers, and national authorities.

The first serious obligations for high-risk AI systems were due to apply in August 2026. They will not. In November 2025, the European Commission proposed the Digital Omnibus package, which includes a delay of up to 16 months for those obligations, pushing the deadline to December 2027 at the earliest. The Commission framed this as simplification and regulatory de-cluttering. Critics called it something closer to retreat.

The reasons behind the delay are worth naming precisely, because they point to institutional failure rather than a deliberate change of direction. CEN and CENELEC, the two European standardization bodies responsible for developing the technical standards that companies would use to demonstrate compliance with the high-risk requirements, missed their 2025 deadline. Without those standards, the August 2026 compliance date would have required companies to meet obligations with no agreed method for doing so.


At the same time, many EU member states missed their own August 2025 deadline to designate the national competent authorities responsible for enforcement. You cannot enforce rules without enforcers. And in February 2026, the Commission itself missed a deadline to publish guidance on Article 6 of the Act, the provision that determines whether an AI system counts as high-risk in the first place. That guidance was required by law, and companies had been waiting on it for months.

These were not external disruptions. They were failures by the bodies specifically designed to make the Act work. As recently as the summer of 2025, the Commission had promised to hold firm on its timelines. By autumn, it had changed course.

The delay does not mean the EU AI Act is dead. The overall course still points toward enforcement, and some obligations, including those covering general-purpose AI models, are proceeding on their original schedule. But the high-risk provisions are the core of what makes the Act consequential for most practical applications: AI used in hiring, credit decisions, education, border management, and welfare systems. A 16-month delay in that area is not a minor administrative adjustment. It is a signal that the gap between regulatory ambition and regulatory capacity is real, and that closing it will take longer and require more institutional preparation than the Act’s architects anticipated.

The US: when regulation is dismantled before it matures

The United States has no federal AI law. Congress has debated the issue for years without producing comprehensive legislation, leaving a vacuum that individual states have been filling with their own laws. By the end of 2025, at least 28 states had passed some form of AI-related legislation.

Two of those laws stand out. California’s Transparency in Frontier Artificial Intelligence Act, known as SB 53, was signed by Governor Newsom in September 2025 and took effect on January 1, 2026. It is the first enforceable US regulatory system specifically targeting the most powerful AI models, those trained above a certain computational threshold. It requires developers to publish their safety protocols, report critical incidents to state authorities, and protect employees who raise safety concerns. New York followed with its Responsible AI Safety and Education Act, the RAISE Act, signed into law in December 2025 and finalized in its amended form in March 2026, with an effective date of January 1, 2027. Together, these two laws represent the most serious attempt yet in the US to put real obligations on frontier AI developers.

ESCAA co-sponsored the bill, advocating for California’s leadership in responsible AI

They were passed and, almost in the same breath, targeted. In December 2025, President Trump signed an executive order directing the Department of Justice to establish an AI Litigation Task Force with the explicit purpose of challenging state AI laws that conflict with a federal policy of minimal regulation. The order also directed the Department of Commerce to identify state laws it considers burdensome and authorized threatening federal funding cuts for states that maintain them. New York signed its law eight days after that executive order. Whether California and New York will actually face legal challenges under it remains to be seen.

The stated justification is reasonable: a tangle of 50 different state regulatory regimes creates compliance burdens, particularly for smaller companies, and risks embedding conflicting requirements into AI development. The problem is what that argument leaves out. Replacing a regulatory patchwork with a coherent federal alternative would address the concern. What is actually happening is different: state regulation is being challenged and suppressed while no federal alternative exists or is close to existing. The result is not a cleaner regulatory environment. It is a regulatory vacuum, defended as a feature rather than acknowledged as a problem.

The same gap, two different causes

These two stories look different on the surface. One is about a carefully constructed legal framework that ran into implementation problems. The other is about emerging state-level regulation being actively dismantled by federal authority. But they share a common thread: in both cases, the distance between where AI governance is supposed to be and where it actually is has grown in recent months, and the bodies responsible for closing that gap have instead helped widen it.

In the EU, the failure is largely institutional and technical. The organizations tasked with building the compliance infrastructure (standards bodies, national authorities, the Commission itself) did not deliver on schedule. There is no obvious bad faith here, but there is a clear lesson: passing a comprehensive law is not the same as being ready to enforce it, and the preparation required is more demanding than the legislative process alone can provide.

In the US, the failure is political. The federal government has had years to build a national framework and has not done so. States that filled the gap with real legislation are now being told their efforts are unwelcome, without being offered anything in their place. The transparency and safety requirements in SB 53 and the RAISE Act are not radical. They are disclosure obligations and incident-reporting requirements of the kind found in financial services and healthcare. The argument against them is not that they are wrong in principle, but that they are inconvenient in practice for companies that prefer no obligations at all.


What this means for everyone else

For countries and institutions outside the US and EU, both of these developments matter in ways that go beyond the immediate regulatory outcomes.

The EU AI Act has functioned as a reference point for AI governance discussions in many parts of the world, much as the GDPR shaped data protection frameworks far beyond Europe’s borders. Countries drafting their own AI legislation, negotiating bilateral agreements, or setting procurement standards have looked to the Act as evidence that comprehensive, binding regulation is achievable. A visible implementation failure, even a temporary one, weakens that reference. It gives more weight to the argument that lighter-touch approaches are more realistic, and reduces the pressure on governments that prefer less demanding standards.

The US situation sends a different but related signal. If even state-level transparency requirements for the most powerful AI systems can be challenged and removed by federal pressure, the prospect of meaningful binding regulation at the national level looks remote. For countries in bilateral negotiations with US technology firms, or receiving AI systems through development assistance programs, this matters directly: the governance expectations embedded in those relationships are shaped in part by what the US itself is willing to require of its own industry.

Neither of these observations leads to an easy conclusion. The EU’s difficulties are real but likely temporary: the Act remains in place, and enforcement will come, just later than planned. The US situation is harder to read, because it depends on political choices that could shift. What is clear is that 2026 was supposed to be the year when AI governance moved from paper to practice in meaningful ways, and in both major regulatory jurisdictions that transition has been slower and more contested than anticipated.

If the gap between AI rules and AI reality continues to widen in both the EU and the US, 2026 will not mark the start of meaningful AI governance but the consolidation of a global status quo where the most powerful systems operate with only delayed, contested, or minimal oversight. For countries and institutions outside these jurisdictions, that status quo will be even harder to challenge, because the very models they looked to as proof that comprehensive regulation is possible will instead carry the message that governance is easier to announce than to implement.

In this context, the question is no longer whether binding rules are desirable; most serious actors agree they are. The real test is whether the diplomatic and governance communities will treat the current delays and reversals as excuses to retreat to lighter-touch approaches, or as concrete lessons about how to build readiness, interoperability, and enforcement before the next wave of AI-driven systems becomes even more deeply embedded in hiring, policing, education, welfare, and diplomacy itself.

If the current regulatory attempts are interpreted as evidence that binding AI regulation is unrealistic, the default position will quietly shift toward self-regulation, voluntary codes, and fragmented national compromises. That may feel more politically convenient, but it risks leaving the world’s most consequential AI systems shaped by commercial logic and aggressive lobbying, rather than by the kinds of transparent, accountable, and rights-respecting frameworks that citizens and states claim to want. How the next wave of negotiations, summits, and bilateral deals responds to that risk will, in practice, define what AI governance actually looks like when it finally ‘works’, or whether it remains stuck in the space between paper and practice.