EU Council Adopts Position on Streamlining the AI Act: Regulatory Recalibration or Strategic Retreat?
On 13 March 2026, the EU Council agreed its negotiating mandate on the Digital Omnibus on AI — a targeted amendment to the AI Act that extends high-risk compliance deadlines, introduces a new prohibition on AI-generated non-consensual sexual content, and recalibrates governance structures. This analysis examines the institutional drivers, legal mechanisms, and geopolitical implications of the EU's first major post-enactment revision of its flagship AI regulation.
Key Takeaways
1. The EU Council's Omnibus VII mandate delays high-risk AI compliance by up to 16 months: standalone systems now face a 2 December 2027 deadline and embedded systems 2 August 2028, reflecting readiness gaps in harmonised standards.
2. A new blanket prohibition on AI-generated non-consensual sexual and intimate content marks the first post-enactment expansion of the AI Act's banned practices list.
3. The reform extends SME-style regulatory relief to small mid-cap companies, defers regulatory sandbox establishment to December 2027, and reframes AI literacy from a mandatory operator obligation to a state-encouraged initiative, signaling a measurable shift toward industry competitiveness over regulatory stringency.
On 13 March 2026, the Council of the European Union adopted its negotiating mandate on the Digital Omnibus on AI — a targeted regulatory instrument amending Regulation (EU) 2024/1689, commonly known as the EU AI Act. The decision, reached under the Cypriot presidency, represents the first substantive post-enactment revision of the world's most comprehensive artificial intelligence legislation, less than two years after its entry into force on 1 August 2024. The move arrives at a critical juncture: with the AI Act's most significant obligations for high-risk systems originally set to apply from 2 August 2026, the Council has effectively recalibrated the regulatory timeline, signaling a pragmatic acknowledgment that the pace of legislative ambition has outstripped the institutional infrastructure required for implementation [1].
Institutional Origins: From the Draghi Report to Omnibus VII
The Digital Omnibus on AI did not emerge in a regulatory vacuum. Its genesis can be traced to two seminal policy documents published in 2024: Enrico Letta's report 'Much More Than a Market' on deepening the Single Market, and Mario Draghi's landmark study 'The Future of European Competitiveness,' both of which identified regulatory complexity as a structural impediment to European industrial dynamism. The Budapest Declaration of 8 November 2024 crystallized these diagnoses into a political imperative, calling for 'a simplification revolution' — a term that has since become the organizing principle behind the Commission's ten Omnibus packages.
The Commission published its seventh omnibus package on 19 November 2025, comprising two parallel proposals: one addressing the broader digital legislative framework (encompassing GDPR, the Data Act, and the Cyber Resilience Act), and one specifically targeting AI Act implementation. This bifurcated architecture reflects an important institutional reality — the AI Act, despite its singular prominence, operates within a dense web of intersecting EU digital regulations, and its simplification cannot proceed in isolation from the broader acquis [1].
Key Provisions of the Council Mandate
Revised Compliance Timelines for High-Risk Systems
Perhaps the most consequential element of the Council's position is the introduction of fixed revised deadlines for the application of high-risk AI requirements. The Commission's original proposal had tied compliance dates to the availability of harmonised European standards — a 'readiness-contingent' approach that would allow up to 16 months of additional preparation. The Council mandate, however, has replaced this conditional mechanism with hard calendar dates: 2 December 2027 for standalone high-risk AI systems (those classified under Annex III of the AI Act), and 2 August 2028 for high-risk AI systems embedded in products regulated by sectoral harmonisation legislation (Annex I), such as medical devices and industrial machinery [1].
This shift from conditional to fixed deadlines is analytically significant. While it ostensibly provides industry with greater planning certainty, it also decouples compliance obligations from the actual state of standards development, creating a scenario in which businesses will be expected to comply with requirements whose technical specifications may still be in flux. The European Standardisation Organisations (CEN, CENELEC, and ETSI) are engaged in an unprecedented effort to develop AI-specific harmonised standards, but the process has run consistently behind schedule. The Council's approach represents a calculated gamble: betting that a 16-month extension provides a sufficient buffer for standards maturation while avoiding the legal uncertainty of an open-ended postponement.
| Provision | Original deadline | Council position |
|---|---|---|
| Standalone high-risk AI (Annex III) | 2 August 2026 | 2 December 2027 |
| Embedded high-risk AI (Annex I products) | 2 August 2027 | 2 August 2028 |
| AI regulatory sandboxes (national level) | 2 August 2026 | 2 December 2027 |
| Prohibited AI practices (CSAM/deepfakes) | 2 February 2025 | Expanded scope, immediate effect |
Prohibition of AI-Generated Non-Consensual Intimate Content
The Council has introduced a new prohibited practice into Article 5 of the AI Act: a blanket ban on AI systems that generate non-consensual sexual or intimate imagery, including child sexual abuse material (CSAM). In concrete terms, this targets what is commonly known as 'deepfake pornography' — the use of generative AI to synthesize photorealistic nude or sexually explicit images and videos depicting real individuals who never consented to the creation of such material. The technology works by mapping a person's facial likeness — often scraped from ordinary social media photographs — onto fabricated intimate content, producing results that are increasingly indistinguishable from authentic imagery. What was once the domain of sophisticated visual effects studios has become accessible to anyone with a consumer-grade laptop and a free AI tool.
The scale and urgency of the problem are illustrated by a cascade of high-profile incidents. In January 2024, AI-generated sexually explicit deepfake images of Taylor Swift circulated on X (formerly Twitter), with a single post garnering over 47 million views before removal, demonstrating a viral velocity that outpaces platform moderation [4]. In July 2024, a Spanish juvenile court sentenced 15 schoolchildren from Almendralejo to a year's probation for using AI applications to generate nude images of their female classmates from social media photographs and distributing them via WhatsApp groups, a case that laid bare the technology's penetration into adolescent behavior. Then, in December 2025, the 'Grok scandal' erupted: an update to xAI's chatbot Grok, integrated into X, enabled users to edit public photographs into sexually suggestive scenarios. Research conducted during a single week found that approximately 6,700 sexualized images were being generated per hour, and roughly 2% of a 20,000-image sample (around 400 images) appeared to depict individuals under 18. The European Commission opened a formal DSA investigation into X on 26 January 2026, and xAI acknowledged 'lapses in safeguards.' It is this convergence of consumer accessibility, viral distribution, and the targeting of minors that directly catalyzed the Council's decision to elevate the prohibition to Article 5, the AI Act's most severe category of outright ban, reserved for practices deemed fundamentally incompatible with EU values regardless of disclaimers or labeling [1].
A critical question the regulation leaves deliberately open is the precise boundary of 'intimate content.' The Grok investigation is instructive: many of the flagged images depicted individuals not in full nudity but in swimwear, lingerie, or transparent clothing — scenarios that could plausibly occur in everyday life. If a person is photographed on a public beach in a bikini by their own volition, is a fabricated AI image of the same person in a different bikini inherently 'intimate'? The emerging legal consensus suggests that the operative criterion is not the degree of undress per se, but the combination of non-consensuality and sexualizing intent. A synthetically generated image designed to place a real person in a sexual or degrading context — regardless of whether it depicts full nudity — likely falls within Article 5's prohibition. Conversely, non-intimate deepfakes (such as fabricated video of a politician delivering a speech they never gave) remain outside the ban's scope, subject instead to the AI Act's Article 50 transparency obligations, which mandate clear disclosure of AI-generated content. This deliberate asymmetry — absolute prohibition for sexualized fabrication, labeling requirements for everything else — will inevitably be tested in national courts, and the resulting jurisprudence will shape the regulation's practical reach for years to come.
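For illustration of the disclosure side of this asymmetry, the sketch below shows one way a provider might attach a machine-readable "AI-generated" disclosure to an output image at generation time. This is a minimal sketch, not a prescribed compliance mechanism: Article 50 does not mandate a specific format, the metadata keys are hypothetical, and real deployments typically rely on dedicated provenance standards rather than bare PNG text chunks.

```python
# Illustrative sketch only: embedding an AI-generation disclosure as PNG metadata.
# The key names ("ai_generated", "generator", "disclosure") are assumptions made
# for this example, not fields defined by the AI Act.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def save_with_disclosure(image: Image.Image, path: str, model_name: str) -> None:
    """Save a generated image together with an explicit AI-generation disclosure."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", model_name)
    meta.add_text("disclosure", "This image was generated by an AI system.")
    image.save(path, pnginfo=meta)


# Usage (assumes `generated_image` is a PIL image produced by a generative model):
# save_with_disclosure(generated_image, "output.png", model_name="example-model-v1")
```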
Governance Architecture: AI Office Competences and Registration
The Council mandate introduces several targeted adjustments to the AI Act's governance architecture. The competences of the EU AI Office — established to supervise general-purpose AI models — have been clarified with respect to vertically integrated providers (companies that develop both the foundational model and the downstream application). The Council position maintains AI Office oversight as the default for such integrated systems but carves out explicit exceptions for domains where national regulatory authority remains primary: law enforcement, border management, judicial institutions, and financial regulation [1].
Additionally, the Council has reinstated mandatory registration requirements for AI systems in the EU database, even where providers self-assess their systems as falling below the high-risk threshold. This addresses a significant compliance gap identified during early implementation — without a mandatory registry, there was no systematic mechanism for regulators to verify whether self-exemption claims were legitimate or whether providers were strategically misclassifying their systems to avoid scrutiny.
Implications for Industry and the Competitive Landscape
The Council's position reflects a measurable shift in the balance between regulatory ambition and industrial reality. Three specific provisions illustrate this recalibration. First, the extension of SME-specific regulatory exemptions to small mid-cap companies (SMCs) — enterprises with up to 500 employees — broadens the pool of firms eligible for lighter-touch compliance pathways. Second, the reframing of AI literacy obligations from a mandatory requirement for operators to a state-level encouragement and support mechanism effectively removes a compliance burden that had drawn significant industry pushback. Third, the deferral of national AI regulatory sandbox requirements to December 2027 acknowledges that most member states lack the institutional capacity to operationalize these innovation-testing environments within the original timeframe.
These provisions should be read against the backdrop of an intensifying global AI governance race. While the EU was the first jurisdiction to enact comprehensive horizontal AI legislation, the regulatory lead time required for full implementation has created a competitive asymmetry. US-based AI companies operate under a largely voluntary compliance framework, while Chinese firms benefit from a state-directed approach that can rapidly adjust regulatory parameters. The EU's Omnibus revision can thus be interpreted not merely as a technical simplification exercise, but as a strategic recalibration to prevent the AI Act from becoming a competitive liability before it delivers its intended safety benefits.
Bias Detection and Data Processing
A technically significant provision concerns the processing of special categories of personal data (as defined under GDPR Article 9) for the purpose of bias detection and correction in AI systems. The Council mandate permits such processing but reinstates a 'strict necessity' standard — a higher threshold than the Commission had proposed. This reflects the tension between two legitimate objectives: enabling providers to build fairer AI systems by analyzing sensitive demographic data, and protecting individuals from disproportionate processing of their most intimate information. The practical implications are considerable for any AI system operating in domains with historically documented bias patterns — hiring algorithms, credit scoring, facial recognition, and predictive policing, among others [1].
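To make the 'strict necessity' tension concrete, the following sketch shows the kind of computation a provider might run when it does process a sensitive attribute for bias detection: a simple comparison of selection rates across groups. The column names, the synthetic data, and the use of a selection-rate ratio are assumptions made for illustration; nothing here reflects a methodology mandated by the AI Act or the GDPR.

```python
# Illustrative bias-detection sketch using a special-category attribute.
# Column names ("ethnicity", "hired") and the synthetic records are placeholders
# standing in for sensitive data that, under the Council position, could only be
# processed where strictly necessary.
import pandas as pd


def selection_rate_report(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Selection rate (share of positive outcomes) per protected group."""
    return df.groupby(group_col)[outcome_col].mean()


def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest group selection rate."""
    return rates.min() / rates.max()


df = pd.DataFrame({
    "ethnicity": ["A", "A", "B", "B", "B", "A", "B", "A"],
    "hired":     [1,   1,   0,   1,   0,   1,   0,   0],
})
rates = selection_rate_report(df, "ethnicity", "hired")
print(rates)
print(f"Disparate impact ratio: {disparate_impact_ratio(rates):.2f}")
```

A low ratio in such a report is exactly the kind of signal bias-correction work depends on, which is why providers argue for access to the sensitive attribute; the Council's 'strict necessity' standard is meant to confine that access to what the analysis genuinely requires.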
Outlook: Trilogue Dynamics and Implementation Uncertainty
The Council's mandate now proceeds to trilogue negotiations with the European Parliament, which is expected to hold a committee vote on its own position on 18 March 2026. Several areas of potential friction are already identifiable. The Parliament has historically been the more rights-protective co-legislator, and may resist the deadline extensions if they are perceived as weakening the AI Act's enforcement timeline. The CSAM prohibition provision, conversely, is likely to attract broad cross-institutional support given the European Parliament's prior work on the regulation of online harms.
A critical implementation risk remains: if trilogue negotiations extend beyond August 2026 — the original application date for high-risk obligations — a period of legal ambiguity could emerge. Providers would face the question of whether to comply with the existing, formally applicable requirements or anticipate the amended deadlines. This regulatory limbo, while technically manageable through Commission guidance, underscores the broader challenge of amending major regulatory frameworks while their provisions are actively entering into force.
> "Streamlining the AI rules is essential for ensuring the EU's digital sovereignty. As presidency, we worked on this proposal with urgency, reaching a swift agreement to facilitate the timely application of the AI act. The proposal will bring greater legal certainty, make the rules more proportionate and ensure more harmonised implementation across member states."

(Statement from the Cypriot Council presidency accompanying the adoption of the mandate [1])
The Digital Omnibus on AI represents a maturation of the EU's regulatory approach to artificial intelligence — one that acknowledges the distance between legislative ambition and administrative implementation. Whether this recalibration will strengthen or dilute the AI Act's global regulatory influence remains an open question, contingent not only on the outcome of trilogue negotiations but on the broader trajectory of international AI governance coordination in the years ahead.
📚 Sources & References
| # | Source |
|---|---|
| [1] | Council agrees position to streamline rules on Artificial Intelligence |
| [2] | Regulation (EU) 2024/1689 (Artificial Intelligence Act) |
| [3] | EU AI Act: first regulation on artificial intelligence |
| [4] | Taylor Swift deepfakes spark calls in Congress for new legislation |