Summary
On 19 November 2025, the European Commission (the “Commission”) published its Digital Omnibus on AI, a legislative proposal to streamline implementation of the EU Artificial Intelligence Act (the “Act”). The proposal addresses delays in harmonized standards and guidance while adjusting compliance timelines and reducing administrative burdens.
Key points:
- High-risk AI system compliance deadlines extended, conditional on availability of standards and guidance, with backstop dates of 2 December 2027 for standalone high-risk use cases and 2 August 2028 for AI embedded in regulated products.
- Registration requirements eliminated for AI systems deemed not high-risk under Article 6(3).
- Small and medium-sized enterprise (SME) benefits extended to small mid-cap companies, with simplified quality management system requirements.
- AI Office gains exclusive enforcement powers over general-purpose AI models and systems integrated into very large online platforms.
- AI literacy obligations shifted from providers to member states and the Commission.
What Is the Digital Omnibus on AI?
The Digital Omnibus on AI is a legislative proposal that amends specific provisions of the EU Artificial Intelligence Act. The term “omnibus” refers to a legislative tool used to amend multiple provisions simultaneously.
The AI Act, adopted as Regulation (EU) 2024/1689, established a comprehensive regulatory framework for artificial intelligence. Industry feedback revealed implementation challenges: businesses faced uncertainty about risk classifications, and essential support infrastructure, including harmonized standards and official guidance, remained incomplete.
In several member states, competent authorities responsible for enforcement had yet to be designated.
The Digital Omnibus addresses these issues through targeted amendments that adjust timelines, clarify provisions, and reduce administrative burdens, developments that any artificial intelligence law firm advising on EU regulatory compliance will be monitoring closely.
How Do the Compliance Timelines Change for High-Risk AI Systems?
Under the original Act, rules for high-risk systems were scheduled to apply from 2 August 2026 for standalone high-risk use cases such as biometric identification and AI used in employment, and from 2 August 2027 for AI embedded in products subject to existing EU safety legislation such as medical devices.
The Digital Omnibus links application of these requirements to the availability of supporting compliance infrastructure.
The rules take effect once the Commission confirms completion of relevant harmonized standards, common specifications, and guidance. Following such confirmation, providers would have a six-month transition period for standalone high-risk use cases and twelve months for AI embedded in regulated products.
If standards development faces delays, backstop dates apply: standalone high-risk use cases from 2 December 2027 and AI embedded in regulated products from 2 August 2028. This provides up to 16 months of additional preparation time.
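For readers who want to verify the figures, the short sketch below (illustrative only, not part of the proposal) reproduces the date arithmetic using the deadlines quoted above; the helper function is our own.

```python
from datetime import date

def months_between(start: date, end: date) -> int:
    """Whole months between two dates falling on the same day of the month."""
    return (end.year - start.year) * 12 + (end.month - start.month)

# Original application dates under the Act vs. the Digital Omnibus backstop dates
standalone_extension = months_between(date(2026, 8, 2), date(2027, 12, 2))
embedded_extension = months_between(date(2027, 8, 2), date(2028, 8, 2))

print(standalone_extension)  # 16 months - standalone high-risk use cases
print(embedded_extension)    # 12 months - AI embedded in regulated products
```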
What Happens to AI Systems Already on the Market?
Under amended Article 111, providers of high-risk AI systems lawfully placed on the EU market before the applicable compliance date may continue to place identical units on the market without retrofitting, provided the design remains unchanged.
Exception: Where legacy high-risk AI systems are intended for use by public authorities, providers must achieve full compliance by 2 August 2030.
The proposal also extends the grace period for transparency obligations under Article 50(2). For systems generating synthetic content placed on the market before 2 August 2026, marking obligations would apply from 2 February 2027.
Are Registration Requirements Changing?
Yes. The Digital Omnibus eliminates registration obligations for certain AI systems. Under the current Act, providers who determine their AI system falls within a standalone high-risk use case but does not pose significant risk under Article 6(3) must still register in the EU database for high-risk AI systems.
The proposal removes this registration requirement. However, providers must document their risk assessments and make them available to market surveillance authorities upon request, an obligation that will require close coordination between product teams and any technology law firm advising them.
What Relief Is Available for Smaller Companies?
The Digital Omnibus extends SME benefits to small mid-cap companies (SMCs), defined as entities that meet two of three thresholds: fewer than 750 employees, net turnover of €150 million or less, or balance sheet total of €129 million or less.
The proposal expands simplified quality management system requirements to all SMEs, not only microenterprises. Both SMEs and SMCs may implement quality management systems proportionate to the size of their organization, as illustrated in the sketch below.
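As an illustrative sketch only, the threshold test described above can be expressed in a few lines of Python; the function name and this simplified encoding are ours, and the legal definition in the proposal (including any rules on linked enterprises or group structures) governs in practice.

```python
def is_small_mid_cap(employees: int, turnover_eur_m: float, balance_sheet_eur_m: float) -> bool:
    """Two-of-three threshold test for small mid-cap status, as summarised above (illustrative only)."""
    criteria_met = [
        employees < 750,             # fewer than 750 employees
        turnover_eur_m <= 150,       # net turnover of EUR 150 million or less
        balance_sheet_eur_m <= 129,  # balance sheet total of EUR 129 million or less
    ]
    return sum(criteria_met) >= 2

# Example: 600 employees, EUR 200m turnover, EUR 120m balance sheet -> meets 2 of 3 criteria
print(is_small_mid_cap(600, 200, 120))  # True
```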
How Are Conformity Assessments Clarified?
For AI systems that qualify both as AI embedded in products subject to existing EU safety legislation and as standalone high-risk use cases, the proposal clarifies that providers must follow the conformity assessment procedures applicable to AI embedded in regulated products.
What Opportunities Exist for Testing AI Systems?
The proposal extends Article 60 testing opportunities to high-risk AI systems embedded in regulated products, allowing their providers to conduct controlled trials before full certification.
The Commission’s AI Office would gain authority to establish an EU-level regulatory sandbox for general-purpose AI models, complementing existing national sandboxes. Member states must strengthen cross-border cooperation on sandbox initiatives.
How Does Enforcement Change Under the Proposal?
The AI Office would have exclusive competence for supervision and enforcement of:
- Annex III AI systems based on general-purpose AI models where the model and system are developed by the same provider
- AI systems integrated into very large online platforms or very large online search engines under the Digital Services Act
For such systems classified as high-risk and subject to third-party conformity assessment under Article 43, the Commission would conduct pre-marketing conformity assessments. This shifts enforcement from member state authorities to centralized oversight.
What Changes Apply to AI Literacy and Bias Mitigation?
The current Act requires all providers and deployers to ensure sufficient AI literacy among their staff. The Digital Omnibus removes this direct obligation, instead requiring the Commission and member states to encourage such measures.
The proposal broadens the existing permission to process special category data for bias detection and correction.
While the current Act limits this to high-risk AI systems, the Digital Omnibus extends it to all AI providers, deployers, systems and models, regardless of risk classification. For high-risk systems without model training, the derogation is limited to dataset testing.
What Does the Digital Omnibus Fail to Address?
Despite offering some procedural relief, the proposal leaves several critical industry concerns unresolved:
- Lack of definitional clarity – Essential terms like “provider” remain ambiguous in the Act itself, with clarification relegated to non-binding guidelines that cannot provide legal certainty. This includes uncertainty around downstream modifications of general-purpose AI models.
- Redundant fundamental rights assessments – The proposal retains Article 27’s fundamental rights impact assessment requirement despite significant overlap with existing GDPR data protection impact assessments. An enhanced single assessment would eliminate unnecessary duplication.
- Limited research exemptions – The research exemption excludes real-world testing and applies only to systems developed for the “sole purpose” of scientific research, potentially excluding systems whose research outputs feed into commercial development, as in pharmaceuticals or medical devices.
- Insufficient sandbox benefits – While the proposal expands regulatory sandboxes, it fails to empower competent authorities to certify tested systems as compliant, which would create a meaningful presumption of conformity and genuine regulatory value.
- Risk of fragmentation – Article 82 still permits national authorities to impose additional measures beyond the Act’s requirements when compliant systems are deemed risky, threatening regulatory consistency across member states.
What Is the Legislative Timeline?
The Digital Omnibus on AI must proceed through the EU’s ordinary legislative procedure, with consideration by the European Parliament and the Council of the European Union. The legislative process is expected to take several months.
The Commission submitted the AI-specific amendments as a standalone package separate from the broader Digital Omnibus regulation. This separation is intended to accelerate adoption.
Adoption before August 2026 is necessary to avoid a scenario where the original high-risk AI requirements take effect before supporting standards and guidance are available. Delays could result in compliance obligations without access to necessary technical specifications and guidance.
Key Takeaways
The Digital Omnibus on AI recalibrates the EU’s approach to artificial intelligence regulation by addressing practical implementation challenges.
The proposal extends compliance timelines conditional on standards availability, simplifies registration and documentation requirements, and centralizes enforcement for significant AI systems.
Businesses developing or deploying high-risk AI systems gain additional preparation time, but must remain ready for earlier application if the Commission determines that adequate compliance support exists, particularly where AI governance overlaps with obligations typically handled by a data protection law firm or other lawyers advising on EU regulatory risk.
Authors: Shantanu Mukherjee, Alan Baiju