Luxembourg’s financial and technology sectors are entering a new phase of Artificial Intelligence (AI) governance. The European Union’s Artificial Intelligence Act (AI Act), formally adopted in 2024, establishes the first comprehensive legal framework for AI worldwide. At the same time, the European Commission has introduced the Digital Omnibus proposal, designed to simplify and align digital regulations, including the AI Act, data protection rules, and cybersecurity legislation.
For Luxembourg, where AI adoption is accelerating in financial services, fintech, and digital innovation, these developments bring both new compliance obligations and strategic opportunities. Organisations must now build structured AI governance frameworks to manage regulatory risk while ensuring innovation remains competitive within the EU’s evolving digital ecosystem.
The AI Act (Regulation (EU) 2024/1689)[1] introduces a risk-based regulatory approach, where obligations depend on the potential impact of an AI system on safety and fundamental rights. Its objective is to ensure trustworthy AI while supporting technological innovation across the European single market.
AI risk categories
The AI Act classifies AI systems into four risk levels, with obligations increasing according to the potential impact on safety and fundamental rights.
The figure below illustrates the four AI risk categories and the key obligations associated with each.
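To make the tiered logic concrete, the four categories can be sketched as a simple lookup. The example systems and their classifications below are illustrative assumptions only; classifying a real system depends on Annex III of the AI Act and on legal analysis.

```python
# Hedged sketch: the AI Act's four risk tiers as an ordered enumeration.
# Example classifications are illustrative assumptions, not legal advice.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (e.g. social scoring)"
    HIGH = "strict obligations: risk management, documentation, human oversight"
    LIMITED = "transparency obligations (e.g. disclosing that a chatbot is AI)"
    MINIMAL = "no specific obligations (e.g. spam filters)"


# Hypothetical inventory entries mapped to tiers, for illustration only.
EXAMPLE_SYSTEMS = {
    "social_scoring_tool": RiskTier.UNACCEPTABLE,
    "credit_scoring_model": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}


def obligations_for(system_name: str) -> str:
    """Return a one-line summary of the tier and obligations for a system."""
    tier = EXAMPLE_SYSTEMS[system_name]
    return f"{system_name}: {tier.name} -> {tier.value}"
```

In a real governance framework the mapping would of course be the output of a documented assessment process rather than a hard-coded dictionary.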
The AI Act entered into force in August 2024, with obligations phased in gradually.
Key milestones include:
- 2 February 2025: prohibitions on unacceptable-risk AI practices and the AI literacy provisions apply;
- 2 August 2025: obligations for general-purpose AI models and the governance framework take effect;
- 2 August 2026: most remaining provisions, including those for high-risk AI systems listed in Annex III, become applicable;
- 2 August 2027: requirements apply to high-risk AI systems embedded in products covered by Annex I sectoral legislation.
Non-compliance may result in significant penalties: for the most serious violations, fines can reach up to €35 million or 7% of global annual turnover, whichever is higher.
These enforcement levels elevate AI governance from a technical matter to a board-level compliance priority.
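The "whichever is higher" structure of the maximum penalty can be expressed as a one-line formula. The turnover figures below are purely illustrative.

```python
# Sketch of the AI Act's maximum-penalty formula for the most serious
# violations: the higher of EUR 35 million or 7% of worldwide annual
# turnover. Inputs are hypothetical examples.
def max_penalty_eur(annual_turnover_eur: float) -> float:
    """Upper bound of the fine for the gravest infringements."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)


# A firm with EUR 1 billion turnover: 7% = EUR 70 million > EUR 35 million,
# so the turnover-based ceiling applies.
print(max_penalty_eur(1_000_000_000))  # 70000000.0
```

For smaller firms the fixed €35 million ceiling dominates, which is why the exposure is material regardless of company size.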
Shortly after the adoption of the AI Act, the European Commission recognised that the growing body of EU digital legislation could create operational complexity for organisations navigating overlapping compliance obligations.
In response, the Commission introduced the Digital Omnibus Regulation Proposal[2] in 2025, aimed at streamlining the EU digital regulatory framework. The initiative focuses primarily on areas such as data protection, data governance, and cybersecurity, including legislation such as the GDPR, the Data Act, and NIS2, with the goal of improving consistency and reducing duplication across digital regulations.
As part of this package, the Commission also proposed targeted amendments to the AI Act, referred to as the Digital Omnibus on AI Regulation[3]. The proposal aims to ease implementation challenges while preserving the core safeguards of the Act.
Importantly, the proposal does not alter the AI Act’s risk-based framework. Instead, it focuses on improving implementation by:
- adjusting the timeline for high-risk obligations;
- easing horizontal duties such as AI literacy;
- clarifying legal bases and transitional rules; and
- extending support measures for smaller companies and regulatory sandboxes.
The following sections outline some of the most significant amendments proposed to support a more practical implementation of the AI Act.
1. Conditional and delayed application of High-Risk AI requirements
The Omnibus proposal introduces greater flexibility in the timeline for applying the AI Act’s requirements for high-risk AI systems. Instead of fixed dates, the application of the main obligations in Chapter III would be linked to the availability of key implementation tools, such as harmonised standards, common specifications, and regulatory guidance.
Once the European Commission confirms that these compliance tools are in place, organisations would be given a transition period before the requirements become applicable. In practice, this mechanism could shift the effective implementation of high-risk obligations by approximately one to two years compared with the original timeline, depending on when the necessary standards and guidance are finalised.
The aim of this amendment is to ensure that organisations are not required to comply with complex technical obligations before the supporting regulatory framework is sufficiently developed.
2. AI literacy obligation becomes a policy encouragement rather than a direct duty
The proposal also revises the AI Act’s provisions on AI literacy. The current Article 4 requires providers and deployers of AI systems to ensure that their staff have a sufficient level of AI literacy.
Under the Omnibus proposal, this obligation would be softened. Instead of imposing a direct compliance duty on organisations, the provision would place responsibility on the European Commission and Member States to encourage AI literacy initiatives among organisations that develop or use AI systems.
The intention behind this change is to reduce horizontal compliance obligations that may be difficult to operationalise across diverse sectors while still promoting awareness and competence in the use of AI.
3. New legal basis for processing sensitive data to detect bias
Another significant amendment introduces a new legal framework for the processing of sensitive personal data in the context of bias detection and mitigation.
The proposal inserts a new Article 4a, which allows providers and deployers of AI systems to process special categories of personal data when this is necessary to detect and correct bias in AI systems. This change responds to concerns that organisations may currently lack a clear legal basis to analyse demographic data needed to assess algorithmic fairness.
At the same time, the proposal includes safeguards to ensure that such processing remains proportionate and subject to appropriate protections.
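One concrete example of the kind of analysis the new legal basis would enable is a demographic parity check, which compares positive-outcome rates across demographic groups and therefore requires access to a protected attribute. The sketch below uses entirely hypothetical data and a deliberately simple metric; real fairness assessments would use established tooling and multiple metrics.

```python
# Minimal sketch of one bias check that relies on sensitive data:
# demographic parity gap, i.e. the spread in positive-outcome rates
# between demographic groups. All data below is hypothetical.
from collections import defaultdict


def demographic_parity_gap(records):
    """records: iterable of (group_label, approved: bool) pairs.

    Returns the difference between the highest and lowest
    approval rates across groups (0.0 means perfect parity).
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        positives[group] += int(approved)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())


# Hypothetical loan decisions tagged with a protected attribute.
sample = [
    ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False),
]
print(round(demographic_parity_gap(sample), 3))  # 0.333
```

The point of the proposed Article 4a is precisely that computing `rates` per group requires processing the group labels themselves, which are special categories of personal data when they encode attributes such as ethnicity or health status.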
4. Extension of SME support measures to small mid-cap enterprises
The Omnibus proposal expands several proportionality measures currently available to SMEs so that they also apply to small mid-cap enterprises (SMCs). This new category reflects the reality that many AI developers and technology companies fall between traditional SME thresholds and large corporations.
Under the proposal, SMCs would benefit from several simplification mechanisms already provided to SMEs, including simplified technical documentation templates, priority access to AI regulatory sandboxes, and conformity assessment fees reduced in proportion to company size.
By extending these provisions, the proposal seeks to reduce compliance barriers for innovative technology companies while maintaining the overall regulatory safeguards of the AI Act.
5. Expansion of AI regulatory sandboxes and creation of an EU-level sandbox
The Omnibus proposal strengthens the AI Act’s innovation framework by expanding the use of AI regulatory sandboxes. In addition to national sandboxes run by Member States, the proposal enables the European AI Office to establish an EU-level regulatory sandbox, allowing certain AI systems to be tested under coordinated European supervision.
The amendment also encourages cross-border cooperation between national authorities and simplifies administrative processes by allowing the sandbox plan and real-world testing plan to be combined where appropriate. The objective is to facilitate experimentation and support the safe development of innovative AI systems within a supervised regulatory environment.
6. Clarification of the grace period for existing High-Risk AI systems
The Omnibus proposal clarifies how the AI Act’s transitional provisions apply to high-risk AI systems already placed on the market before the relevant obligations take effect. The grace period would apply to the specific type or model of an AI system, meaning that additional units of the same system may continue to be deployed without triggering new compliance obligations.
However, if the system undergoes a significant modification, the transitional protection would no longer apply and the updated system would need to comply with the AI Act requirements.
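The clarified rule reduces to a simple two-condition test. The helper below is a hypothetical illustration of that logic; whether a change counts as a "significant modification" is itself a legal assessment.

```python
# Sketch of the clarified grace-period logic for pre-existing high-risk
# AI systems: additional units of an unchanged type or model remain
# covered, but a significant modification ends the transitional
# protection. Hypothetical helper, not a legal determination.
def grace_period_applies(placed_before_deadline: bool,
                         significantly_modified: bool) -> bool:
    """True if the transitional provisions still shield the system."""
    return placed_before_deadline and not significantly_modified
```

In practice this means change management processes need to flag modifications that could requalify a deployed system as a new placement on the market.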
While the Omnibus proposal may adjust timelines and clarify certain implementation aspects of the AI Act, its adoption and final scope remain subject to the EU legislative process. What is already clear, however, is that the core direction of the AI Act will remain unchanged.
For organisations, this means that waiting for the final outcome of the Omnibus discussions should not delay preparation. The EU is firmly moving towards a risk-based governance model for AI, requiring organisations to understand their AI use cases, classify risks, and establish appropriate governance, oversight, and documentation practices.
Companies that wish to adopt AI confidently and at scale should therefore already begin building fit-for-purpose AI governance frameworks. Establishing elements such as AI inventories, risk assessment processes, and clear internal responsibilities will not only support future regulatory compliance but also enable organisations to deploy AI systems more responsibly and strategically.
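An AI inventory of the kind described above can start as a simple structured register. The record fields below are illustrative assumptions, not a prescribed regulatory schema; real frameworks would align fields with the AI Act's documentation requirements.

```python
# Hedged sketch of a minimal AI system inventory record. Field names
# and the sample entry are illustrative assumptions only.
from dataclasses import dataclass, field


@dataclass
class AISystemRecord:
    name: str
    business_owner: str
    purpose: str
    risk_tier: str                 # e.g. "high", "limited", "minimal"
    uses_sensitive_data: bool
    last_assessed: str             # ISO date of the last risk assessment
    mitigations: list = field(default_factory=list)


# A single hypothetical entry in the register.
inventory = [
    AISystemRecord(
        name="credit_scoring_model",
        business_owner="Retail Lending",
        purpose="creditworthiness assessment",
        risk_tier="high",
        uses_sensitive_data=True,
        last_assessed="2025-06-01",
        mitigations=["human review of declined applications"],
    ),
]

# Surface the systems that need the fullest governance attention.
high_risk = [r.name for r in inventory if r.risk_tier == "high"]
```

Even a lightweight register like this gives a board-level view of where AI is used, who owns it, and which systems will fall under the heaviest obligations once the high-risk requirements apply.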
Notes:
[1] Regulation (EU) 2024/1689, Artificial Intelligence Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689
[2] Digital Omnibus Regulation Proposal, 19 November 2025, https://digital-strategy.ec.europa.eu/en/library/digital-omnibus-regulation-proposal
[3] Digital Omnibus on AI Regulation Proposal, 19 November 2025, https://digital-strategy.ec.europa.eu/en/library/digital-omnibus-ai-regulation-proposal