How to Prepare for the EU AI Act Deadlines and Requirements
The EU AI Act marks one of the most significant regulatory developments in artificial intelligence globally. It introduces a harmonised framework across all EU Member States, applying not only to organisations established in the EU, but also to those outside the EU that place AI systems on the EU market or use them within the EU.
For many businesses, the challenge is not simply understanding the law, but knowing how to prepare in a way that is practical, proportionate, and aligned with existing operations. While the majority of obligations will apply from August 2026, key provisions are already in force, and regulators expect organisations to begin preparing now. It is also important to note that the EU AI Act is currently subject to a proposed simplification package — the Digital Omnibus — which, if finalised, would alter certain timelines and obligations. Trilogue negotiations between the European Parliament, the Council, and the Commission are ongoing, and businesses should monitor developments closely.
The Digital Omnibus: What It Is and Where It Stands
On 19 November 2025, the European Commission published its Digital Omnibus — a package of proposed simplification measures forming part of the EU's broader effort to reduce administrative burden and foster competitiveness. The Digital Omnibus includes targeted amendments to the AI Act specifically designed to address practical implementation challenges identified through stakeholder consultation, such as delays in establishing standards and in designating national competent authorities.
Since then, the legislative process has moved swiftly. On 13 March 2026, the Council of the EU agreed its general approach to streamline AI rules. On 27 March 2026, the European Parliament adopted its own position with 569 votes in favour, paving the way for trilogue negotiations to begin. The Cypriot Presidency has expressed a clear ambition to reach political agreement by April or May 2026, given the pressure to finalise amendments before the AI Act's general application date of 2 August 2026.
However, businesses should not treat the Omnibus as settled law. The final text is still under negotiation, and if no agreement is reached in time, the current deadlines under the original AI Act will continue to apply. Organisations should therefore plan compliance efforts against the existing statutory framework while tracking Omnibus developments as they unfold.
This guide sets out the key steps businesses should take to get ready for August 2026.
Who Is Affected By The EU AI Act?
The EU AI Act applies broadly across the entire AI value chain and is designed to capture a wide range of actors involved in the development and use of artificial intelligence. This includes organisations that develop AI systems, those that deploy them within their operations, and providers of general-purpose AI models. In practice, many businesses will find that they fall into more than one category depending on how they interact with AI, whether through creating their own tools, integrating third-party solutions, or embedding AI into their products and services.
Article 2 of the Act defines exactly who the Act applies to. For clarity, the relevant provision is reproduced below.
"(a) providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country;
(b) deployers of AI systems that have their place of establishment or are located within the Union;
(c) providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the Union;
(d) importers and distributors of AI systems;
(e) product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;
(f) authorised representatives of providers, which are not established in the Union;
(g) affected persons that are located in the Union."
Identifying where your business sits within this framework is important because the obligations imposed by the Act vary depending on the role an organisation plays. For example, a company developing an AI tool for commercial distribution will face different requirements from a business that simply uses that same tool internally. Both, however, will still have compliance responsibilities, and understanding where those responsibilities sit is essential to managing risk effectively.
It is also important to recognise that the AI Act is built on a risk-based framework. Not all AI systems are treated equally under the legislation. Instead, the level of regulatory scrutiny depends on the potential impact of a system on individuals, fundamental rights, and society more broadly. This approach means that businesses must move beyond simply identifying their AI use and begin assessing the level of risk associated with each system.
Step 1: Identify and Map Your AI Use
Because obligations differ according to an organisation's role, the starting point for any organisation is to gain a clear and comprehensive understanding of how AI is being used across the business.
In many cases, AI adoption has developed organically, with different teams implementing tools independently or relying on AI-enabled features embedded within existing software. As a result, organisations may not have a complete or centralised view of their AI landscape.
To address this, businesses should undertake a structured mapping exercise to identify all AI systems in use or under development. This should include internally developed tools, third-party solutions, and any AI functionality integrated into products or services offered to customers. Without this level of visibility, it is difficult to assess compliance obligations or identify areas of potential risk.
At the same time, organisations should determine their role in relation to each AI system. Whether acting as a provider, deployer, or both will directly influence the obligations that apply under the Act. Establishing this clarity at an early stage is critical, as it forms the foundation for all subsequent compliance efforts.
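To make this concrete, the sketch below shows one way an organisation might structure entries in a central AI register. It is a minimal illustration only, assuming a Python-based internal tool; the field names, categories, and the example vendor are our own and are not prescribed by the Act.

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    """Role the organisation plays for a given system (see Article 2)."""
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

@dataclass
class AIRegisterEntry:
    """One row in a central AI inventory. Fields are illustrative."""
    system_name: str
    description: str
    business_owner: str
    vendor: str | None           # None for internally developed tools
    roles: set[Role]             # an organisation may hold more than one role
    embedded_in_product: bool    # AI functionality shipped to customers?
    risk_tier: str = "unassessed"  # to be filled in during Step 2

# Example: a third-party CV-screening tool used by HR (hypothetical vendor)
entry = AIRegisterEntry(
    system_name="CV screening assistant",
    description="Ranks job applications against role criteria",
    business_owner="HR Director",
    vendor="ExampleVendor Ltd",
    roles={Role.DEPLOYER},
    embedded_in_product=False,
)
print(entry.system_name, [r.value for r in entry.roles])
```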
Step 2: Assess Risk and Identify Prohibited Uses
Once AI systems have been identified, the next step is to assess and classify them according to the risk categories set out in the AI Act. This classification exercise is key to the framework, as it determines whether a system is prohibited, subject to strict regulatory requirements, or only lightly regulated.
The Act prohibits a limited category of AI uses that are considered fundamentally incompatible with EU values. These include practices such as social scoring, certain forms of biometric categorisation, and systems that exploit vulnerabilities or manipulate individuals in ways that may cause harm. Under the proposed Digital Omnibus, both the Council and the European Parliament have also called for a new prohibited practice targeting AI systems that generate or manipulate realistic, sexually explicit or intimate images of identifiable individuals without their consent — commonly referred to as AI "nudification" systems. This prohibition would not apply where effective technical safeguards are in place to prevent such content from being generated. Organisations should carefully review their existing and planned AI use cases to ensure that none fall within these prohibited categories, as such uses will not be permitted under any circumstances.
Beyond prohibited uses, particular attention should be given to AI systems that may be classified as high-risk. These typically include systems used in areas such as employment decisions, education, access to essential services, and creditworthiness assessments. High-risk systems are subject to extensive regulatory requirements and will require a more structured and resource-intensive compliance approach.
Step 3: Conduct a Gap Analysis for High-Risk AI
For AI systems that are likely to fall within the high-risk category, organisations should carry out a detailed gap analysis against the requirements of the AI Act. This involves assessing whether existing processes and controls adequately address key areas such as risk management, data governance, documentation, transparency, and human oversight.
In many cases, businesses, particularly those operating in regulated sectors, may already have elements of these frameworks in place. However, the AI Act introduces a more formalised and prescriptive structure, which may require organisations to enhance or adapt their existing governance arrangements to meet the new standards.
This stage also provides an opportunity to consider whether additional assessments will be required before deploying certain AI systems. In particular, organisations may need to carry out fundamental rights impact assessments to evaluate the potential impact of AI on individuals and ensure that appropriate safeguards are in place.
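As a practical starting point, the sketch below turns the requirement areas named above into a simple checklist that can be scored per high-risk system. The area names come from this guide; the status values and structure are our own illustration.

```python
# Requirement areas named in this guide, assessed per high-risk system.
REQUIREMENT_AREAS = [
    "risk management",
    "data governance",
    "technical documentation",
    "transparency",
    "human oversight",
]

def gap_report(assessments: dict[str, str]) -> list[str]:
    """Return the areas where existing controls are not yet adequate."""
    return [area for area in REQUIREMENT_AREAS
            if assessments.get(area, "missing") != "adequate"]

# Example self-assessment for one system (illustrative values)
status = {
    "risk management": "adequate",
    "data governance": "partial",
    "transparency": "adequate",
}
print(gap_report(status))
# ['data governance', 'technical documentation', 'human oversight']
```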
The Test for High-Risk AI
An AI system is considered "high-risk" under Article 6 of the EU AI Act in two main situations. First, if it is built into a product (or is the product itself) that falls under EU safety legislation and must undergo a third-party conformity assessment before being sold or used.
Second, if it is specifically listed as high-risk under EU law because of how it is used. However, there are exceptions: even if an AI system is listed, it may not be treated as high-risk if it only performs a narrow task, simply helps improve a human decision that has already been made, detects patterns without replacing human judgment, or is only used as a preparatory step in a decision-making process.
That said, any AI system used to profile individuals is always considered high-risk. If a company believes its AI system should not be classified as high-risk, it must document and justify that assessment before launch, register the system, and provide evidence to regulators if requested. The European Commission will also publish guidance and practical examples to clarify how these rules apply and will update them over time as technology and market practices evolve.
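For illustration only, the simplified decision logic below mirrors the Article 6 test as summarised above. It is a sketch, not a substitute for legal analysis: the actual assessment turns on the detailed wording of Annexes I and III and the Commission's forthcoming guidance.

```python
def is_high_risk(
    in_regulated_product: bool,    # safety component of a product under EU safety law
    listed_use_case: bool,         # use case listed as high-risk (Annex III)
    narrow_task_exception: bool,   # narrow task, preparatory step, etc.
    profiles_individuals: bool,
) -> bool:
    """Simplified Article 6 test as described in this guide."""
    # Profiling of individuals is always high-risk, overriding the exceptions
    if profiles_individuals:
        return True
    # Embedded in (or is) a product subject to third-party conformity assessment
    if in_regulated_product:
        return True
    # Listed use case, unless a narrow-task exception applies
    if listed_use_case and not narrow_task_exception:
        return True
    return False

# A CV-screening tool: a listed employment use case, no exception applies
print(is_high_risk(False, True, False, False))  # True
```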
Step 4: Establish Governance and Accountability
Preparing for the AI Act requires more than technical adjustments; it demands a coordinated organisational response. Effective compliance will depend on clear governance structures and well-defined accountability across different parts of the business.
Organisations should identify who is responsible for overseeing AI compliance and ensure that roles and responsibilities are clearly documented. This includes establishing processes for approving new AI use cases, managing changes to existing systems, monitoring performance, and responding to incidents or complaints.
A key aspect of this framework is human oversight. The AI Act places significant emphasis on ensuring that AI systems do not operate without appropriate supervision. Businesses must therefore ensure that individuals tasked with overseeing AI systems have the necessary authority, expertise, and understanding of the associated risks.
On registration, note that while the Commission's Omnibus proposal originally suggested removing the obligation to register AI systems in the EU database where a provider self-assesses those systems as non-high-risk under Article 6(3), both the Council and Parliament have rejected this and reinstated the registration requirement — albeit with proposals to streamline the information that needs to be provided. Governance frameworks should therefore continue to account for registration obligations.
Step 5: Review Documentation and Contracts
The AI Act introduces detailed documentation requirements, particularly for high-risk systems. Organisations will need to maintain comprehensive records that explain how their AI systems function, the purposes for which they are used, and any limitations associated with their deployment.
These requirements extend beyond internal documentation. Businesses should also review their contractual arrangements with AI vendors, suppliers, and partners to ensure that they have access to the information needed to demonstrate compliance. This includes technical documentation, performance data, and clear allocation of responsibilities across the AI value chain. Taking a proactive approach to contract management will be essential, as organisations cannot meet their obligations in isolation where third-party AI systems are involved.
Step 6: Prepare for Transparency Obligations
Transparency is a central theme of the AI Act and reflects a broader regulatory focus on trust and accountability in AI systems. In certain situations, organisations will be required to inform individuals when they are interacting with AI or when AI is being used to make or support decisions that affect them.
In addition to these disclosure obligations, providers of generative AI systems will be required to ensure that AI-generated audio, image, video, and text content is marked in a machine-readable format and detectable as artificially generated or manipulated — commonly referred to as the "watermarking" obligation. Under the proposed Digital Omnibus, the European Parliament has proposed a deadline of 2 November 2026 for compliance with this obligation, shortening the Commission's originally proposed six-month grace period. Organisations developing or deploying generative AI tools should factor this earlier date into their planning.
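By way of illustration only, the snippet below attaches a machine-readable "AI-generated" label to a PNG image using Pillow's standard metadata support. This is a minimal example of the concept, not a compliant implementation: robust marking under the Act is expected to follow emerging technical standards (such as content-provenance schemes) and will depend on the media type and tooling involved. The key names and model identifier are our own.

```python
# Minimal illustration: embed a machine-readable label in PNG metadata.
# Requires Pillow (pip install Pillow). Not a robust watermark on its own.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Copy an image, adding a metadata flag marking it as AI-generated."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")           # illustrative key name
    metadata.add_text("generator", "example-model-v1")  # hypothetical model id
    image.save(dst_path, pnginfo=metadata)

# Usage: label_as_ai_generated("output.png", "output_labelled.png")
```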
While transparency obligations are a legal requirement, they also carry important reputational implications. Businesses that communicate clearly and openly about their use of AI are more likely to build trust with customers, employees, and other stakeholders.
To support this, organisations should develop internal guidance to ensure that staff understand when transparency obligations apply and how disclosures should be communicated in a clear and accessible manner.
Step 7: Invest in AI Literacy
The AI Act places explicit emphasis on AI literacy, recognising that effective compliance depends on the understanding and awareness of those involved in developing, deploying, and overseeing AI systems. This obligation, set out in Article 4 of the Act, has applied since 2 February 2025.
The Digital Omnibus has introduced debate around the future scope of this obligation. The Commission proposed shifting the AI literacy obligation away from individual providers and deployers, instead framing it as a framework to be led by the Commission and Member States. The Council supported this approach. However, the European Parliament's position retains the obligation on providers and deployers directly, while proposing to reduce the standard from ensuring "a sufficient level of AI literacy" to "supporting the improvement of AI literacy" among staff. The Parliament also proposes that the Commission issue practical implementation guidance and encourages public-private partnerships to support broader literacy efforts.
The final position will be determined through trilogue. In the meantime, the existing Article 4 obligation remains in force and enforceable by national market surveillance authorities. For deployers of high-risk AI systems in particular, the obligation to ensure staff are trained to enable proper human oversight remains unaffected regardless of the Omnibus outcome.
Step 8: Monitor Deadlines and Plan Ahead
The EU AI Act is being rolled out in stages, and proposed changes under the Digital Omnibus may further adjust certain deadlines. Businesses should plan against both the current statutory timeline and the proposed Omnibus amendments.
Under the current AI Act:
From 2 February 2025: Prohibitions on unacceptable-risk AI and AI literacy obligations came into force.
From 2 August 2025: Governance provisions and obligations for general-purpose AI (GPAI) model providers became applicable, along with the establishment of EU governance structures including the AI Board, Scientific Panel, and Advisory Forum. Member States were also required to appoint national regulators and set out penalties under their own laws.
From 2 August 2026: Most remaining obligations under the AI Act apply, including rules for high-risk AI systems listed in Annex III and key transparency requirements. Enforcement begins at both national and EU level, and each Member State is expected to have at least one regulatory sandbox operational.
From 2 August 2027: Rules extend to high-risk AI systems embedded into regulated products, completing the final phase of the Act.
Proposed changes under the Digital Omnibus (subject to trilogue):
Both the Council and the European Parliament support a revised timeline linking the application of high-risk obligations to the availability of implementation standards and support tools. Under these proposals:
High-risk AI systems listed in Annex III (stand-alone systems, including those involving biometrics, critical infrastructure, education, employment, essential services, law enforcement, and border management): rules to apply by 2 December 2027 at the latest.
High-risk AI systems embedded in regulated products (Annex I): rules to apply by 2 August 2028 at the latest.
Watermarking obligations for AI-generated content: proposed deadline of 2 November 2026 under the Parliament's position.
Importantly, these revised dates are not yet law. If trilogue negotiations do not conclude in time, the original 2 August 2026 deadline for high-risk obligations would continue to apply. Organisations should therefore not defer compliance preparation pending the Omnibus outcome.
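For planning purposes, a simple countdown against the statutory dates above can help keep milestones visible. The sketch below uses only the dates cited in this guide; the milestone labels are our own shorthand.

```python
from datetime import date

# Statutory dates under the current AI Act, as listed above.
DEADLINES = {
    "Prohibitions and AI literacy": date(2025, 2, 2),
    "GPAI and governance provisions": date(2025, 8, 2),
    "Most remaining obligations (incl. Annex III high-risk)": date(2026, 8, 2),
    "High-risk AI embedded in regulated products": date(2027, 8, 2),
}

today = date.today()
for milestone, deadline in DEADLINES.items():
    days = (deadline - today).days
    status = f"{days} days remaining" if days > 0 else "already applicable"
    print(f"{milestone}: {deadline.isoformat()} ({status})")
```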
A Note on SMEs and Small Mid-Cap Enterprises
The AI Act already includes simplified compliance measures for small and medium-sized enterprises (SMEs). The Digital Omnibus proposes to extend these same regulatory support measures to small mid-cap enterprises (SMCs) — defined as enterprises employing fewer than 750 persons and with either an annual turnover not exceeding EUR 150 million or a balance sheet total not exceeding EUR 129 million. Both the Council and the European Parliament support this extension. Businesses that fall within the SMC threshold should monitor this development closely, as it may reduce their compliance burden if adopted in the final text.
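Based on the thresholds cited above, a quick screening check might look like the following. Note that the formal definition involves further conditions beyond these figures, so this is indicative only.

```python
def may_qualify_as_smc(headcount: int,
                       turnover_eur: float,
                       balance_sheet_eur: float) -> bool:
    """Indicative check against the SMC thresholds cited in this guide:
    fewer than 750 employees AND (turnover <= EUR 150m OR
    balance sheet total <= EUR 129m)."""
    return headcount < 750 and (
        turnover_eur <= 150_000_000 or balance_sheet_eur <= 129_000_000
    )

print(may_qualify_as_smc(600, 140_000_000, 200_000_000))  # True
print(may_qualify_as_smc(800, 100_000_000, 100_000_000))  # False (headcount)
```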
Enforcement and Risk Exposure
The AI Act introduces significant financial penalties for non-compliance, reflecting the seriousness with which regulators view the risks associated with AI. For the most serious infringements, fines can reach up to €35 million or 7% of an organisation's global annual turnover, whichever is higher. Lower thresholds apply for other types of breaches, but the potential exposure remains substantial.
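To illustrate the exposure mechanics: because the cap for the most serious infringements is the higher of the fixed amount and the turnover-based figure, large groups face the percentage-based limit, as the worked example below shows.

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper limit for the most serious infringements:
    the higher of EUR 35m and 7% of global annual turnover."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# A group with EUR 2bn global turnover: 7% = EUR 140m, exceeding EUR 35m
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```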
Beyond compliance, the EU AI Act is likely to have a broader impact on how businesses approach AI from a strategic perspective. Organisations may need to reassess the viability of certain use cases, particularly where they fall within high-risk categories or require significant investment in governance, documentation, and oversight.
The regulation is also likely to influence procurement decisions, as businesses increasingly look to work with AI vendors that can demonstrate compliance and provide the necessary level of transparency. In addition, the introduction of regulatory checks into the development lifecycle may affect time-to-market for AI-enabled products and services.
Does This Apply to Non-EU Businesses?
One of the most significant features of the EU AI Act is its extraterritorial scope. The regulation applies not only to organisations established within the European Union, but also to those located outside the EU where AI systems are placed on the EU market or used in a way that affects individuals within the EU.
In practical terms, this means that businesses headquartered in other jurisdictions may still be subject to the Act if they offer AI-enabled products or services to EU customers or if their systems are used in EU-based operations. As a result, organisations should not assume that their geographic location determines whether the regulation applies, and should instead assess their exposure based on how and where their AI systems are used.
The EU AI Act marks a shift towards structured, risk-based governance of artificial intelligence. While the framework is complex, its underlying objective is clear: to ensure that AI is developed and used in a way that is safe, transparent, and aligned with fundamental rights.
For businesses, this means taking a proactive and informed approach. Organisations that invest time now in understanding their AI use, assessing risk, and implementing appropriate controls will be better positioned to meet regulatory expectations and to use AI as a responsible driver of innovation and growth.
How Can Gerrish Legal Help?
Gerrish Legal is a dynamic digital law firm. We pride ourselves on giving high-quality and expert legal advice to our valued clients. We specialise in many aspects of digital law such as GDPR, data privacy, digital and technology law, commercial law, and intellectual property.
We give companies the support they need to successfully and confidently run their businesses whilst complying with legal regulations without the burdens of keeping up with ever-changing digital requirements.
We are here to help you. Get in contact with us today for more information.