Inside the Irish General Scheme of the Regulation of Artificial Intelligence Bill 2026

With the publication of the Irish General Scheme of the Regulation of Artificial Intelligence Bill 2026, Ireland has set out how it intends to implement and operationalise the EU AI Act at national level. While the EU AI Act establishes the overarching rules, this Irish legislation builds the institutional and procedural architecture that will make those rules work in practice.

For businesses developing, deploying or procuring AI systems, the Bill provides important insight into how enforcement, supervision and cooperation will function domestically. Although the tone of the General Scheme is technical, the underlying policy aim is clear: to strike a balance between innovation and competitiveness on the one hand, and governance and the protection of fundamental rights on the other.

What Is A National AI Regulatory Sandbox?

One of the most significant features of the Bill is the establishment of a national AI regulatory sandbox, or Ireland’s participation in one established at EU level. Regulatory sandboxes are supervised environments in which companies can develop and test innovative technologies in cooperation with regulators before full market deployment. In the context of AI, this is particularly valuable given the complexity of classification, conformity assessment and compliance obligations under the EU AI Act.

The sandbox will be administered by Ireland’s AI Office, which may collaborate with market surveillance authorities and other national bodies. It will have the power to issue guidance, set participation criteria, enter into EU-level arrangements and monitor activities taking place within the sandbox environment. The overarching objectives mirror those of the EU AI Act itself: innovation and competitiveness, facilitating the development of an AI ecosystem, supporting evidence-based regulatory learning and accelerating access to the EU market for SMEs and start-ups.

Notably, small and medium-sized enterprises are to be given priority access, free of charge, and participation procedures must be clear and accessible. This reflects a deliberate policy choice. Rather than viewing regulation as a barrier to entry, the Irish framework attempts to position compliance as an integrated part of product development. For early-stage companies, engagement with regulators at an earlier stage may reduce the risk of costly redesign or enforcement action at a later point.

However, the sandbox is not a regulatory exemption zone. Where personal data is processed as part of sandbox activities, the Data Protection Commission must be involved. GDPR obligations continue to apply in full. The sandbox model is therefore designed to support compliant innovation, not to dilute existing data protection standards. Its formal commencement will depend on the adoption of relevant EU implementing acts, anticipated in 2026, but the legislative foundation is now being put in place.

Coordinated Oversight and Controlled Data Sharing

The Bill also clarifies the role of the AI Office as a coordinating authority. In practice, this means it may receive complaints or information relating to AI systems that fall within the remit of other competent authorities. Given that AI regulation intersects with consumer protection, product safety, equality law, financial regulation and data protection, this coordinating function is essential to prevent fragmentation.

To enable effective coordination, the Bill provides a legal basis for the AI Office to disclose personal data in defined circumstances. Disclosure may occur where it is considered necessary and proportionate for the effective implementation of the AI Act or for directing a complaint, in whole or in part, to the appropriate competent authority. Information may also be shared with EU institutions, designated fundamental rights bodies, and, where relevant to the prevention or investigation of criminal offences, with An Garda Síochána.

These powers are accompanied by safeguards. Any disclosure must meet the threshold of necessity and proportionality. Where special categories of personal data are involved, processing must comply with the General Data Protection Regulation and the Data Protection Act 2018. Individuals are to be notified of disclosures where practicable, and personal data processed for law enforcement purposes must be permanently deleted once no longer required. In addition, any data-sharing agreement entered into by the AI Office must comply with governance requirements under Irish law, with modifications to reflect the specific context of AI regulation.

The inclusion of parliamentary oversight mechanisms and the potential requirement for data protection impact assessments before additional bodies are prescribed for data sharing further reinforces accountability. For businesses, this structure highlights that AI-related complaints or investigations may involve coordinated regulatory engagement rather than isolated oversight.

Market Surveillance and Enforcement Powers

The Bill designates relevant Market Surveillance Authorities (MSAs) to oversee compliance of high-risk AI systems, in alignment with the EU Market Surveillance Regulation. Ireland has chosen not to derogate from the default EU approach, meaning that existing authorities responsible for certain product categories will automatically assume responsibility for corresponding high-risk AI systems.

These authorities are granted significant investigative and corrective powers. They may require economic operators to provide technical documentation, specifications, compliance data and supply chain information. They may conduct unannounced inspections, enter premises used for business purposes and initiate investigations on their own initiative. Where non-compliance is identified, they may require corrective action, restrict or prohibit the placing of systems on the market, order withdrawal or recall and impose penalties in accordance with the AI Act.

In serious cases, including where a product presents a significant risk and other measures have proven ineffective, authorities may require the removal of online content relating to a product or restrict access to an online interface. They may also acquire product samples, including under cover identity, and reverse-engineer systems to assess compliance.

A particularly sensitive issue is access to source code. Under the Bill, Market Surveillance Authorities may request access to the source code of a high-risk AI system only where such access is necessary to assess conformity with the AI Act and where other auditing and documentation-based procedures have been exhausted or are insufficient. This establishes source code access as an exceptional measure rather than a routine requirement, balancing effective supervision with protection of intellectual property and commercial confidentiality.

For providers, the practical message is clear: thorough documentation, testing records and compliance processes will be key to demonstrating conformity and minimising the likelihood of intrusive investigative measures.

Integration with Financial Regulation

The General Scheme also amends the Central Bank Act 1942 to create an information-sharing gateway for the Central Bank. Professional secrecy obligations restrict disclosure of confidential information unless there is a clear statutory basis for doing so. By adding the EU AI Act and related instruments to the list of designated enactments, the amendments enable the Central Bank to exchange information with the AI Office, market surveillance authorities, the European Commission and other relevant bodies, subject to EU law requirements.

For financial institutions using AI systems in areas such as creditworthiness assessment, fraud detection, customer analytics or algorithmic trading, this integration signals that AI governance will form part of mainstream regulatory supervision. 

Temporary Authorisation in Exceptional Circumstances

The Bill also provides for the temporary authorisation of certain high-risk AI systems without prior completion of full conformity assessment procedures in exceptional circumstances. This mechanism reflects provisions within the EU AI Act allowing derogations where necessary to address urgent public needs, including public health, safety or national security.

Such authorisations would be strictly limited in scope and duration, subject to conditions and ongoing monitoring, and capable of revocation if risks to health, safety or fundamental rights emerge. The relevant Market Surveillance Authority would be required to notify the AI Office of any derogation and to publish a summary of the decision, unless publication would compromise public safety or national security.

This is not designed as a commercial fast track. Rather, it recognises that in rare and urgent situations, such as emergency response during a natural disaster, there may be a need for rapid deployment of AI systems before formal conformity assessment can be completed.

Cooperation with the EU AI Office and General-Purpose AI Models

One of the more nuanced additions in the General Scheme concerns how Ireland will deal with general-purpose AI models, the foundation systems that can be adapted and integrated into countless downstream applications. Under the EU AI Act, these models are subject to specific obligations set out in Chapter V, and oversight at EU level sits primarily with the EU AI Office.

The Irish Bill recognises that supervision of general-purpose AI cannot operate purely at national level. Where a Market Surveillance Authority in Ireland is examining an AI system built on top of a general-purpose model, it may need technical insight or documentation that sits beyond the immediate reach of the Irish provider. In such cases, the legislation allows for formal cooperation with the EU AI Office, including requests for assistance or access to relevant information.

This is particularly significant in cross-border scenarios. Many general-purpose AI models are developed and maintained by providers operating across multiple Member States. An Irish authority investigating compliance may not have direct access to the underlying model documentation or technical architecture. The cooperation mechanism ensures that Irish regulators are not limited by jurisdictional barriers or by the complexity of the technology itself.

In practical terms, this creates a layered supervisory system. Irish Market Surveillance Authorities retain responsibility for AI systems placed on the Irish market, but where those systems rely on general-purpose models, the EU AI Office can provide technical support, coordinate enforcement or facilitate joint investigations. For businesses, this means that compliance in relation to general-purpose AI is unlikely to be assessed in isolation. The supply chain, from foundation model provider to system developer, may come under scrutiny where necessary.

Confidentiality and Cybersecurity: Protecting Sensitive Information

A recurring concern among AI developers is the protection of intellectual property and commercially sensitive material during regulatory engagement. The General Scheme addresses this directly by embedding confidentiality obligations into the functions of Market Surveillance Authorities.

Authorities carrying out investigations or assessments must comply with the confidentiality framework set out in the AI Act. This includes protecting trade secrets, proprietary algorithms, technical documentation and, where accessed, source code. The principle is that effective supervision must not come at the cost of unjustified exposure of commercially valuable information.

Confidentiality is not absolute, but it is strongly emphasised. Information obtained in the course of regulatory activity must be handled in accordance with EU and national law, balancing transparency and enforcement with the protection of intellectual property rights and public security interests. For companies engaging with regulators, this provides a degree of assurance that disclosure during an investigation does not equate to public dissemination.

Alongside confidentiality, the Bill places an explicit obligation on relevant authorities to maintain an adequate level of cybersecurity. This is more than symbolic. Where regulators are receiving sensitive documentation, technical data or system information, the integrity and security of that information becomes critical. The requirement aligns with Ireland’s broader public sector cybersecurity standards and reflects an understanding that regulatory bodies themselves must model the governance expectations placed on industry.

For businesses, this dual emphasis on confidentiality and cybersecurity should reduce concerns that cooperation with regulators might inadvertently expose vulnerabilities or proprietary systems.

Reclassification: When “Non-High-Risk” Is Not the Final Word

The EU AI Act operates on a risk-based classification system, with high-risk AI systems subject to more stringent obligations, including conformity assessment, documentation, transparency and post-market monitoring. Providers are responsible for initially classifying their systems. However, the Irish Bill makes clear that this classification is not immune from review.

If a Market Surveillance Authority has reasonable grounds to believe that a system labelled as non-high-risk in fact meets the criteria for high-risk classification under Annex III of the AI Act, it may initiate an evaluation. This assessment involves examining the system against the relevant legal criteria and any guidance issued at EU level.

Should the authority conclude that the system falls within the high-risk category, it can require the provider to comply with the full suite of obligations applicable to high-risk systems. This may include conducting or completing a conformity assessment, strengthening technical documentation, implementing risk management measures and establishing post-market monitoring processes within a specified timeframe.

If the provider fails to take the required corrective action, enforcement measures may follow, including restriction, withdrawal or prohibition of the system from the market.

This mechanism is central to preserving the integrity of the risk-based approach. Without it, there would be a risk that systems with significant societal impact could be deployed under an incorrect classification, whether through misunderstanding or strategic under-classification. The Bill therefore reinforces that risk designation is a substantive regulatory matter.

For organisations using AI, it is important that risk assessments are carefully reasoned, documented and revisited where system functionality evolves. The classification decision should be defensible, evidence-based and aligned with EU guidance. In a framework built on trust and accountability, classification is likely to be one of the first areas examined if concerns arise.

What Does The Irish Regulation Mean For Businesses?

For organisations operating in Ireland, the key takeaway is that AI governance is becoming embedded within domestic regulatory architecture. Businesses should be reviewing whether their systems are likely to be classified as high-risk under the EU AI Act, ensuring that documentation and compliance processes are robust, and considering early engagement with supervisory authorities where appropriate.

Ireland’s approach is not limited to transposing EU rules mechanically. It is about building a functioning supervisory network capable of addressing technological complexity, cross-border development and evolving risk profiles.

For businesses operating in this space, compliance will increasingly involve engagement with a web of national and EU actors. Governance structures, documentation standards and internal risk assessments should be designed with that broader ecosystem in mind. The Regulation of Artificial Intelligence Bill 2026 does not simply add another layer of regulation; it integrates AI governance into Ireland’s existing regulatory culture, with cooperation, accountability and proportionality as its guiding principles.

Taken as a whole, the General Scheme of the Regulation of Artificial Intelligence Bill 2026 demonstrates that Ireland is moving beyond high-level policy statements to detailed regulatory infrastructure. The emphasis on structured innovation through sandboxes, coordinated oversight through the AI Office, robust market surveillance powers and integration with existing sectoral regulators reflects a comprehensive approach.

How Can Gerrish Legal Help?

Gerrish Legal is a dynamic digital law firm. We pride ourselves on giving high-quality and expert legal advice to our valued clients. We specialise in many aspects of digital law such as GDPR, data privacy, digital and technology law, commercial law, and intellectual property. 

We give companies the support they need to run their businesses successfully and confidently, staying compliant without the burden of keeping up with ever-changing digital requirements.

We are here to help you. Get in contact with us today for more information.
