2025 Digital Law Roundup

With legislators and supervisory authorities across Europe, China and beyond focused on simplifying compliance, strengthening oversight of emerging technologies, and building frameworks that encourage innovation without compromising fundamental rights, digital law in 2025 took a decisive turn.

As the year closes, organisations find themselves navigating a more coordinated yet increasingly demanding set of expectations across AI governance, cybersecurity, data access and use, and cross-border digital operations. This roundup brings together the most notable developments of 2025 and highlights the direction of travel as we head into 2026.

1. Europe’s Digital Package

One of the most significant developments of the year was the European Commission’s introduction of its comprehensive digital package, a wide-ranging initiative designed to streamline regulatory obligations, reduce administrative burden, and create more predictable pathways for businesses scaling across the EU. At its core sits a new digital omnibus, which consolidates rules spanning artificial intelligence, cybersecurity and data into a more coherent framework. The Commission’s goal is to remove layers of unnecessary complexity while maintaining Europe’s long-standing commitment to high standards of data protection, safety and fairness. Early impact assessments point to potential administrative savings of up to €5 billion by 2029, with further efficiencies expected as harmonised digital processes mature.

A central feature of the package is the proposed European Business Wallet, a unified digital identity solution that would allow companies and public bodies to sign, store and exchange verified documents across all 27 Member States. For businesses currently juggling fragmented national systems, the projected €150 billion in annual efficiency savings offers a sense of how transformative this tool could become in managing cross-border operations.

The package also revisits the EU AI Act, introducing pragmatic adjustments designed to make implementation more realistic for businesses, particularly SMEs. Compliance timelines for high-risk AI systems will be tied to the availability of technical standards, with certain obligations expected to take effect only in 2027 or 2028. 

Simplifications to documentation and broader access to regulatory sandboxes will extend beyond SMEs to include small mid-cap companies, easing the compliance burden on a wider set of organisations. Oversight for systems built using general-purpose AI models will be increasingly centralised through the AI Office, reducing the fragmentation that had previously been a concern for industry. Importantly, providers and deployers will have a new legal basis to process special category data for the purpose of detecting and correcting bias, although this permission will remain tightly controlled and subject to detailed safeguards. Overall, these adjustments reflect a more measured and deployment-focused approach to AI governance, one that maintains protection while acknowledging operational reality.

The digital package also proposes a major overhaul of cybersecurity reporting. At present, businesses face overlapping and often duplicative obligations across regimes such as NIS2, GDPR and DORA. The creation of a single secure reporting interface aims to consolidate these requirements into one streamlined process. Although developing such a system will require extensive testing, its eventual impact on reducing operational complexity could be substantial.

A number of longstanding challenges in data and cookie regulation are targeted as well. Updates to the Data Act will clarify rules on data access and cloud switching, provide exemptions for smaller organisations and offer model contracts to reduce legal uncertainty. Cookie management, long criticised for its burdensome user experience, will also be reformed. Centralised preference mechanisms such as browser-level controls are expected to replace repetitive banner prompts, with one-click choices becoming the norm. Taken together, these measures support the Commission’s broader ambition of reducing regulatory overheads by 25% by 2029.
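To make the shift to browser-level preference management concrete, the sketch below shows how a site's server-side logic might honour a machine-readable opt-out signal sent by the browser instead of prompting each visitor with a banner. The proposal does not yet name a specific signal; the `Sec-GPC` header from the existing Global Privacy Control initiative is used here purely as an illustration, and the cookie categories are hypothetical.

```python
# Sketch: honouring a browser-level privacy preference signal server-side.
# "Sec-GPC: 1" is the request header defined by the existing Global Privacy
# Control proposal; the EU reforms may settle on a different signal, so
# treat this as illustrative only.

NON_ESSENTIAL = {"analytics", "advertising"}  # hypothetical categories

def allowed_cookies(request_headers: dict) -> set:
    """Return the cookie categories the site may set for this request."""
    categories = {"strictly_necessary"} | NON_ESSENTIAL
    # A browser-level opt-out replaces the per-site banner prompt:
    if request_headers.get("Sec-GPC") == "1":
        categories -= NON_ESSENTIAL
    return categories

# Browser sends an opt-out signal: only essential cookies are permitted.
print(allowed_cookies({"Sec-GPC": "1"}))  # {'strictly_necessary'}
```

The point of the design is that the choice is expressed once, at browser level, and applied uniformly across sites, which is exactly the "one-click" experience the Commission's reform envisages.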

2. Data Union Strategy: Expanding Access While Protecting Sovereignty

Running alongside the digital omnibus is the Commission’s Data Union Strategy, a roadmap that seeks to unlock high-quality datasets for AI development and industrial use while reinforcing the protection of sensitive EU data. The strategy envisions new data labs and controlled access environments that will support research and innovation in areas including health, mobility and climate. A dedicated Data Act helpdesk will guide organisations through complex compliance requirements, helping them understand obligations that have often been difficult to interpret. The strategy also introduces new tools aimed at strengthening data sovereignty, such as frameworks to prevent data leakage and clearer criteria for assessing how EU-origin data is handled outside the bloc. Collectively, this marks a notable shift toward a more enabling approach, one that maintains robust safeguards while facilitating responsible and competitive data use.

3. China: Rapid Expansion of Generative AI Services

Beyond Europe, China continued to scale its regulatory infrastructure for generative AI at an unprecedented rate. By November 2025, authorities had formally registered 611 generative AI services at the national level, and more than 300 applications built on these registered models, ranging from chat interfaces to embedded AI features, had been recorded by local cyberspace regulators. This rapid expansion reflects both the maturity of China’s registration system under the 2023 Measures on Generative AI and the country’s accelerating pace of deployment. For multinational companies operating in or entering the Chinese market, the trend points to a regulatory environment that is becoming more structured, more predictable and more administratively demanding.

4. New EDPS Guidance: AI Risk Management in Focus

The European Data Protection Supervisor (EDPS) added further momentum to the regulatory landscape by publishing detailed guidance on risk management for AI systems used by EU institutions. Although directed at the public sector, the guidance is equally instructive for private organisations using AI to process personal data. It reinforces the importance of fairness, urging organisations to address both algorithmic and dataset bias in a systematic way. It also highlights the need for robust accuracy checks, particularly in relation to model drift and the reliability of outputs over time. The guidance emphasises data minimisation during AI development, discouraging broad data collection practices that lack a clear justification. Security risks such as model leakage, inference attacks and insecure APIs are addressed as well, reflecting the increasingly sophisticated threat landscape surrounding AI systems.

A particularly important contribution of the guidance is its distinction between interpretability and explainability. Interpretability concerns understanding how an AI system works internally, whereas explainability focuses on communicating the reasoning behind a specific outcome. This distinction is expected to become central as organisations prepare for the transparency and human oversight requirements set out in the AI Act, making it a critical area for early attention.

5. United Kingdom: Preparing the Workforce for AI and Disruptive Technologies

In the UK, the government expressed a renewed focus on workforce readiness for AI and emerging technologies, commissioning the Financial Services Skills Commission (FSSC) to produce a comprehensive assessment of the sector’s future skills needs. The initiative reflects a recognition that sustaining the UK’s position as a leading global financial centre will depend not only on regulatory agility, but on the ability to attract, train and retain a workforce capable of adopting and scaling new technologies. 

The review, due by mid-2027, will examine which disruptive technologies are likely to reshape business models, regional productivity and customer outcomes over the next decade, and will evaluate the skills required for successful deployment, from advanced data literacy and AI oversight to cyber resilience and digital transformation capability. Crucially, the FSSC has been tasked with setting out a concrete, system-wide plan to build these skills, involving employers, training providers, government bodies and industry partners, supported where relevant by cost-benefit analysis.

By anchoring the work within the broader Financial Services Growth and Competitiveness Strategy, the UK is placing skills development at the centre of its long-term competitiveness, signalling that the ability to operationalise AI safely and effectively will be as important as the technologies themselves.

What to Expect in 2026

With negotiations on the Digital Omnibus anticipated in early 2026 and adoption potentially following as soon as the first quarter, organisations should expect a year of transition, preparation and increased operational demands. High-risk AI rules will move closer to practical implementation, making it essential for organisations to monitor the publication of technical standards that will trigger compliance obligations. 

Cross-border digital processes are likely to undergo significant automation as pilots of the European Business Wallet begin to emerge. Cybersecurity incident reporting should become more harmonised and less duplicative as the single reporting interface progresses. Technical reforms to cookie consent frameworks are expected to change user experience and advertising models, particularly as browser-based preference management becomes the default. At the same time, regulators both within and outside the EU are converging on core principles of fairness, transparency and accountability in AI, placing growing emphasis on rigorous risk management.

For organisations operating across multiple jurisdictions, 2026 will be a pivotal year to strengthen internal governance, upgrade technical systems and prepare for a more interconnected and more demanding digital compliance environment.

How Can Gerrish Legal Help?

Gerrish Legal is a dynamic digital law firm. We pride ourselves on giving high-quality and expert legal advice to our valued clients. We specialise in many aspects of digital law such as GDPR, data privacy, digital and technology law, commercial law, and intellectual property. 

We give companies the support they need to successfully and confidently run their businesses whilst complying with legal regulations without the burdens of keeping up with ever-changing digital requirements. 


We are here to help you. Get in contact with us today for more information.
