Claude AI Updates Explained: Privacy and Confidentiality at Stake?

On 28th September 2025, Anthropic introduced significant updates to the Terms of Service and Privacy Policy for its popular AI model, Claude. While these updates may seem procedural, they mark a shift in how user data is treated, and for many businesses, they could fundamentally alter what “confidentiality” means in the age of generative AI.

What Is Claude AI?

Claude is a large language model developed by Anthropic, an AI research company founded by former OpenAI employees. Like ChatGPT, Claude can generate text, summarise documents, draft emails, analyse data, and assist with complex reasoning tasks.

Claude is currently available through various access tiers, including Free, Pro, Max, Team, and Enterprise accounts, and it can also be accessed through an API or via platforms like Amazon Bedrock. Its versatility and reputation for producing nuanced, high-quality responses have made it one of the most popular AI tools among professionals, including lawyers, writers, and business strategists.
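For illustration, the short sketch below shows what API access typically looks like using Anthropic’s official Python SDK. The API key, model identifier, and prompt are placeholders rather than recommendations, and, as discussed later in this article, this route is governed by Anthropic’s Commercial Terms rather than its consumer terms.

```python
# Illustrative sketch only: accessing Claude via Anthropic's official Python SDK.
# The API key, model identifier, and prompt below are placeholders.
import anthropic

client = anthropic.Anthropic(api_key="YOUR_API_KEY")

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # example model identifier
    max_tokens=500,
    messages=[
        {"role": "user", "content": "Summarise the key confidentiality obligations in this clause: ..."}
    ],
)

print(response.content[0].text)
```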

However, with the recent updates to Anthropic’s Terms of Service and Privacy Policy, users now face a critical question: how much control do they truly have over the data they share with Claude?

A Turning Point for Data and AI

Under the new framework, Claude will begin using data from consumer accounts, including the Free, Pro, and Max tiers, to train its models. Only organisations covered by Anthropic’s separate Commercial Terms remain fully exempt from this data training process.

This distinction may seem subtle, but it carries immense practical consequences. Thousands of small and medium-sized businesses relying on paid consumer accounts could now be unintentionally feeding sensitive information into AI training pipelines. In other words, paying for Claude Pro doesn’t necessarily mean you’re protected.

The Illusion of “Professional” Accounts

One of the most pressing issues is the misleading account classification. The term “Pro” intuitively suggests business-level privacy. Yet, under Anthropic’s new terms, Pro and Max accounts remain squarely within the consumer category.

This creates a compliance blind spot for firms handling regulated or sensitive data. A law firm using a $20 Pro account, for example, might believe it’s safeguarding client communications when, in fact, those conversations may be incorporated into AI training datasets.

Meanwhile, true commercial protection applies only to Claude for Work (Team and Enterprise plans) and API accounts governed by Anthropic’s separate Commercial Terms. These contracts explicitly prohibit model training on customer data and include more stringent data processing and retention safeguards.

The updated policy also introduces a new five-year data retention window for users who permit training, a dramatic increase on the previous 30-day retention period. This change means that user interactions could persist far longer than before, raising questions about long-term data exposure, retrievability, and compliance with evolving privacy regulations such as the GDPR or the UK Data Protection Act 2018.

Anthropic’s contract framework now resembles a multi-layered legal ecosystem, combining Terms of Service, Privacy Policy, and Usage Policy, each governing different user categories. The complexity means that understanding which documents apply to your account isn’t just administrative; it determines your data ownership rights, confidentiality obligations, and exposure risk.

Consumer users can technically “opt out” of training, but this setting is toggled on by default. Unless businesses actively review and update their preferences, they may already be participating in training without realising it.

What Businesses Should Do Now

For organisations using Claude or any generative AI platform, the implications are immediate.
Here’s what business leaders and compliance officers should prioritise:

  1. Identify your account type. Confirm whether your Claude access falls under consumer or commercial terms. Paid does not always mean protected.

  2. Review the Privacy Policy and opt-out options. Ensure training permissions are correctly configured for your use case.

  3. Reassess your data sharing practices. Avoid inputting sensitive, regulated, or confidential data into consumer-tier AI models.

  4. Negotiate contractual protections. For enterprise use, insist on formal Data Processing Agreements (DPAs) and Commercial Terms that explicitly restrict data use.

A Turning Point in the Governance of Artificial Intelligence

Anthropic’s recent policy update represents more than an isolated contractual revision; it may signal a broader shift in how AI governance is evolving across the private sector. As artificial intelligence systems become increasingly embedded in professional and commercial workflows, the tension between technological advancement and data protection grows more complex and more consequential.

This shift highlights an emerging reality for professions built on confidentiality and trust, such as law, consulting, healthcare, and education. The question is no longer whether AI should be used, but how it can be governed responsibly. In this sense, data protection has moved beyond a matter of regulatory compliance: it has become a core component of professional trust.

How Can Gerrish Legal Help?

Gerrish Legal is a dynamic digital law firm. We pride ourselves on giving high-quality and expert legal advice to our valued clients. We specialise in many aspects of digital law such as GDPR, data privacy, digital and technology law, commercial law, and intellectual property. 

We give companies the support they need to successfully and confidently run their businesses whilst complying with legal regulations, without the burden of keeping up with ever-changing digital requirements.


We are here to help you. Get in contact with us today for more information.
