Roughly one in two executives in Germany now uses AI tools daily. Many of them use Claude by Anthropic. What most don't realize: they are probably violating the GDPR.
This isn't a theoretical risk. It's a ticking time bomb.
The Illusion of Standard Contractual Clauses
When companies use Claude via claude.ai or the apps, many rely on Standard Contractual Clauses (SCCs) — those contractual provisions designed to legitimize data transfers to third countries.
The problem: since the Schrems II ruling by the CJEU (Case C-311/18, 2020), SCCs alone are not automatically sufficient. The European Court of Justice made it unequivocally clear that supplementary technical and organizational measures are required whenever the recipient country does not ensure an essentially equivalent level of data protection. And the United States, despite the EU-US Data Privacy Framework, remains problematic in the eyes of many data protection authorities.
The uncomfortable truth: Anyone entering confidential business data, customer data, or personal information into Claude is transferring it to Anthropic's servers in the US. The SCCs in the terms of service are just paper — without a robust Transfer Impact Assessment (TIA) and additional protective measures, companies are on thin ice.
Dark Patterns: How Anthropic Manipulates Consent
It gets worse. In August 2025, Anthropic fundamentally changed its privacy policies — introducing an interface that EU data protection experts classify as a "dark pattern":
- A pre-checked toggle for "Allow Anthropic to use my chats for training"
- A large, black "Accept" button — visually dominant, inviting quick click-through
- The option to decline? A faint "Not now", barely visible
The European Data Protection Board (EDPB) has stated in its Guidelines 03/2022 on deceptive design patterns that pre-ticked checkboxes and default-on toggles do not constitute valid, unambiguous consent as the GDPR (Art. 4(11)) requires. Anthropic's consent interface runs directly counter to these requirements.
From 30 Days to 5 Years
Data retention was also massively extended: from 30 days to up to five years for users who accept the new terms. Five years during which your business conversations, strategy documents, and customer information sit on Anthropic's servers, potentially being used to train future models.
In plain terms:
- Trade secrets could flow into future models
- Personal data of your customers is stored for years — without their knowledge
- In the event of a data breach or government request, the data is readily available
The Cowork Problem: When AI Accesses Your Desktop
With Claude Cowork, Anthropic has taken a new step: the AI works directly on your computer — organizing files, analyzing documents, executing workflows.
Security researchers at PromptArmor have already demonstrated a critical vulnerability: through indirect prompt injection, a manipulated document can instruct the AI agent to exfiltrate sensitive data to external servers. In a corporate environment, this means: a single weaponized PDF is enough to extract confidential information.
For decision-makers working with confidential documents, this is a nightmare scenario.
Two-Tier Data Privacy
Particularly notable: Anthropic has introduced a two-tier system. Enterprise customers with expensive contracts are exempt from data use for training. Free and Pro users? Unless they spot and flip that default-on toggle, they supply the training material.
The message is clear: If you don't pay premium, you pay with your data.
For the German Mittelstand, where it is often the CEO personally who works with Claude Pro, this is fatal. The person entering the most sensitive business data (contracts, strategies, HR matters) gets the worst data protection.
The Solution Exists — And It's Simpler Than You Think
The good news: Claude itself isn't the problem. The problem is how you access Claude.
AWS Bedrock: Claude in Europe, Under European Rules
Through Amazon Web Services Bedrock, Claude can be operated in the eu-central-1 (Frankfurt) region; a minimal code sketch follows the list below. This means:
- ✅ Data residency in the EU — Your data never leaves European soil
- ✅ No training on your data — AWS Bedrock doesn't use your inputs for model training
- ✅ DPA with AWS — a GDPR Data Processing Addendum is part of AWS's standard Service Terms
- ✅ Encryption — In transit and at rest, with customer-managed keys (KMS)
- ✅ Audit trail — Full traceability via CloudWatch and CloudTrail
- ✅ In-region processing — Bedrock offers the option to keep requests strictly within one region
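What this looks like in practice: a minimal sketch using the AWS SDK for Python (boto3) and Bedrock's Converse API. The model ID is an assumption; check which Claude versions are enabled for your account in eu-central-1. The key point is that the client is pinned to the Frankfurt region.

```python
import boto3

# Pinning the client to eu-central-1 means the request is processed
# in Frankfurt; it never touches Anthropic's own US infrastructure.
client = boto3.client("bedrock-runtime", region_name="eu-central-1")

response = client.converse(
    # Assumed model ID; verify availability in your Bedrock model catalog.
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[
        {"role": "user", "content": [{"text": "Summarize the key risks in this draft contract."}]}
    ],
    inferenceConfig={"maxTokens": 500, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```

Because the region is fixed when the client is constructed, requests and responses stay in eu-central-1; add a customer-managed KMS key and CloudTrail, and you have covered the checklist above.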
The difference is fundamental: instead of going to Anthropic, your data stays in an AWS environment in Frankfurt, under a European legal framework, with a provider (AWS) that has been demonstrating GDPR compliance for years.
What This Means in Practice
Consider two scenarios:
Scenario A (Claude directly): Your CFO enters a financial forecast into Claude. The data goes to Anthropic servers in the US. Stored for up to 5 years. May be used for model training. SCCs as the only "protection."
Scenario B (Claude via Bedrock): Your CFO enters the same forecast. The data stays in Frankfurt. Not stored after processing (unless you enable invocation logging yourself). No training. DPA, encryption, audit trail: all documented and auditable.
Same AI model. Same quality. Fundamentally different data protection.
What Companies Should Do Now
- Conduct an audit: Who in your organization uses Claude or other AI tools? Through which channel? With what data?
- Assess the risk: What data flows into these tools? Personal data? Trade secrets?
- Switch infrastructure: Move from direct access to GDPR-compliant infrastructure, whether through AWS Bedrock, Azure, or another European solution (see the guardrail sketch after this list)
- Establish policies: Clear AI usage policies defining what data may be entered into which AI tools
- Document everything: Transfer Impact Assessments, records of processing activities (Art. 30 GDPR), and the technical and organizational measures you rely on
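To make the infrastructure switch enforceable rather than aspirational, here is one possible guardrail, a sketch assuming your users reach Claude through AWS: an IAM policy (the policy name is illustrative) that denies Bedrock model invocation outside eu-central-1. An organization-wide SCP can achieve the same at the account level.

```python
import json

import boto3

# Deny Bedrock model invocation in any region other than Frankfurt.
# "aws:RequestedRegion" is a global IAM condition key; the Converse API
# is authorized via the bedrock:InvokeModel action.
REGION_LOCK_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyBedrockOutsideFrankfurt",
            "Effect": "Deny",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": "eu-central-1"}
            },
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="bedrock-frankfurt-only",  # illustrative name
    PolicyDocument=json.dumps(REGION_LOCK_POLICY),
)
```

Attached to the roles your applications and users assume, this turns "data stays in Frankfurt" from a policy statement into something the platform itself enforces.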
Conclusion
Using Claude isn't the problem — the path to it is. Companies using Claude directly through Anthropic expose themselves to real GDPR risk. Companies running Claude through European infrastructure use the same technology — legally compliant, secure, and without compromises on quality.
At Cierra, that's exactly what we do: we run Claude in a GDPR-compliant setup via AWS Bedrock in Frankfurt, daily, for ourselves and for our clients. No compromises, no smoke and mirrors.
If you want to know what this could look like for your company — get in touch.