AI and GDPR Compliance in 2026: A Practical Data Protection Guide


The regulatory window is closing. In 2025, European data protection authorities issued a combined total of more than €1.15 billion in GDPR fines. The EU AI Act is now layering on additional obligations: certain AI practices have been prohibited since February 2025, and on 2 August 2026 the full requirements for high-risk AI systems come into force, with maximum penalties under the Act of €35 million or 7% of global annual turnover.
If your company uses AI to process customer data, run HR tools, perform credit scoring, or handle medical information, this regulatory pressure already applies to you. Compliance is achievable. This guide gives you the concrete steps to get there.
Understand the Dual Framework: GDPR + EU AI Act
Most organisations are familiar with GDPR. Fewer have mapped how the EU AI Act adds a second layer of obligations — both apply simultaneously when AI systems process personal data.
GDPR (Regulation 2016/679) governs how personal data is processed — on what legal basis, for how long, and with what rights for data subjects. For AI systems, the key provisions are:
- Article 5 — processing principles (minimisation, purpose limitation, accuracy)
- Article 22 — rights in automated decision-making
- Article 35 — mandatory Data Protection Impact Assessment (DPIA)
EU AI Act (Regulation 2024/1689) classifies AI systems by risk level and sets technical and documentation requirements. High-risk AI systems under Annex III — including recruitment tools, credit scoring, medical diagnostics, and educational assessment — must complete a full conformity assessment before 2 August 2026.
Critical point: when a high-risk AI system processes personal data, you need both a GDPR DPIA under Article 35 and a Fundamental Rights Impact Assessment (FRIA) under AI Act Article 27. Run these as a single integrated process — not separately.
Step 1: Inventory Your AI Systems
You cannot protect what you have not identified. Many organisations are surprised by how many AI tools they actually use — from ChatGPT for internal communication to automated candidate screening systems.
What to Include
Document every AI system, including:
- Third-party tools integrated into your workflows (CRM, HRM, finance platforms)
- Generative AI tools used by employees
- AI capabilities embedded in existing software (e.g. predictive analytics in your ERP)
For each system, record: risk category under the AI Act, legal basis for data processing under GDPR, and whether special category data is involved.
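Even a lightweight, structured inventory beats a spreadsheet nobody maintains. Below is a minimal sketch of what such a record could look like; the class, field names, and risk categories are illustrative assumptions rather than terms defined in either regulation.

```python
from dataclasses import dataclass, field
from enum import Enum

class AIActRiskCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"          # Annex III (or Annex I) systems
    LIMITED_RISK = "limited_risk"    # transparency obligations only
    MINIMAL_RISK = "minimal_risk"

@dataclass
class AISystemRecord:
    """One entry in the AI system inventory (illustrative structure)."""
    name: str
    vendor: str                        # internal build or third-party provider
    purpose: str                       # what the system is used for
    risk_category: AIActRiskCategory   # classification under the AI Act
    gdpr_legal_basis: str              # e.g. "Art. 6(1)(b) contract"
    special_category_data: bool        # Art. 9 GDPR data involved?
    dpia_completed: bool = False
    dpa_signed: bool = False           # data processing agreement with the vendor
    notes: list[str] = field(default_factory=list)

# Example entry
screening_tool = AISystemRecord(
    name="CV screening assistant",
    vendor="Example HR SaaS",
    purpose="Pre-ranking of job applications",
    risk_category=AIActRiskCategory.HIGH_RISK,   # Annex III: employment
    gdpr_legal_basis="Art. 6(1)(f) legitimate interest",
    special_category_data=False,
)
```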
Risk Classification
Annex III of the AI Act covers eight high-risk categories. Those most commonly relevant to businesses include:
- Recruitment and personnel selection systems
- Credit scoring tools
- AI in healthcare and medical diagnosis
- Assessment and evaluation systems in education
If a system does not fall into any of these categories and poses no risk to fundamental rights, it likely qualifies as minimal risk and only requires basic AI literacy measures.
Step 2: Conduct a DPIA Before Deployment
A DPIA is not a formality: it is a legal obligation under GDPR Article 35 whenever processing is likely to result in a high risk to individuals' rights and freedoms. Skipping it is a common cause of regulatory fines: in Croatia, for example, the DPA issued a €4.5 million fine based in part on the absence of a risk assessment for a data transfer.
When a DPIA Is Mandatory
You must conduct a DPIA when processing involves any of the following (a simple trigger check is sketched after this list):
- Systematic processing of special category data (health, biometric data, racial or ethnic origin)
- Automated profiling with legal or similarly significant effects on individuals
- Large-scale monitoring of publicly accessible spaces
- Any high-risk AI system under the AI Act that processes personal data
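A minimal sketch of that trigger check, assuming the simplified flags below; it is a screening aid wired into project intake or change management, not a substitute for legal analysis of the Article 35(3) criteria.

```python
from dataclasses import dataclass

@dataclass
class ProcessingProfile:
    """Simplified description of a planned processing activity."""
    special_category_data: bool             # health, biometric, racial or ethnic origin, ...
    automated_significant_decisions: bool   # profiling with legal or similar effects
    large_scale_public_monitoring: bool
    high_risk_ai_system: bool               # Annex III system processing personal data

def dpia_required(p: ProcessingProfile) -> bool:
    """Return True if any of the listed triggers applies.

    Borderline cases should go to the DPO, with the reasoning documented either way.
    """
    return any([
        p.special_category_data,
        p.automated_significant_decisions,
        p.large_scale_public_monitoring,
        p.high_risk_ai_system,
    ])

# Example: an automated credit-scoring pilot
profile = ProcessingProfile(
    special_category_data=False,
    automated_significant_decisions=True,
    large_scale_public_monitoring=False,
    high_risk_ai_system=True,
)
assert dpia_required(profile)  # DPIA must be completed before deployment
```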
DPIA Structure for AI Systems
A sound DPIA for AI covers four areas:
- Description of processing — what data, for what purpose, on which legal basis (Art. 6 GDPR), who is controller and who is processor
- Necessity and proportionality — is the processing proportionate to the purpose? Has data minimisation been applied?
- Risk assessment — concrete threats (inaccurate decisions, discrimination, data leakage, privacy violations), their likelihood and severity
- Risk mitigation measures — technical and organisational controls; whether residual risk is acceptable
For high-risk AI systems under the AI Act, extend this structure with FRIA elements: impact on vulnerable groups, algorithmic bias, and decision transparency.
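To keep the integrated DPIA/FRIA reviewable and versionable, the four areas above (plus the FRIA extensions) can be mirrored in a structured record. The schema below is an illustrative assumption, not an official template from the EDPB or the AI Office.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    threat: str            # e.g. "discriminatory ranking of applicants"
    likelihood: str        # e.g. "medium"
    severity: str          # e.g. "high"
    mitigations: list[str] = field(default_factory=list)
    residual_risk_acceptable: bool = False

@dataclass
class DpiaFriaRecord:
    """Integrated DPIA (Art. 35 GDPR) + FRIA (Art. 27 AI Act) record."""
    # 1. Description of processing
    purpose: str
    data_categories: list[str]
    legal_basis: str               # Art. 6 GDPR
    controller: str
    processors: list[str]
    # 2. Necessity and proportionality
    necessity_justification: str
    data_minimisation_applied: bool
    # 3. + 4. Risks and mitigation measures
    risks: list[Risk]
    # FRIA extensions for high-risk AI systems
    affected_vulnerable_groups: list[str] = field(default_factory=list)
    bias_testing_summary: str = ""
    transparency_measures: str = ""
```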
Step 3: Build the Right Technical Controls
Regulatory compliance requires concrete technical solutions — not just policies and documentation.
Data Minimisation and Pseudonymisation
AI models perform better with more data, but GDPR mandates data minimisation. The balance is achievable: train on pseudonymised datasets and keep direct identifiers separate from the analytical data. The EDPB's December 2024 opinion on AI models confirms that large language models rarely meet the threshold for anonymisation; when in doubt, treat the data as personal data.
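A minimal sketch of that identifier/analytics split, using a keyed hash (HMAC) so pseudonyms cannot be recomputed without a separately stored key. The field names and storage layout are assumptions for illustration; a real deployment also needs key management and a documented re-identification procedure, and pseudonymised data remains personal data under GDPR.

```python
import hashlib
import hmac
import secrets

# The pseudonymisation key must be stored separately from the analytical
# dataset (e.g. in a KMS/HSM), with access restricted and logged.
PSEUDONYM_KEY = secrets.token_bytes(32)

def pseudonymise(identifier: str) -> str:
    """Derive a stable, non-reversible pseudonym for a direct identifier."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

raw_record = {"email": "ana@example.com", "age": 41, "defaulted": False}

# Analytical dataset: pseudonym plus the attributes the model needs, nothing more
training_record = {
    "subject_id": pseudonymise(raw_record["email"]),
    "age": raw_record["age"],
    "defaulted": raw_record["defaulted"],
}
# The mapping email -> subject_id is *not* stored alongside the training data.
```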
Encryption and Access Control
- Encryption at rest and in transit (AES-256 for storage, TLS 1.3 for transfer)
- Role-based access control — staff access only the data required for their role (a minimal sketch follows this list)
- Full audit trail on all access to sensitive data
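A minimal sketch of the role-based access and audit-trail items, assuming an in-process permission check and a JSON audit log; in practice these controls usually live in your IAM layer and SIEM rather than in application code.

```python
import json
import logging
from datetime import datetime, timezone
from functools import wraps

audit_log = logging.getLogger("audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.StreamHandler())  # in production: ship to a SIEM

ROLE_PERMISSIONS = {
    "hr_analyst": {"read_candidate_profile"},
    "hr_admin": {"read_candidate_profile", "export_candidate_data"},
}

def requires_permission(permission: str):
    """Allow the call only if the user's role grants the permission; log every attempt."""
    def decorator(func):
        @wraps(func)
        def wrapper(user_id: str, role: str, *args, **kwargs):
            allowed = permission in ROLE_PERMISSIONS.get(role, set())
            audit_log.info(json.dumps({
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "user": user_id,
                "role": role,
                "action": permission,
                "allowed": allowed,
            }))
            if not allowed:
                raise PermissionError(f"{role} may not perform {permission}")
            return func(user_id, role, *args, **kwargs)
        return wrapper
    return decorator

@requires_permission("export_candidate_data")
def export_candidate_data(user_id: str, role: str, candidate_id: str) -> None:
    ...  # actual export logic

export_candidate_data("u-123", "hr_admin", "cand-42")   # logged and allowed
```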
On-Premise, Hybrid Cloud, or SaaS?
Your infrastructure choice directly affects compliance. On-premise solutions keep data within the EU and give you full control but require internal capacity. Hybrid cloud lets you process sensitive data locally while offloading lower-risk workloads to the cloud. For SaaS AI tools, always verify: which jurisdiction processes your data, whether the provider has signed a Data Processing Agreement (DPA), and whether cross-border transfers outside the EU are covered by Standard Contractual Clauses (SCCs).
The enforcement record is clear on this point: the €530 million fine imposed on TikTok in 2025 by the Irish Data Protection Commission was specifically for unlawful transfers of European user data. Check the contracts with your AI vendors.
Privacy by Design
Privacy by Design is not optional — under GDPR Article 25, it is an obligation. In practice this means:
- Data protection built into the system's architecture from the start
- Default settings that provide the highest level of privacy (see the configuration sketch after this list)
- No collecting data "just in case" for future uses
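In code, privacy by default largely comes down to what happens when nobody changes the settings. The configuration sketch below uses invented option names; the point is that the zero-configuration state is the most protective one.

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    """Illustrative defaults: the most protective state requires no action."""
    store_raw_prompts: bool = False          # do not retain inputs unless needed
    use_data_for_model_training: bool = False
    retention_days: int = 30                 # shortest period that serves the purpose
    analytics_opt_in: bool = False           # opt-in, never opt-out
    share_with_third_parties: bool = False

# A new tenant or user gets the protective defaults automatically
settings = PrivacySettings()
```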
Step 4: Ensure Transparency and Data Subject Rights
GDPR gives individuals concrete rights that AI systems complicate but do not remove.
Automated Decision-Making
Article 22 GDPR gives individuals the right not to be subject to decisions based solely on automated processing where those decisions have legal or similarly significant effects. For HR AI, credit scoring, and similar systems, you must provide: the right to human review, the right to contest the decision, and a clear explanation of the logic applied.
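One way to honour those three guarantees in an HR or credit-scoring pipeline is to treat the model output as a recommendation that cannot take effect without a named human reviewer. The sketch below uses invented names (score_application, DecisionRecord) and shows the control flow only, not a production system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

def score_application(features: dict) -> tuple[str, str]:
    """Placeholder for the actual model; returns (recommendation, explanation)."""
    ratio = features.get("debt_to_income", 0.0)
    if ratio > 0.6:
        return "reject", "debt-to-income ratio above threshold (0.6)"
    return "approve", "debt-to-income ratio within acceptable range"

@dataclass
class DecisionRecord:
    """Traceable record of an automated recommendation and its human review."""
    subject_id: str
    model_version: str
    recommendation: str                  # e.g. "reject", "approve"
    explanation: str                     # main factors behind the recommendation
    reviewed_by: str | None = None
    final_decision: str | None = None
    contest_channel: str = "appeals@example.com"  # where the person can object
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def decide(subject_id: str, features: dict, reviewer: str) -> DecisionRecord:
    recommendation, explanation = score_application(features)  # model inference
    record = DecisionRecord(
        subject_id=subject_id,
        model_version="credit-scoring-v3.2",
        recommendation=recommendation,
        explanation=explanation,
    )
    # Art. 22: a human with authority to overrule confirms or changes the
    # outcome before it takes legal effect for the data subject.
    record.reviewed_by = reviewer
    record.final_decision = recommendation  # the reviewer may override this value
    return record

record = decide("cust-001", {"debt_to_income": 0.72}, reviewer="loan.officer@example.com")
```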
This is particularly relevant now — the EDPB has chosen compliance with transparency and information obligations (Articles 12-14 GDPR) as the focus of its 2026 coordinated enforcement action.
Informing Data Subjects
If an AI system processes data about your customers or employees, they must be informed: what data, on what basis, for what purpose, and whether automated decisions are made. That notice must be clear and accessible — not buried in 40 pages of terms and conditions.
Step 5: Implement Data Governance and Monitoring
One-time compliance is not enough. AI systems evolve, regulations are updated, and risks shift.
Records and Documentation
GDPR requires a Record of Processing Activities. For AI systems, this should also include: model version, legal basis, training data details, and retention periods.
Under the AI Act, high-risk systems require additional technical documentation per Annex IV and automated logging for traceability.
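For the automated logging requirement, an append-only, structured record per inference is usually enough to reconstruct what the system did and with which model version. The sketch below writes JSON lines to a local file; the field names are assumptions, and inputs are stored only as a hash to respect data minimisation.

```python
import hashlib
import json
from datetime import datetime, timezone

LOG_PATH = "ai_inference_log.jsonl"   # in production: append-only, access-controlled storage

def log_inference(model_version: str, input_payload: dict, output: str) -> None:
    """Append one traceability record per model invocation (logging sketch)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash rather than store raw input: keeps the trail useful without
        # duplicating personal data into the log.
        "input_sha256": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode("utf-8")).hexdigest(),
        "output": output,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_inference("cv-screening-v1.4",
              {"applicant_id": "A-17", "years_experience": 6},
              "shortlisted")
```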
Regular Audits
Plan the following reviews at least annually:
- Review the legal basis for each processing activity
- Bias testing on AI models (a screening metric is sketched after this list)
- Review of contracts with AI tool vendors
- Update DPIAs following material changes to the system
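For the bias-testing item, one widely used screening metric is the disparate impact ratio (the "four-fifths rule" from US selection practice): each group's selection rate divided by the rate of the most favoured group. It is a coarse screen rather than a full fairness audit, and the 0.8 threshold below is a convention, not a legal requirement.

```python
def selection_rate(outcomes: list[int]) -> float:
    """Share of positive outcomes (1 = selected/approved, 0 = not)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratios(groups: dict[str, list[int]]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate."""
    rates = {g: selection_rate(o) for g, o in groups.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Example: outcomes of an automated screening step, split by a protected attribute
outcomes_by_group = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # selection rate 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # selection rate 0.375
}

for group, ratio in disparate_impact_ratios(outcomes_by_group).items():
    flag = "review for bias" if ratio < 0.8 else "ok"
    print(f"{group}: ratio {ratio:.2f} -> {flag}")
```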
Who Is Responsible?
If your organisation processes personal data at large scale or systematically — you likely need a Data Protection Officer (DPO). A DPO is not just a GDPR requirement; for high-risk AI systems, they are a practical necessity for coordinating between legal, IT, and business teams.
Common Mistakes and How to Avoid Them
| Mistake | Consequence | Fix |
|---|---|---|
| Deploying AI without a DPIA | Direct violation of Art. 35 GDPR; fines up to €20M | Conduct DPIA before deployment, not after |
| Using an AI vendor without a signed DPA | Violation of Art. 28 GDPR; significant fine risk | Require a signed DPA before any use |
| LLM tools without checking data handling | Unclear legal basis; potential data breach exposure | Use enterprise-tier versions with guaranteed data isolation |
| Automated HR decisions without appeal mechanism | Violation of Art. 22 GDPR | Build a mandatory human review step into the workflow |
| Skipping AI system risk classification | High-risk systems: fines up to €15M under AI Act from August 2026 | Inventory and classify now, not at the deadline |
Deadlines You Cannot Miss
- 2 August 2026 — full application of EU AI Act requirements for high-risk AI systems; conformity assessments completed, CE marking affixed, registration in EU database required
- 2026 (ongoing) — EDPB and European Commission to publish joint guidelines on the interplay between the AI Act and GDPR
- 2026 (coordinated action) — EDPB auditing compliance with transparency obligations under GDPR Articles 12-14 across the EU
For complex systems, a conformity assessment typically takes six to twelve months. If your company uses AI that falls under the Annex III high-risk categories, the time to prepare is now.
Data protection in AI is not just a compliance cost — it is a competitive differentiator. Your clients and partners increasingly ask how you handle their data. Organisations with a clear strategy earn trust and avoid regulatory exposure at the same time.
If you are not sure where to start, our AI strategy and consulting team can review your AI systems and identify compliance gaps. You can also assess your company's AI readiness with our free tool.
For specific questions or a consultation, get in touch.
Sources
- EDPB 2025 Annual Report (published 9 April 2026) — https://www.edpb.europa.eu/
- EU AI Act (Regulation 2024/1689) — https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
- EDPB Opinion on AI Models (December 2024) — https://www.edpb.europa.eu/news/news/2024/edpb-opinion-ai-models-gdpr-principles-support-responsible-ai_en
- CMS Enforcement Tracker Report 2025 — https://cms-lawnow.com/en/ealerts/2026/01/2025-in-data-protection
- Osborne Clarke Regulatory Outlook January 2026 — https://www.osborneclarke.com/insights/regulatory-outlook-january-2026-data-law
- Secure Privacy: GDPR Compliance Guide 2026 — https://secureprivacy.ai/blog/gdpr-compliance-2026
- Secure Privacy: EU AI Act 2026 Compliance — https://secureprivacy.ai/blog/eu-ai-act-2026-compliance
