
EU AI Act Compliance Checklist 2026: Complete Guide

April 14, 2026

The EU AI Act represents the most significant piece of artificial intelligence regulation in history. With full enforcement beginning on August 2, 2026, organizations worldwide are racing to understand what compliance actually requires. Whether you are a startup deploying a single chatbot or an enterprise running dozens of AI models across multiple business units, this comprehensive checklist will guide you through every requirement step by step.

This guide is organized around the seven compliance pillars that every organization must address. Each section includes specific action items, regulatory references, and practical guidance based on the final text of the regulation. Bookmark this page and use it as your living compliance roadmap.

Critical Deadlines You Cannot Miss

  • February 2, 2025: Prohibited AI practices ban already in effect
  • August 2, 2025: GPAI model obligations and governance rules active
  • August 2, 2026: Full enforcement of high-risk AI requirements, transparency duties, and penalty regime
  • August 2, 2027: Obligations for high-risk AI in Annex I regulated products

1. AI System Inventory and Classification

The foundation of EU AI Act compliance is knowing exactly what AI systems your organization uses and how each one is classified under the regulation's risk-based framework. Without a comprehensive inventory, compliance is impossible.

Building Your AI Inventory

Start by cataloging every AI component in your technology stack. This includes systems you have built in-house, third-party APIs with AI capabilities (such as OpenAI, Google Cloud AI, or AWS Bedrock), embedded AI features in SaaS products you use, and any automated decision-making systems. For each system, document its intended purpose, the data it processes, who it affects, and what role your organization plays (provider, deployer, importer, or distributor).
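One practical way to keep this inventory auditable is to record each system as a structured entry rather than a spreadsheet row. The sketch below shows one possible record shape in Python; the field names and example values are illustrative assumptions, not a schema prescribed by the regulation.

```python
from dataclasses import dataclass, field

# Illustrative inventory record. Field names are assumptions chosen to
# mirror the documentation points listed above, not an official schema.
@dataclass
class AISystemRecord:
    name: str                     # internal identifier for the system
    vendor_or_internal: str       # "in-house", "OpenAI", "AWS Bedrock", ...
    intended_purpose: str         # what the system is actually used for
    data_processed: list[str]     # categories of data the system touches
    affected_parties: list[str]   # customers, employees, applicants, ...
    org_role: str                 # "provider", "deployer", "importer", "distributor"
    risk_tier: str = "unclassified"  # filled in during classification (step 2)

# Example entry: a third-party chatbot your organization deploys.
inventory: list[AISystemRecord] = [
    AISystemRecord(
        name="customer-support-chatbot",
        vendor_or_internal="OpenAI",
        intended_purpose="Answer customer billing questions",
        data_processed=["contact details", "account history"],
        affected_parties=["customers"],
        org_role="deployer",
    )
]
```

Keeping the record machine-readable makes the later steps (classification, transparency scanning, audit reporting) much easier to automate.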

Risk Classification Framework

The EU AI Act classifies AI systems into four risk tiers, each with different compliance obligations:

  • Unacceptable Risk (Prohibited): Systems that perform social scoring, exploit vulnerable groups, use subliminal manipulation, or deploy untargeted facial recognition scraping. These must be discontinued immediately.
  • High Risk (Annex III): AI used in biometrics, critical infrastructure, education, employment, essential services (credit scoring, insurance), law enforcement, migration management, and democratic processes. These require full compliance with Articles 8 through 15.
  • Limited Risk: Chatbots, AI-generated content systems, emotion recognition, and biometric categorization. These require specific transparency disclosures under Article 50.
  • Minimal Risk: AI-enabled video games, spam filters, and similar low-impact systems. No specific obligations, though voluntary codes of conduct are encouraged.
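A first-pass triage over your inventory can be automated. The helper below is a deliberately simplified sketch: the category keywords paraphrase the four tiers above and are not a substitute for legal analysis of each system.

```python
# Simplified triage helper. The keyword sets paraphrase the four-tier
# summary above; real classification requires case-by-case legal review.
PROHIBITED = {"social scoring", "subliminal manipulation",
              "untargeted facial recognition scraping"}
HIGH_RISK = {"biometrics", "critical infrastructure", "education",
             "employment", "credit scoring", "insurance",
             "law enforcement", "migration management"}
LIMITED_RISK = {"chatbot", "ai-generated content",
                "emotion recognition", "biometric categorization"}

def classify(use_case: str) -> str:
    """Return the presumptive risk tier for a use-case keyword."""
    uc = use_case.lower()
    if uc in PROHIBITED:
        return "unacceptable"
    if uc in HIGH_RISK:
        return "high"
    if uc in LIMITED_RISK:
        return "limited"
    return "minimal"
```

Running every inventory entry through a triage step like this quickly surfaces which systems need the full Articles 8-15 treatment and which only need Article 50 disclosures.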

Use CompliPilot's free compliance scanner to automatically classify your AI systems and identify compliance gaps across all four risk categories.

2. High-Risk System Requirements (Articles 8-15)

If any of your AI systems are classified as high-risk, the following seven requirements are mandatory. Each must be documented, implemented, and demonstrable to regulators.

  • Risk Management System (Art. 9): Implement a continuous, iterative process to identify, analyze, estimate, evaluate, and mitigate risks throughout the entire AI lifecycle. This is not a one-time assessment but an ongoing obligation.
  • Data Governance (Art. 10): Ensure training, validation, and testing datasets meet strict quality criteria. Document data provenance, preparation methods, labeling processes, and known biases or gaps.
  • Technical Documentation (Art. 11): Maintain comprehensive documentation covering system design, architecture, algorithms, training processes, performance metrics, known limitations, and intended use.
  • Record-Keeping (Art. 12): Implement automatic logging of system operations enabling traceability. Define retention periods appropriate to the system's risk level.
  • Transparency (Art. 13): Provide clear instructions to deployers covering capabilities, limitations, accuracy levels, foreseeable misuse scenarios, and maintenance requirements.
  • Human Oversight (Art. 14): Design systems enabling effective human oversight. Operators must be able to understand outputs, detect anomalies, and intervene or stop operation when necessary.
  • Accuracy, Robustness, and Cybersecurity (Art. 15): Declare accuracy levels, implement resilience against errors and adversarial attacks, and protect against cybersecurity threats.
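Of the seven requirements, the Article 12 record-keeping duty translates most directly into engineering work. As a minimal sketch (the log schema and field names are assumptions, not prescribed by the Act), automatic logging of each operation might look like this:

```python
import datetime
import json
import logging

# Sketch of Article 12-style automatic logging: every operation is
# recorded with a UTC timestamp so it can be traced later. The schema
# is illustrative, not an official format.
logger = logging.getLogger("ai-audit")

def log_decision(system_id: str, input_ref: str,
                 output: str, operator: str) -> dict:
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,   # a reference, not raw personal data
        "output": output,
        "operator": operator,     # who was overseeing the system
    }
    logger.info(json.dumps(record))  # ship to your retention-managed store
    return record
```

Logging a reference to the input rather than the input itself helps keep the audit trail compatible with GDPR data-minimization duties.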

Need help generating the required documentation? Use the compliance documentation templates to accelerate this process.

3. Transparency and Disclosure Obligations

Even if your AI system is not classified as high-risk, transparency obligations under Article 50 may still apply. These requirements affect the majority of organizations deploying AI.

  • AI Interaction Disclosure: Users must be informed when they are interacting with an AI system (chatbots, virtual assistants, automated customer service) unless it is obvious from context.
  • AI-Generated Content Labeling: Synthetic text, images, audio, and video must be labeled as AI-generated using machine-readable metadata where technically feasible.
  • Deepfake Disclosure: AI-manipulated or generated content depicting real persons or events must be clearly labeled.
  • Emotion Recognition Notice: If your AI performs emotion recognition, subjects must be informed before processing begins.
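For the content-labeling duties, the Act asks for machine-readable marking where technically feasible. The snippet below is one possible approach for HTML content; the meta tag name and notice markup are illustrative assumptions, not an established standard.

```python
# Sketch: attaching both a machine-readable marker and a human-readable
# notice to AI-generated HTML before publishing. The meta tag name and
# CSS class here are illustrative, not a standardized vocabulary.
def label_ai_content(html_body: str) -> str:
    disclosure = ('<meta name="generator-disclosure" '
                  'content="ai-generated">')
    notice = ('<p class="ai-notice">'
              'This content was generated by an AI system.</p>')
    return f"{disclosure}\n{notice}\n{html_body}"
```

For images, audio, and video, embedded provenance metadata (for example, emerging standards such as C2PA) serves the same machine-readable purpose.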

Run a transparency compliance scan to check whether your website and applications meet these disclosure requirements.

4. Conformity Assessment and Registration

Before placing a high-risk AI system on the EU market, you must complete a conformity assessment process. The type of assessment depends on your specific use case:

  • Internal Conformity Assessment: Most high-risk AI systems can be self-assessed by the provider against the requirements of Articles 8-15. You must document findings, test results, and corrective actions.
  • Third-Party Audit: Certain biometric identification and critical infrastructure AI systems require assessment by a notified body (independent audit organization).
  • EU Declaration of Conformity: Prepare and sign a formal written declaration for each high-risk system.
  • CE Marking: Affix the CE marking to compliant high-risk AI systems before market placement.
  • EU Database Registration: Register high-risk AI systems in the EU-wide database before deployment. Both providers and deployers have registration duties.

5. GPAI Model Obligations

If your organization develops or deploys general-purpose AI models (large language models, foundation models, multi-modal models), additional obligations have been in effect since August 2, 2025:

  • Model Documentation: Maintain model cards covering architecture, training methodology, data sources, evaluation results, and known limitations.
  • Copyright Compliance: Document copyrighted training data and comply with EU copyright law, including opt-out mechanisms for rights holders.
  • Systemic Risk Models: GPAI models with systemic risk (generally trained with over 10^25 FLOPs or designated by the AI Office) must conduct adversarial testing, implement incident reporting, and maintain adequate cybersecurity measures.
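The documentation and systemic-risk points above can be combined into a simple check. The model card below is a hedged sketch: the field names and values are hypothetical, and the 10^25 FLOPs figure is only the default presumption (the AI Office can also designate models directly).

```python
# Hypothetical model card; field names mirror the bullet list above
# and are not an official template.
model_card = {
    "model_name": "example-gpai-model",
    "architecture": "decoder-only transformer",
    "training_flops": 3e24,   # below the 1e25 systemic-risk presumption
    "data_sources": ["licensed corpora", "opt-out-filtered web crawl"],
    "evaluation": {"benchmark_suite": "internal evals"},
    "known_limitations": ["hallucination", "English-centric output"],
}

def has_systemic_risk(card: dict, threshold: float = 1e25) -> bool:
    # The Act presumes systemic risk above ~1e25 training FLOPs;
    # AI Office designation can also trigger it regardless of FLOPs.
    return card["training_flops"] >= threshold
```

A check like this only covers the compute presumption; designation by the AI Office must be tracked separately.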

6. Penalty Structure: What Non-Compliance Costs

The EU AI Act introduces one of the strictest penalty regimes in technology regulation. Understanding the fine structure is essential for building a business case for compliance investment.

| Violation Type | Maximum Fine |
| --- | --- |
| Prohibited AI practices (Art. 5) | Up to €35M or 7% of global annual turnover |
| High-risk system requirements, GPAI obligations | Up to €15M or 3% of global annual turnover |
| Incorrect information to authorities | Up to €7.5M or 1% of global annual turnover |

For SMEs and startups, fines are capped at the lower of the two thresholds. National authorities also consider proportionality and economic viability when setting fine amounts.
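The "fixed amount or percentage of turnover" structure can be expressed directly. This sketch encodes the table above, taking the higher of the two values for large firms and the lower for SMEs, as described; actual fines are set by national authorities case by case.

```python
def max_fine(violation: str, global_turnover_eur: float,
             is_sme: bool = False) -> float:
    """Theoretical maximum fine in EUR for a violation class."""
    # Fixed cap (EUR) and turnover percentage, per the table above.
    caps = {
        "prohibited": (35_000_000, 0.07),
        "high_risk_or_gpai": (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.01),
    }
    fixed, pct = caps[violation]
    turnover_based = pct * global_turnover_eur
    # Large firms face whichever is higher; SMEs whichever is lower.
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)
```

For example, a large firm with €1B global turnover faces up to €70M (7% exceeds the €35M cap) for a prohibited-practice violation, while an SME with €100M turnover would be capped at €7M.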

7. Ongoing Monitoring and Governance

EU AI Act compliance is not a one-time project. The regulation requires continuous monitoring, incident response, and organizational governance throughout the AI system's entire lifecycle.

  • Post-Market Monitoring (Art. 72): Implement proportionate monitoring systems to collect and analyze data on AI performance in production. Track accuracy drift, bias emergence, and unexpected behaviors.
  • Serious Incident Reporting (Art. 73): Establish procedures to report serious incidents to market surveillance authorities within 15 days of becoming aware.
  • AI Literacy Training (Art. 4): Ensure all staff involved in AI development and deployment have sufficient AI literacy and training appropriate to their role.
  • Fundamental Rights Impact Assessment (Art. 27): Deployers of high-risk AI must conduct and document an assessment of impact on fundamental rights before putting the system into use.
  • Regular Compliance Audits: Schedule periodic reviews using automated tools to ensure continued compliance as your AI systems evolve and regulatory guidance is updated.
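The accuracy-drift tracking mentioned under post-market monitoring can start very simply: compare a rolling window of production accuracy against the level declared at conformity assessment. The tolerance value below is an arbitrary illustrative threshold, not one set by the regulation.

```python
def accuracy_drift(baseline_acc: float, window_accs: list[float],
                   tolerance: float = 0.05) -> dict:
    """Flag drift when rolling production accuracy falls more than
    `tolerance` below the declared baseline. Threshold is illustrative."""
    current = sum(window_accs) / len(window_accs)
    return {
        "current": current,
        "drifted": (baseline_acc - current) > tolerance,
    }
```

A drift flag from a check like this would feed back into the Article 9 risk management process, and, if it causes a serious incident, into the Article 73 reporting procedure.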

Quick-Start Compliance Checklist

If you are just beginning your compliance journey, here are the five highest-priority actions to take immediately:

  1. Inventory all AI systems your organization develops, deploys, or uses, including third-party AI services and embedded AI features.
  2. Classify each system by risk level using the regulation's four-tier framework. Use the interactive compliance checklist to track progress.
  3. Scan your public-facing AI systems for transparency and disclosure compliance gaps using an automated compliance scanner.
  4. Generate required documentation using standardized compliance templates covering risk management, technical documentation, and conformity declarations.
  5. Establish governance procedures for ongoing monitoring, incident reporting, and periodic auditing. Complete the audit readiness questionnaire to evaluate your organizational preparedness.

How the EU AI Act Interacts with GDPR

If your AI systems process personal data, you face overlapping obligations from both GDPR and the EU AI Act. Key areas of overlap include transparency requirements (both regulations require informing users about automated processing), impact assessments (GDPR's Data Protection Impact Assessment and the AI Act's risk assessment cover similar ground), and documentation obligations.

Organizations already GDPR-compliant have a significant head start, but the AI Act introduces entirely new requirements around risk classification, conformity assessment, and AI-specific documentation that go beyond data protection. Read our detailed GDPR vs EU AI Act comparison for a thorough analysis of how the two regulations interact.

Start Your Compliance Journey Today

The August 2026 deadline is approaching fast. CompliPilot runs 200+ automated compliance checks against EU AI Act, GDPR, and data protection requirements, giving you a prioritized action plan with specific fix recommendations.