Key Takeaways
- The EU AI Act applies extraterritorially to US companies whose AI systems produce outputs used within the EU, even with zero physical presence in Europe. One EU-based user can trigger full compliance obligations.
- Fines reach up to €35 million or 7% of global annual turnover (the higher amount), making the EU AI Act the most aggressive AI enforcement regime on the planet.
- The critical compliance deadline is August 2, 2026, when high-risk AI system requirements become enforceable. Prohibited AI practices have been banned since February 2025, and general-purpose AI transparency rules have been active since August 2025.
- US companies must classify every AI system by risk tier (prohibited, high-risk, limited-risk, minimal-risk), determine their role (provider vs. deployer), and build a quality management system with technical documentation, conformity assessments, and ongoing monitoring.
- This EU AI Act compliance checklist gives you a structured, actionable framework mapped to NIST AI RMF functions and ISO 31000 risk management principles so you can integrate EU compliance into your existing governance without building a parallel universe.
- Early movers gain contract advantages and competitive positioning. Companies that wait until mid-2026 face a scramble that increases both cost and enforcement exposure.
Why US Companies Need an EU AI Act Compliance Checklist
If your company builds, sells, or deploys AI systems and any of those systems produce outputs that reach someone in the European Union, the EU AI Act applies to you. Full stop. Your headquarters location offers zero protection. The Act’s jurisdictional trigger under Article 2 is where the system’s output is used, not where your company is domiciled.
The Act follows the same extraterritorial playbook that made GDPR a global standard. Under Article 2, any provider that places an AI system on the EU market, and any provider or deployer whose AI system output is used within the EU, falls within scope. US companies shipping AI-powered recruiting tools, credit-scoring engines, customer service chatbots, or performance monitors to EU customers are directly exposed.
The financial stakes are substantial. Prohibited AI practices carry fines of up to €35 million or 7% of global annual turnover.
Most high-risk violations are capped at €15 million or 3% of global annual turnover. Even supplying misleading information to authorities carries fines of up to €7.5 million or 1%. And those are per-violation penalties.
Beyond fines, authorities can order corrective actions, restrict or withdraw non-compliant systems from the EU market, and require public disclosures.
A market ban on your AI product in Europe is often more damaging than the fine itself. The risk assessment calculus is clear: compliance cost is a fraction of enforcement exposure.
EU AI Act Compliance Timeline: Key Dates US Companies Must Track
The EU AI Act uses a phased enforcement approach. Several deadlines have already passed. Below is the complete timeline with status indicators as of March 2026.
| Date | Milestone | Status | What US Companies Must Do |
|------|-----------|--------|---------------------------|
| August 1, 2024 | EU AI Act enters into force | PASSED | Begin awareness, governance planning, and AI system inventory |
| February 2, 2025 | Prohibited AI practices banned; AI literacy obligation begins | PASSED — ACTIVE | Confirm no prohibited practices in your portfolio; launch AI literacy training across the organization |
| August 2, 2025 | GPAI transparency obligations; governance infrastructure (notified bodies, conformity system) operational | PASSED — ACTIVE | Comply with general-purpose AI documentation and transparency rules; provide downstream technical information |
| August 2, 2026 | Full enforcement: Annex III high-risk AI systems; conformity assessments; CE marking; EU database registration | UPCOMING — 5 MONTHS | Complete conformity assessments, finalize technical documentation, affix CE marking, register in EU database, deploy monitoring |
| August 2, 2027 | Legacy high-risk systems in regulated products; pre-August 2025 GPAI models must comply | UPCOMING | Bring all legacy and grandfathered systems into full compliance |
Important note: The European Commission proposed a ‘Digital Omnibus’ package in late 2025 that could postpone some Annex III high-risk obligations to December 2027.
Prudent risk management demands you treat August 2026 as the binding deadline until a formal extension is enacted.
EU AI Act Risk Classification System: Where Do Your AI Systems Land?
The EU AI Act operates on a four-tier, risk-based classification system. Your compliance obligations depend entirely on where each AI system falls within this pyramid. Getting classification wrong means applying too few controls (enforcement exposure) or too many (wasted resources).
| Risk Tier | Description | Examples Relevant to US Companies | Key Obligations |
|-----------|-------------|-----------------------------------|-----------------|
| Unacceptable Risk (Prohibited) | AI practices banned outright as threats to fundamental rights | Social scoring; subliminal manipulation causing harm; exploitation of age/disability vulnerabilities; emotion recognition in workplaces/schools; real-time remote biometric ID in public spaces (with narrow law enforcement exceptions); untargeted facial recognition database scraping | STOP immediately. These practices have been banned since February 2025. No compliance pathway exists — only cessation. |
| High Risk (Annex III) | AI systems deployed in sensitive domains with significant impact on individuals | AI-powered hiring/recruiting tools; credit scoring and lending decisions; insurance pricing; educational assessment and admissions; worker management and performance evaluation; law enforcement risk assessment; migration and border control systems; critical infrastructure management | Conformity assessment; technical documentation; risk management system; data governance; human oversight; accuracy/robustness/cybersecurity; logging; transparency to users; EU database registration; CE marking; post-market monitoring; serious incident reporting |
| Limited Risk | AI systems that interact with individuals or generate content for them | Customer service chatbots; AI-generated marketing content; deepfake generators; emotion recognition (non-prohibited contexts); biometric categorization (non-prohibited contexts) | Transparency obligations: clearly disclose that individuals are interacting with AI; label AI-generated content; mark deepfakes |
| Minimal Risk | AI systems with negligible risk to rights and safety | AI-powered spam filters; AI in video games; inventory management optimization; internal analytics dashboards | No specific EU AI Act obligations (general laws like GDPR still apply) |
The critical action here: map every AI system in your organization to a risk tier. Document the classification rationale with evidence.
If a system’s classification is ambiguous, the prudent approach is to classify upward and apply the stricter controls. This mirrors the conservative bias you would apply in any scenario-based risk assessment.
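A minimal sketch of that classify-upward rule in Python, assuming tiers can be modeled as an ordered enum (the tier names follow the Act; everything else is illustrative):

```python
from enum import IntEnum

class RiskTier(IntEnum):
    """EU AI Act tiers, ordered so a higher value means stricter controls."""
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    PROHIBITED = 4

def resolve_tier(candidate_tiers: list[RiskTier]) -> RiskTier:
    """When reviewers disagree or a system straddles a tier boundary,
    take the strictest candidate tier (the conservative bias above)."""
    return max(candidate_tiers)

# A chatbot that also feeds hiring recommendations: the stricter tier wins.
print(resolve_tier([RiskTier.LIMITED, RiskTier.HIGH]).name)  # HIGH
```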
Provider vs. Deployer: Determining Your Role Under the EU AI Act
Your obligations under the EU AI Act depend heavily on your role in the AI value chain. The two primary roles are Provider and Deployer, and each carries distinct requirements.
| Dimension | Provider (Developer) | Deployer (User) |
|-----------|----------------------|-----------------|
| Definition | Develops an AI system or has one developed and places that system on the market under their own name or trademark | Uses an AI system under their authority in a professional capacity (not personal use) |
| Typical US Company Profile | SaaS companies selling AI-powered tools; AI platform providers; companies building custom AI solutions shipped to EU clients | US enterprises using third-party AI tools (hiring software, analytics platforms, credit scoring APIs) that affect EU individuals |
| Core Obligations (High-Risk) | Quality management system; technical documentation; conformity assessment; CE marking; EU database registration; post-market monitoring; serious incident reporting | Follow provider instructions; ensure representative input data; assign human oversight; monitor operations; retain logs 6+ months; report serious incidents; inform affected individuals |
| Authorized Representative | Must appoint an EU-based authorized representative if no EU establishment | Generally not required, but must cooperate with authorities |
| Penalty Exposure | Up to €35M / 7% turnover (prohibited); €15M / 3% (high-risk violations) | Same penalty framework applies based on violation type |
A common trap: If a US company significantly modifies a third-party AI system or puts a system on the EU market under their own name, they can be reclassified from deployer to provider, inheriting the full provider obligation set. Review your compliance risk indicators to catch these role-shift triggers early.
The EU AI Act Compliance Checklist: 10 Essential Steps
This is the core of what you came here to find. The following EU AI Act compliance checklist is organized into 10 action areas that cover the full scope of compliance requirements. Each step maps to NIST AI RMF functions and ISO 31000 principles so you can plug these into your existing governance framework.
Step 1: AI System Inventory and Classification
Map every AI system in production and development. Document the intended purpose, output types, EU exposure (do outputs reach EU users or decisions about EU individuals?), and data sources. Classify each system by risk tier using the four-tier framework above.
Produce a risk register with evidence supporting each classification. Flag any system that sits near a tier boundary. Complete this inventory within four weeks with legal and engineering sign-off.
Tools: Use your existing risk register framework, extending columns to capture AI-specific metadata.
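As one way to do that, here is a hypothetical register row sketched as a Python dataclass; the field names are illustrative, not mandated by the Act:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in the AI system inventory; extends a generic risk register
    with the AI-specific metadata Step 1 calls for."""
    system_name: str
    intended_purpose: str
    output_types: list[str]
    eu_exposure: bool              # do outputs reach EU users or decisions about EU individuals?
    data_sources: list[str]
    risk_tier: str                 # "prohibited" | "high" | "limited" | "minimal"
    classification_rationale: str  # evidence supporting the tier, for audit
    near_tier_boundary: bool = False  # flag ambiguous cases for legal review

inventory = [
    AISystemRecord(
        system_name="resume-screener",
        intended_purpose="Rank job applicants for recruiter review",
        output_types=["candidate score"],
        eu_exposure=True,
        data_sources=["applicant CVs", "historical hiring outcomes"],
        risk_tier="high",
        classification_rationale="Annex III employment use case",
    ),
]
```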
Step 2: Role Determination (Provider vs. Deployer)
Decide provider vs. deployer status per system. This is not a company-level determination — a single company can be a provider on some systems and a deployer on others. Document the rationale.
Update contracts with sub-processors and EU customers to reflect role allocations. Align this with existing GDPR controller/processor structures to avoid duplicated governance.
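A deliberately simplified sketch of a per-system role check, with the role-shift triggers described in the previous section reduced to booleans; treat this as a screening aid, not legal analysis:

```python
def determine_role(developed_in_house: bool,
                   marketed_under_own_name: bool,
                   substantially_modified: bool) -> str:
    """Per-system role determination. Any of these triggers pushes a
    company into the provider obligation set; the boolean reduction
    is an illustrative simplification of the legal test."""
    if developed_in_house or marketed_under_own_name or substantially_modified:
        return "provider"
    return "deployer"

# A US firm that white-labels a vendor's scoring model becomes a provider:
print(determine_role(False, True, False))  # provider
```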
Step 3: Prohibited Practices Screening
This should already be done — prohibited practices have been banned since February 2025. Run a formal screen across your entire AI portfolio to confirm zero exposure to: subliminal manipulation, exploitation of vulnerable groups, social scoring, predictive policing by profiling, emotion recognition in workplaces/schools, untargeted facial recognition scraping, and real-time remote biometric identification in public spaces. Document the screening results. If any system is flagged, cease operations immediately and engage legal counsel.
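A minimal screening sketch against that list, assuming each system declares its capabilities as tags (the tag names are illustrative shorthand):

```python
PROHIBITED_PRACTICES = {
    "subliminal_manipulation",
    "exploitation_of_vulnerable_groups",
    "social_scoring",
    "predictive_policing_by_profiling",
    "workplace_or_school_emotion_recognition",
    "untargeted_facial_recognition_scraping",
    "realtime_remote_biometric_id_public",
}

def screen_system(system_name: str, declared_capabilities: set[str]) -> list[str]:
    """Return any prohibited practices a system's declared capabilities hit.
    A non-empty result means stop the system and engage counsel."""
    hits = sorted(declared_capabilities & PROHIBITED_PRACTICES)
    print(f"{system_name}: {'FLAGGED ' + str(hits) if hits else 'clear'}")
    return hits

screen_system("hr-sentiment-monitor", {"workplace_or_school_emotion_recognition"})
```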
Step 4: AI Literacy Program
The EU AI Act requires all providers and deployers to ensure sufficient AI literacy among staff. This obligation has been active since February 2025.
Implement training covering: what AI systems your organization uses, the risks those systems present, the EU AI Act’s requirements relevant to each employee’s role, and how to exercise human oversight effectively. Track completion rates as a compliance KRI.
Step 5: Risk Management System (High-Risk Systems)
High-risk AI systems must operate within a documented risk management system that runs throughout the entire AI lifecycle.
This system must: identify and analyze known and foreseeable risks; estimate and evaluate risks that may emerge during intended use and reasonably foreseeable misuse; evaluate risks based on post-market monitoring data; and adopt suitable risk management measures.
This aligns directly with the ISO 31000 risk assessment process you already know: identify, analyze, evaluate, treat, monitor.
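A compact sketch of one lifecycle pass, assuming a standard 5x5 likelihood-impact scale and an illustrative risk appetite; the key point is that post-market monitoring findings feed the next identify step:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int   # 1-5 ordinal scale, as in a standard risk matrix
    impact: int       # 1-5
    treatment: str = "none"

def exceeds_appetite(risk: Risk, appetite: int = 9) -> bool:
    """Evaluate step: compare the analyzed score against risk appetite."""
    return risk.likelihood * risk.impact > appetite

def lifecycle_pass(risks: list[Risk], monitoring_findings: list[str]) -> None:
    """One iteration of identify -> analyze -> evaluate -> treat -> monitor."""
    for finding in monitoring_findings:          # identify (from post-market data)
        risks.append(Risk(finding, likelihood=3, impact=3))
    for risk in risks:
        if exceeds_appetite(risk):               # evaluate against appetite
            risk.treatment = "mitigate"          # treat; in practice, pick from a control catalog
    # monitor: the next pass repeats with fresh post-market findings

risks = [Risk("biased ranking of applicants over 50", likelihood=4, impact=4)]
lifecycle_pass(risks, monitoring_findings=["accuracy drift on non-English CVs"])
print([(r.description, r.treatment) for r in risks])
```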
Step 6: Data Governance and Bias Management
Training, validation, and testing datasets must meet quality criteria. You must examine data collection processes, assess data gaps and shortcomings, establish data preparation protocols (annotation, labeling, cleaning, enrichment), and identify potential biases.
The Act explicitly requires that training datasets be representative and as free from errors as possible. Build bias testing into your CI/CD pipeline and track fairness metrics using your KRI dashboard.
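As an illustration, here is a minimal CI gate built on the adverse impact ratio (the four-fifths rule from US employment practice); the Act does not mandate this specific metric or threshold, so treat both as assumptions:

```python
def selection_rate(outcomes: list[int]) -> float:
    """Share of positive outcomes (1 = selected/approved) for a group."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower selection rate to the higher one.
    Values below ~0.8 (the four-fifths rule) are a common red flag."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def ci_bias_gate(group_a: list[int], group_b: list[int], threshold: float = 0.8) -> None:
    """Fail the pipeline when the ratio breaches the threshold."""
    ratio = adverse_impact_ratio(group_a, group_b)
    assert ratio >= threshold, f"Bias gate failed: adverse impact ratio {ratio:.2f} < {threshold}"

# 50% vs 20% selection rates -> ratio 0.4, so the gate fails:
try:
    ci_bias_gate([1, 0, 1, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0, 1])
except AssertionError as err:
    print(err)
```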
Step 7: Technical Documentation and Conformity Assessment
Providers of high-risk AI systems must produce comprehensive technical documentation before placing the system on the EU market.
The documentation must cover: system description and intended purpose; design specifications and development methodology; data governance and training procedures; performance metrics and accuracy levels;
risk management measures; human oversight provisions; and cybersecurity specifications. Complete a conformity assessment (self-assessment or third-party, depending on the domain), issue an EU declaration of conformity, and affix the CE marking.
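A simple completeness check against that section list; the section identifiers are illustrative shorthand for the documentation items named above:

```python
REQUIRED_SECTIONS = {
    "system_description_and_intended_purpose",
    "design_specifications_and_methodology",
    "data_governance_and_training_procedures",
    "performance_metrics_and_accuracy",
    "risk_management_measures",
    "human_oversight_provisions",
    "cybersecurity_specifications",
}

def doc_gaps(package_sections: set[str]) -> set[str]:
    """Return required sections missing from a technical documentation package."""
    return REQUIRED_SECTIONS - package_sections

missing = doc_gaps({"system_description_and_intended_purpose",
                    "risk_management_measures"})
print(f"{len(missing)} sections still to draft:", sorted(missing))
```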
Step 8: Human Oversight Mechanisms
High-risk AI systems must be designed to allow effective human oversight. Individuals assigned oversight must: fully understand system capabilities and limitations; be able to correctly interpret outputs; be able to decide not to use the system or disregard its output; and be able to intervene or stop the system.
Document these mechanisms and train oversight personnel. Track oversight intervention rates as a KRI — zero interventions over extended periods may signal rubber-stamping rather than genuine oversight.
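A minimal sketch of that intervention-rate KRI, assuming override counts are logged per day; the 30-day window and the amber response are illustrative:

```python
from datetime import date, timedelta

def intervention_rate(decisions: int, overrides: int) -> float:
    """Share of high-risk system decisions where the human overseer intervened."""
    return overrides / decisions if decisions else 0.0

def rubber_stamp_check(daily_overrides: dict[date, int], window_days: int = 30) -> str:
    """Flag amber when a full window passes with zero interventions --
    possible rubber-stamping rather than genuine oversight."""
    cutoff = max(daily_overrides) - timedelta(days=window_days)
    recent = sum(n for day, n in daily_overrides.items() if day > cutoff)
    return "AMBER: review oversight design" if recent == 0 else "OK"

# 31 consecutive days of zero overrides trips the amber flag:
history = {date(2026, 3, 1) + timedelta(days=i): 0 for i in range(31)}
print(rubber_stamp_check(history))
```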
Step 9: EU Database Registration and Authorized Representative
Providers (and certain deployers) of high-risk AI systems must register in the EU database before placing the system on the market. US companies with no EU establishment must appoint an authorized representative in the EU.
This representative acts as the point of contact with supervisory authorities and can be subject to enforcement actions. Select a representative with AI regulatory expertise, not just a registered agent. Align this with your existing GDPR representative structure where possible.
Step 10: Post-Market Monitoring and Incident Reporting
Compliance does not end at deployment. Providers must establish and document a post-market monitoring system proportionate to the nature of the AI system and its risks.
Serious incidents must be reported to the relevant market surveillance authority. Deployers must retain system-generated logs for at least six months.
Build continuous monitoring into your enterprise risk management framework with automated alerting when KRI thresholds breach.
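A small sketch of two of those controls, a log-retention check and a reporting-deadline tracker. The deadline_days value is a placeholder: the Act sets different reporting clocks by incident severity, so confirm the applicable timeframe with counsel.

```python
from datetime import datetime, timedelta, timezone

LOG_RETENTION = timedelta(days=183)  # "at least six months"; confirm exact policy with legal

def retention_ok(oldest_log: datetime, now: datetime) -> bool:
    """Confirm retained logs span at least the minimum window
    (assumes the system has been live that long)."""
    return now - oldest_log >= LOG_RETENTION

def incident_report_due(detected_at: datetime, deadline_days: int = 15) -> datetime:
    """Reporting deadline tracker; deadline_days is an assumed default,
    since the applicable clock depends on incident severity."""
    return detected_at + timedelta(days=deadline_days)

now = datetime(2026, 3, 2, tzinfo=timezone.utc)
print(retention_ok(datetime(2025, 8, 1, tzinfo=timezone.utc), now))  # True
print(incident_report_due(now))                                      # 2026-03-17
```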
General-Purpose AI (GPAI) Model Requirements
General-purpose AI models — think large language models and foundation models — carry their own obligation set that has been active since August 2025. The July 2025 GPAI Code of Practice provides the operational framework.
| Requirement | All GPAI Models | GPAI with Systemic Risk (Additional) |
|-------------|-----------------|--------------------------------------|
| Technical Documentation | Maintain documentation covering model architecture, training procedures, and performance characteristics, to be provided to the EU AI Office on request | Enhanced documentation including model evaluation results and adversarial testing outcomes |
| Downstream Provider Support | Furnish technical information enabling developers building on the model to comply with their own AI Act obligations | Same, plus detailed risk profile information |
| Copyright Compliance | Implement policies respecting EU copyright law; identify any rights reservations under the Copyright Directive | Same requirements |
| Systemic Risk Assessment | Not required | Perform model evaluations including adversarial testing to identify and mitigate systemic risks |
| Incident Reporting | Not required | Report serious incidents to the EU AI Office without undue delay |
| Cybersecurity | Not required | Ensure adequate cybersecurity protections proportionate to the model’s risk profile |
US companies developing or deploying foundation models should evaluate the GPAI Code of Practice published in July 2025 and determine their compliance pathway.
The Code provides a voluntary compliance mechanism, but non-compliance with the underlying GPAI requirements carries fines of up to €15 million or 3% of global turnover.
EU AI Act vs. US AI Regulation: A Comparative Framework
US companies need to understand how the EU AI Act sits alongside existing US frameworks. The good news: significant overlap exists, meaning compliance investments serve multiple jurisdictions.
| Dimension | EU AI Act | NIST AI RMF (US) | US State Laws (NYC LL144, CO SB24-205, etc.) |
|-----------|-----------|------------------|----------------------------------------------|
| Legal Status | Binding regulation with enforcement penalties | Voluntary framework (but increasingly referenced by regulators) | Binding laws with varying scope and enforcement mechanisms |
| Scope | All AI systems affecting EU individuals; extraterritorial | Recommended practice; no jurisdictional mandate | Varies: employment AI (NYC), high-risk consumer decisions (CO), sector-specific |
| Risk Classification | Four tiers: prohibited, high-risk, limited, minimal | Context-dependent; no fixed tiers | Varies by law; generally focus on high-impact decisions |
| Bias/Fairness | Mandatory data governance and bias testing on high-risk systems | Recommended: fairness testing, stakeholder engagement, bias monitoring | NYC: mandatory annual bias audit; CO: disclosure and impact assessment |
| Documentation | Mandatory technical documentation, conformity assessment, CE marking | Recommended: model cards, datasheets, risk documentation | Varies: NYC requires public audit summary; others vary |
| Penalties | Up to €35M / 7% global turnover | None (voluntary) | Varies: NYC up to $1,500/violation/day; CO private right of action |
| Human Oversight | Mandatory design requirements enabling effective human oversight | Recommended principle | Generally required disclosure of AI use in decision-making |
The strategic play: build your compliance program anchored to the EU AI Act (the strictest standard) and NIST AI RMF (the most structured voluntary framework).
This combination will satisfy or substantially address most current and emerging US state requirements.
Layer in sector-specific US obligations (EEOC, CFPB, FTC guidance) as needed. This is the same approach smart organizations used with GDPR — comply with the strictest regime, and everywhere else becomes easier.
Key Risk Indicators to Monitor EU AI Act Compliance
Compliance without monitoring is a snapshot, not a program. The following KRIs should be integrated into your existing compliance KRI dashboard to provide continuous visibility into your EU AI Act compliance posture.
| KRI | Measurement | Threshold (Example) | Escalation Path |
|-----|-------------|---------------------|-----------------|
| AI System Classification Coverage | % of AI systems in inventory classified by risk tier | < 100% = Amber; < 80% = Red | Amber: Expedite reviews; Red: CRO briefing |
| Conformity Assessment Completion Rate | % of high-risk systems with completed conformity assessment | < 100% by Aug 2026 = Red | Red: Immediate project escalation to GC + CTO |
| AI Literacy Training Completion | % of relevant employees who completed AI literacy training | < 90% = Amber; < 75% = Red | Amber: HR escalation; Red: Board reporting |
| Bias Testing Coverage | % of high-risk systems with current bias test results | < 100% = Amber; < 80% = Red | Amber: Data science sprint; Red: Deployment pause |
| Post-Market Monitoring Alert Rate | Number of monitoring alerts per system per quarter | > 10 = Amber; > 25 = Red | Amber: Model review; Red: Revalidation |
| Serious Incident Reporting Timeliness | % of serious incidents reported within required timeframe | < 100% = Red | Red: Immediate process review and GC notification |
| Third-Party AI Vendor Compliance Rate | % of third-party AI vendors with documented EU AI Act compliance | < 100% = Amber; < 70% = Red | Amber: Vendor engagement; Red: Contract review |
| Human Oversight Intervention Rate | Frequency of human overrides on high-risk system decisions | 0% over 30 days = Amber (potential rubber-stamping) | Amber: Oversight design review |
These KRIs complement your broader regulatory compliance KRI framework. Report them monthly to the AI compliance owner and quarterly to the board alongside your financial risk indicators.
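A minimal evaluator for the coverage-style KRIs in the table, where higher values are better; the thresholds below are the illustrative ones shown above:

```python
def kri_status(value: float, amber_below: float | None = None,
               red_below: float | None = None) -> str:
    """Map a coverage-style KRI (higher is better) to a RAG status,
    mirroring the threshold column in the table above."""
    if red_below is not None and value < red_below:
        return "RED"
    if amber_below is not None and value < amber_below:
        return "AMBER"
    return "GREEN"

# Example thresholds lifted from the table (percentages as 0-100):
print(kri_status(92, amber_below=100, red_below=80))  # classification coverage -> AMBER
print(kri_status(72, amber_below=90, red_below=75))   # AI literacy training -> RED
```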
90-Day EU AI Act Compliance Roadmap
With the August 2026 deadline approximately five months away, the following roadmap gives you a structured sprint path from awareness to operational readiness.
| Phase | Timeline | Key Activities | Deliverables |
|-------|----------|----------------|--------------|
| Phase 1: Discovery and Classification | Days 1–30 | Complete AI system inventory across all business units. Classify each system by risk tier. Determine provider/deployer role per system. Screen all systems against prohibited practices list. Identify EU touchpoints (users, data subjects, output recipients). Appoint AI compliance owner. Engage authorized representative in EU. | AI System Inventory Register; Risk Classification Matrix; Role Determination Log; Prohibited Practices Screening Report; Authorized Representative Agreement |
| Phase 2: Gap Analysis and Documentation | Days 31–60 | Run gap analysis against Section 2 requirements on high-risk systems. Draft risk management system documentation. Begin technical documentation packages. Conduct initial bias testing. Deploy AI literacy training. Review and update vendor contracts with EU AI Act clauses. Build KRI dashboard framework. | Gap Analysis Report (per high-risk system); Risk Management System (Draft); Technical Documentation (Draft); Bias Testing Baseline; Training Completion Records; Contract Amendment Templates; KRI Dashboard Wireframe |
| Phase 3: Operationalize and Validate | Days 61–90 | Complete conformity assessments on highest-risk systems. Finalize technical documentation. Register in EU database. Affix CE marking. Deploy post-market monitoring. Establish incident reporting workflow. Launch KRI dashboard with automated feeds. Conduct tabletop exercise testing incident response. Schedule independent audit. | Conformity Assessment Reports; EU Declaration of Conformity; CE Marked Systems; EU Database Registrations; Post-Market Monitoring SOP; Incident Response Playbook; KRI Dashboard (Live); Tabletop Exercise Report; Audit Engagement Letter |
This roadmap follows the same project risk assessment discipline you would apply to any major compliance initiative: clear scope, phased delivery, named owners, and measurable milestones. Track the 90-day plan as a formal project with weekly status reviews.
Common Pitfalls US Companies Make with EU AI Act Compliance
Across engagements advising on AI governance programs, these are the failure patterns that surface most frequently among US organizations approaching the EU AI Act.
- Assuming US Headquarters Means EU Rules Do Not Apply: This is the number-one mistake. The Act’s jurisdictional trigger is system output reaching the EU, not corporate domicile. A SaaS platform with even one EU user generating AI-assisted outputs is in scope. Map your EU exposure before assuming exemption.
- Treating GDPR Compliance as Sufficient: GDPR covers data protection. The EU AI Act covers system safety, transparency, bias, and accountability. Different regulatory objectives, different obligations, different documentation requirements. Your Data Protection Officer cannot own this alone. You need AI-specific governance expertise.
- Waiting Until Guidance Finalizes: The GPAI Code of Practice published in July 2025 filled many remaining gaps. Core obligations have been clear since the Act entered into force. Companies waiting until every guideline is finalized will find themselves scrambling in mid-2026 with no time margin. Start now, refine as guidance evolves.
- Confusing ‘Human in the Loop’ with Human Oversight: The Act requires that humans assigned oversight can genuinely understand, interpret, intervene, and override AI decisions. Having a human technically ‘in the loop’ who rubber-stamps outputs does not satisfy the requirement. Design meaningful oversight mechanisms and track intervention rates.
- Ignoring Third-Party AI Risk: Many US companies deploy pre-built AI models, APIs, and vendor tools without conducting any due diligence on EU AI Act compliance. Roughly 20% of organizations using third-party AI tools do not assess those tools’ risks at all. Extend your third-party risk management to every AI vendor in your supply chain.
- Documentation Without Substance: In an enforcement audit, ‘we do this’ without supporting evidence equals ‘we do not do this.’ Every compliance claim must be backed by documentation: risk assessments, test results, monitoring logs, training records, and incident reports. Build the evidence trail from day one.
Looking Ahead: What US Companies Should Prepare to Address
Enforcement Will Accelerate After August 2026
National competent authorities have been operational since 2025. Post-August 2026, expect enforcement activity to ramp rapidly, similar to the post-GDPR surge.
Early enforcement actions will likely target high-visibility sectors (employment AI, financial services, healthcare) where consumer impact is most tangible and political pressure is highest.
The Brussels Effect Will Spread
Just as GDPR became the de facto global privacy standard, the EU AI Act is already shaping regulatory thinking across the G7, OECD, and individual US states.
Over 65 nations have published national AI strategies, and most are adapting the EU’s risk-based approach rather than creating entirely new frameworks. Building to the EU standard now positions your organization advantageously as other jurisdictions align.
AI Agents and Autonomous Systems Will Require New Controls
The current Act was written primarily with predictive AI and generative AI in mind. Agentic AI systems — where models plan, execute multi-step tasks, and interact with other systems autonomously — introduce governance gaps the regulation has not fully addressed.
Expect supplementary guidance from the EU AI Office. Build flexibility into your compliance framework to accommodate evolving requirements.
The organizations that thrive will be those that treat EU AI Act compliance not as a one-time project but as an ongoing operational capability integrated into their enterprise risk management framework and continuously refined.
Take Action Today
Start with Step 1: inventory every AI system in your organization and classify by risk tier. The EU AI Act compliance checklist above gives you the complete action sequence.
The 90-day roadmap provides the timeline. The KRI framework provides the monitoring mechanism. August 2026 is five months away.
The companies that start now will have compliance locked in before enforcement actions begin. The companies that wait will be playing catch-up at premium cost.
Explore more practitioner frameworks across enterprise risk management, AI governance, and business continuity at riskpublishing.com. Subscribe to receive new articles, templates, and tools delivered to your inbox.
References
Internal Resources (riskpublishing.com):
- A Step-by-Step Guide to Risk Assessment
- Key Risk Indicators Examples
- How to Use a KRI Dashboard
- Compliance Key Risk Indicators Examples
- Financial Key Risk Indicators Examples
- Scenario-Based Risk Assessment
- Eight Steps for Conducting a Project Risk Assessment
- How to Conduct Risk Assessment
- 13 Best Practices for Regulatory Compliance KRI
- Regulatory Compliance Key Risk Indicators
- Best Key Risk Indicators
- Risk Mitigation in Project Management
- NIST Cybersecurity Framework Key Risk Indicators
- Key Risk Indicators for AML and Financial Crime Compliance
- Personnel Risk Assessment
- CRAMM Risk Assessment
External Authoritative Sources:
- EU AI Act Full Text (Regulation 2024/1689)
- EU AI Act Explorer (Article 99: Penalties)
- EU AI Act Implementation Timeline
- NIST AI Risk Management Framework
- NIST AI 600-1: Generative AI Profile
- ISO/IEC 42001:2023 — AI Management System
- ISO 31000:2018 — Risk Management Guidelines
- NYC Local Law 144 — Automated Employment Decision Tools
- European Commission — GPAI Code of Practice
- Orrick — 6 Steps Before August 2026
- Trilateral Research — EU AI Act Compliance Timeline
- National Law Review — Extraterritorial Scope
- Quinn Emanuel — Initial Prohibitions Under EU AI Act

Chris Ekai is a Risk Management expert with over 10 years of experience in the field. He has a Master’s (MSc) degree in Risk Management from the University of Portsmouth and is a CPA and finance professional. He currently works as a Content Manager at Risk Publishing, writing about Enterprise Risk Management, Business Continuity Management, and Project Management.
