EU AI Act Compliance Checklist for US Companies

Written By Chris Ekai

Key Takeaways

  • The EU AI Act applies extraterritorially to US companies whose AI systems produce outputs used within the EU, even with zero physical presence in Europe. One EU-based user can trigger full compliance obligations.
  • Fines reach up to €35 million or 7% of global annual turnover (the higher amount), making the EU AI Act the most aggressive AI enforcement regime on the planet.
  • The critical compliance deadline is August 2, 2026, when high-risk AI system requirements become enforceable. Prohibited AI practices have been banned since February 2025, and general-purpose AI transparency rules have been active since August 2025.
  • US companies must classify every AI system by risk tier (prohibited, high-risk, limited-risk, minimal-risk), determine their role (provider vs. deployer), and build a quality management system with technical documentation, conformity assessments, and ongoing monitoring.
  • This EU AI Act compliance checklist gives you a structured, actionable framework mapped to NIST AI RMF functions and ISO 31000 risk management principles so you can integrate EU compliance into your existing governance without building a parallel universe.
  • Early movers gain contract advantages and competitive positioning. Companies that wait until mid-2026 face a scramble that increases both cost and enforcement exposure.

Why US Companies Need an EU AI Act Compliance Checklist

If your company builds, sells, or deploys AI systems and any of those systems produce outputs that reach someone in the European Union, the EU AI Act applies to you. Full stop. Your headquarters location offers zero protection. The Act's jurisdictional trigger under Article 2 is where the system's output is used, not where your company is incorporated.

The Act follows the same extraterritorial playbook that made GDPR a global standard. Under Article 2, any provider that places an AI system on the EU market, and any provider or deployer whose AI system output is used within the EU, falls within scope. US companies shipping AI-powered recruiting tools, credit-scoring engines, customer service chatbots, or performance monitors to EU customers are directly exposed.

The financial stakes are substantial. Prohibited AI practices carry fines of up to €35 million or 7% of global annual turnover.

Most high-risk violations cap at €15 million or 3%. Even supplying misleading information to authorities can draw fines of up to €7.5 million or 1%. And those are per-violation penalties.

Beyond fines, authorities can order corrective actions, restrict or withdraw non-compliant systems from the EU market, and require public disclosures.

A market ban on your AI product in Europe is often more damaging than the fine itself. The risk assessment calculus is clear: compliance cost is a fraction of enforcement exposure.

EU AI Act Compliance Timeline: Key Dates US Companies Must Track

The EU AI Act uses a phased enforcement approach. Several deadlines have already passed. Below is the complete timeline with status indicators as of March 2026.

| Date | Milestone | Status | What US Companies Must Do |
|---|---|---|---|
| August 1, 2024 | EU AI Act enters into force | PASSED | Begin awareness, governance planning, and AI system inventory |
| February 2, 2025 | Prohibited AI practices banned; AI literacy obligation begins | PASSED — ACTIVE | Confirm no prohibited practices in your portfolio; launch AI literacy training across the organization |
| August 2, 2025 | GPAI transparency obligations; governance infrastructure (notified bodies, conformity system) operational | PASSED — ACTIVE | Comply with general-purpose AI documentation and transparency rules; provide downstream technical information |
| August 2, 2026 | Full enforcement: Annex III high-risk AI systems; conformity assessments; CE marking; EU database registration | UPCOMING — 5 MONTHS | Complete conformity assessments, finalize technical documentation, affix CE marking, register in EU database, deploy monitoring |
| August 2, 2027 | Legacy high-risk systems in regulated products; pre-August 2025 GPAI models must comply | UPCOMING | Bring all legacy and grandfathered systems into full compliance |

Important note: The European Commission proposed a ‘Digital Omnibus’ package in late 2025 that could postpone some Annex III high-risk obligations to December 2027.

Prudent risk management demands you treat August 2026 as the binding deadline until a formal extension is enacted.

EU AI Act Risk Classification System: Where Do Your AI Systems Land?

The EU AI Act operates on a four-tier, risk-based classification system. Your compliance obligations depend entirely on where each AI system falls within this pyramid. Getting classification wrong means applying too few controls (enforcement exposure) or too many (wasted resources).

| Risk Tier | Description | Examples Relevant to US Companies | Key Obligations |
|---|---|---|---|
| Unacceptable Risk (Prohibited) | AI practices banned outright as threats to fundamental rights | Social scoring; subliminal manipulation causing harm; exploitation of age/disability vulnerabilities; emotion recognition in workplaces/schools; real-time remote biometric ID in public spaces (with narrow law enforcement exceptions); untargeted facial recognition database scraping | STOP immediately. These practices have been banned since February 2025. No compliance pathway exists — only cessation. |
| High Risk (Annex III) | AI systems deployed in sensitive domains with significant impact on individuals | AI-powered hiring/recruiting tools; credit scoring and lending decisions; insurance pricing; educational assessment and admissions; worker management and performance evaluation; law enforcement risk assessment; migration and border control systems; critical infrastructure management | Conformity assessment; technical documentation; risk management system; data governance; human oversight; accuracy/robustness/cybersecurity; logging; transparency to users; EU database registration; CE marking; post-market monitoring; serious incident reporting |
| Limited Risk | AI systems that interact with individuals or generate content for them | Customer service chatbots; AI-generated marketing content; deepfake generators; emotion recognition (non-prohibited contexts); biometric categorization (non-prohibited contexts) | Transparency obligations: clearly disclose that individuals are interacting with AI; label AI-generated content; mark deepfakes |
| Minimal Risk | AI systems with negligible risk to rights and safety | AI-powered spam filters; AI in video games; inventory management optimization; internal analytics dashboards | No specific EU AI Act obligations (general laws like GDPR still apply) |

The critical action here: map every AI system in your organization to a risk tier. Document the classification rationale with evidence.

If a system’s classification is ambiguous, the prudent approach is to classify upward and apply the stricter controls. This mirrors the conservative bias you would apply in any scenario-based risk assessment.

Provider vs. Deployer: Determining Your Role Under the EU AI Act

Your obligations under the EU AI Act depend heavily on your role in the AI value chain. The two primary roles are Provider and Deployer, and each carries distinct requirements.

| Dimension | Provider (Developer) | Deployer (User) |
|---|---|---|
| Definition | Develops an AI system or has one developed and places that system on the market under their own name or trademark | Uses an AI system under their authority in a professional capacity (not personal use) |
| Typical US Company Profile | SaaS companies selling AI-powered tools; AI platform providers; companies building custom AI solutions shipped to EU clients | US enterprises using third-party AI tools (hiring software, analytics platforms, credit scoring APIs) that affect EU individuals |
| Core Obligations (High-Risk) | Quality management system; technical documentation; conformity assessment; CE marking; EU database registration; post-market monitoring; serious incident reporting | Follow provider instructions; ensure representative input data; assign human oversight; monitor operations; retain logs 6+ months; report serious incidents; inform affected individuals |
| Authorized Representative | Must appoint an EU-based authorized representative if no EU establishment | Generally not required, but must cooperate with authorities |
| Penalty Exposure | Up to €35M / 7% turnover (prohibited); €15M / 3% (high-risk violations) | Same penalty framework applies based on violation type |

A common trap: If a US company significantly modifies a third-party AI system or puts a system on the EU market under its own name, it can be reclassified from deployer to provider, inheriting the full provider obligation set. Review your compliance risk indicators to catch these role-shift triggers early.

The EU AI Act Compliance Checklist: 10 Essential Steps

This is the core of what you came here to find. The following EU AI Act compliance checklist is organized into 10 action areas that cover the full scope of compliance requirements. Each step maps to NIST AI RMF functions and ISO 31000 principles so you can plug these into your existing governance framework.

Step 1: AI System Inventory and Classification

Map every AI system in production and development. Document the intended purpose, output types, EU exposure (do outputs reach EU users or decisions about EU individuals?), and data sources. Classify each system by risk tier using the four-tier framework above.

Produce a risk register with evidence supporting each classification. Flag any system that sits near a tier boundary. Complete this inventory within four weeks with legal and engineering sign-off.

Tools: Use your existing risk register framework, extending columns to capture AI-specific metadata.
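The register entry and the conservative "classify upward" rule from Step 1 can be sketched as a simple data structure. This is an illustrative Python sketch, not a prescribed schema: the field names, the `RiskTier` ordering, and the example system are all assumptions for demonstration.

```python
from dataclasses import dataclass
from enum import IntEnum

# Risk tiers ordered so that a higher value means stricter obligations.
class RiskTier(IntEnum):
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    PROHIBITED = 4

@dataclass
class AISystemRecord:
    """One row in the AI system risk register (illustrative fields only)."""
    name: str
    intended_purpose: str
    eu_exposure: bool                 # do outputs reach EU users or decisions about EU individuals?
    candidate_tiers: list[RiskTier]   # tiers the classification evidence could support
    classification_rationale: str
    role: str                         # "provider" or "deployer", determined per system

    @property
    def assigned_tier(self) -> RiskTier:
        # Conservative bias: if the evidence supports more than one tier,
        # classify upward and apply the stricter controls.
        return max(self.candidate_tiers)

    @property
    def boundary_case(self) -> bool:
        # Flag systems sitting near a tier boundary for legal review.
        return len(set(self.candidate_tiers)) > 1

# Hypothetical entry: a hiring tool the vendor calls "limited risk" but
# that lands in an Annex III employment use case.
register = [
    AISystemRecord(
        name="resume-screener",
        intended_purpose="rank job applicants",
        eu_exposure=True,
        candidate_tiers=[RiskTier.LIMITED, RiskTier.HIGH],
        classification_rationale="Annex III employment use case; vendor claims limited risk",
        role="deployer",
    ),
]

for rec in register:
    print(rec.name, rec.assigned_tier.name, "boundary_case:", rec.boundary_case)
    # → resume-screener HIGH boundary_case: True
```

Treating the tier as a derived property, rather than a hand-entered value, means the conservative bias is enforced mechanically rather than left to each reviewer's judgment.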

Step 2: Role Determination (Provider vs. Deployer)

Decide provider vs. deployer status per system. This is not a company-level determination — a single company can be a provider on some systems and a deployer on others. Document the rationale.

Update contracts with sub-processors and EU customers to reflect role allocations. Align this with existing GDPR controller/processor structures to avoid duplicated governance.

Step 3: Prohibited Practices Screening

This should already be done — prohibited practices have been banned since February 2025. Run a formal screen across your entire AI portfolio to confirm zero exposure to: subliminal manipulation, exploitation of vulnerable groups, social scoring, predictive policing by profiling, emotion recognition in workplaces/schools, untargeted facial recognition scraping, and real-time remote biometric identification in public spaces. Document the screening results. If any system is flagged, cease operations immediately and engage legal counsel.

Step 4: AI Literacy Program

The EU AI Act requires all providers and deployers to ensure sufficient AI literacy among staff. This obligation has been active since February 2025.

Implement training covering: what AI systems your organization uses, the risks those systems present, the EU AI Act’s requirements relevant to each employee’s role, and how to exercise human oversight effectively. Track completion rates as a compliance KRI.

Step 5: Risk Management System (High-Risk Systems)

High-risk AI systems must operate within a documented risk management system that runs throughout the entire AI lifecycle.

This system must: identify and analyze known and foreseeable risks; estimate and evaluate risks that may emerge during intended use and reasonably foreseeable misuse; evaluate risks based on post-market monitoring data; and adopt suitable risk management measures.

This aligns directly with the ISO 31000 risk assessment process you already know: identify, analyze, evaluate, treat, monitor.

Step 6: Data Governance and Bias Management

Training, validation, and testing datasets must meet quality criteria. You must examine data collection processes, assess data gaps and shortcomings, establish data preparation protocols (annotation, labeling, cleaning, enrichment), and identify potential biases.

The Act explicitly requires that training datasets be representative and as free from errors as possible. Build bias testing into your CI/CD pipeline and track fairness metrics using your KRI dashboard.
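One way to wire bias testing into a CI/CD pipeline is a fairness gate that fails the build when a metric breaches a threshold. The sketch below uses demographic parity difference; the metric choice, the 0.10 threshold, and the group labels are illustrative assumptions on our part, not values the Act mandates.

```python
# Minimal CI-style fairness gate: demographic parity difference.
# The 0.10 threshold and the group labels are illustrative assumptions,
# not values mandated by the EU AI Act.

def selection_rate(outcomes):
    """Share of positive outcomes (1 = selected/approved)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(outcomes_by_group):
    """Largest gap in selection rates across protected groups."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

def bias_gate(outcomes_by_group, threshold=0.10):
    """Return (passes, gap) so a pipeline can fail the build on breach."""
    gap = demographic_parity_difference(outcomes_by_group)
    return gap <= threshold, gap

# Hypothetical hiring-tool decisions, grouped by a protected attribute.
outcomes = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],   # 5/8 selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],   # 2/8 selected
}

passed, gap = bias_gate(outcomes)
print(f"gap={gap:.3f} passed={passed}")   # gap=0.375 passed=False
```

In practice you would run this against a held-out evaluation set on every model release, log the result to the KRI dashboard, and block deployment on a red breach.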

Step 7: Technical Documentation and Conformity Assessment

Providers of high-risk AI systems must produce comprehensive technical documentation before placing the system on the EU market.

The documentation must cover: system description and intended purpose; design specifications and development methodology; data governance and training procedures; performance metrics and accuracy levels; risk management measures; human oversight provisions; and cybersecurity specifications. Complete a conformity assessment (self-assessment or third-party, depending on the domain), issue an EU declaration of conformity, and affix the CE marking.

Step 8: Human Oversight Mechanisms

High-risk AI systems must be designed to allow effective human oversight. Individuals assigned oversight must: fully understand system capabilities and limitations; be able to correctly interpret outputs; be able to decide not to use the system or disregard its output; and be able to intervene or stop the system.

Document these mechanisms and train oversight personnel. Track oversight intervention rates as a KRI — zero interventions over extended periods may signal rubber-stamping rather than genuine oversight.
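The rubber-stamping signal described above can be checked mechanically: flag the KRI when a high-risk system shows zero human overrides over a trailing window. This is a sketch under assumed inputs; the 30-day window matches the example threshold used in this article, and the log format is invented for illustration.

```python
from datetime import date, timedelta

# Flag oversight as Amber when a high-risk system records zero human
# interventions over a trailing 30-day window (example threshold).

def oversight_status(decisions, today, window_days=30):
    """decisions: list of (decision_date, overridden: bool) for one system."""
    cutoff = today - timedelta(days=window_days)
    recent = [overridden for d, overridden in decisions if d >= cutoff]
    if not recent:
        return "NO DATA"
    interventions = sum(recent)
    if interventions == 0:
        return "AMBER: review oversight design"
    return f"OK ({interventions / len(recent):.1%} override rate)"

# Hypothetical log: 40 days of decisions, none ever overridden.
log = [(date(2026, 3, 1) - timedelta(days=i), False) for i in range(40)]
print(oversight_status(log, today=date(2026, 3, 1)))
# → AMBER: review oversight design
```

A non-zero override rate is not automatically healthy either; the point of tracking it is to prompt a design review whenever the number looks implausible in either direction.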

Step 9: EU Database Registration and Authorized Representative

Providers (and certain deployers) of high-risk AI systems must register in the EU database before placing the system on the market. US companies with no EU establishment must appoint an authorized representative in the EU.

This representative acts as the point of contact with supervisory authorities and can be subject to enforcement actions. Select a representative with AI regulatory expertise, not just a registered agent. Align this with your existing GDPR representative structure where possible.

Step 10: Post-Market Monitoring and Incident Reporting

Compliance does not end at deployment. Providers must establish and document a post-market monitoring system proportionate to the nature of the AI system and its risks.

Serious incidents must be reported to the relevant market surveillance authority. Deployers must retain system-generated logs for at least six months.

Build continuous monitoring into your enterprise risk management framework with automated alerting when KRI thresholds breach.
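The six-month log retention floor for deployers can be enforced in a purge job. The sketch below is illustrative: approximating six months as 183 days is our simplification, and real policies may set longer retention by contract or sector rule.

```python
from datetime import date, timedelta

# Retention check for deployer-held system logs. The EU AI Act sets a
# minimum of six months, so a purge job must never delete anything
# younger than that. 183 days is our approximation of six months;
# contracts or sector rules may require longer retention.

MIN_RETENTION = timedelta(days=183)

def eligible_for_purge(log_date: date, today: date,
                       policy_retention: timedelta = MIN_RETENTION) -> bool:
    # Never allow a policy to drop below the legal floor.
    retention = max(policy_retention, MIN_RETENTION)
    return today - log_date > retention

today = date(2026, 3, 15)
print(eligible_for_purge(date(2025, 6, 1), today))   # well past six months → True
print(eligible_for_purge(date(2026, 1, 10), today))  # inside the window → False
```

Clamping the policy to the legal floor inside the function means a misconfigured retention setting fails safe (over-retains) rather than deleting evidence you are obliged to keep.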

General-Purpose AI (GPAI) Model Requirements

General-purpose AI models — think large language models and foundation models — carry their own obligation set that has been active since August 2025. The July 2025 GPAI Code of Practice provides the operational framework.

| Requirement | All GPAI Models | GPAI with Systemic Risk (Additional) |
|---|---|---|
| Technical Documentation | Maintain documentation covering model architecture, training procedures, and performance characteristics, available to the EU AI Office | Enhanced documentation including model evaluation results and adversarial testing outcomes |
| Downstream Provider Support | Furnish technical information enabling developers building on the model to comply with their own AI Act obligations | Same, plus detailed risk profile information |
| Copyright Compliance | Implement policies respecting EU copyright law; identify any rights reservations under the Copyright Directive | Same requirements |
| Systemic Risk Assessment | Not required | Perform model evaluations including adversarial testing to identify and mitigate systemic risks |
| Incident Reporting | Not required | Report serious incidents to the EU AI Office without undue delay |
| Cybersecurity | Not required | Ensure adequate cybersecurity protections proportionate to the model's risk profile |

US companies developing or deploying foundation models should evaluate the GPAI Code of Practice published in July 2025 and determine their compliance pathway.

The Code provides a voluntary compliance mechanism, but non-compliance with the underlying GPAI requirements carries fines of up to €15 million or 3% of global turnover.

EU AI Act vs. US AI Regulation: A Comparative Framework

US companies need to understand how the EU AI Act sits alongside existing US frameworks. The good news: significant overlap exists, meaning compliance investments serve multiple jurisdictions.

| Dimension | EU AI Act | NIST AI RMF (US) | US State Laws (NYC LL144, CO SB24-205, etc.) |
|---|---|---|---|
| Legal Status | Binding regulation with enforcement penalties | Voluntary framework (but increasingly referenced by regulators) | Binding laws with varying scope and enforcement mechanisms |
| Scope | All AI systems affecting EU individuals; extraterritorial | Recommended practice; no jurisdictional mandate | Varies: employment AI (NYC), high-risk consumer decisions (CO), sector-specific |
| Risk Classification | Four tiers: prohibited, high-risk, limited, minimal | Context-dependent; no fixed tiers | Varies by law; generally focus on high-impact decisions |
| Bias/Fairness | Mandatory data governance and bias testing on high-risk systems | Recommended: fairness testing, stakeholder engagement, bias monitoring | NYC: mandatory annual bias audit; CO: disclosure and impact assessment |
| Documentation | Mandatory technical documentation, conformity assessment, CE marking | Recommended: model cards, datasheets, risk documentation | Varies: NYC requires public audit summary; others vary |
| Penalties | Up to €35M / 7% global turnover | None (voluntary) | Varies: NYC up to $1,500/violation/day; CO: attorney general enforcement |
| Human Oversight | Mandatory design requirements enabling effective human oversight | Recommended principle | Generally required disclosure of AI use in decision-making |

The strategic play: build your compliance program anchored to the EU AI Act (the strictest standard) and NIST AI RMF (the most structured voluntary framework).

This combination will satisfy or substantially address most current and emerging US state requirements.

Layer in sector-specific US obligations (EEOC, CFPB, FTC guidance) as needed. This is the same approach smart organizations used with GDPR — comply with the strictest regime, and everywhere else becomes easier.

Key Risk Indicators to Monitor EU AI Act Compliance

Compliance without monitoring is a snapshot, not a program. The following KRIs should be integrated into your existing compliance KRI dashboard to provide continuous visibility into your EU AI Act compliance posture.

| KRI | Measurement | Threshold (Example) | Escalation Path |
|---|---|---|---|
| AI System Classification Coverage | % of AI systems in inventory classified by risk tier | < 100% = Amber; < 80% = Red | Amber: Expedite reviews; Red: CRO briefing |
| Conformity Assessment Completion Rate | % of high-risk systems with completed conformity assessment | < 100% by Aug 2026 = Red | Red: Immediate project escalation to GC + CTO |
| AI Literacy Training Completion | % of relevant employees who completed AI literacy training | < 90% = Amber; < 75% = Red | Amber: HR escalation; Red: Board reporting |
| Bias Testing Coverage | % of high-risk systems with current bias test results | < 100% = Amber; < 80% = Red | Amber: Data science sprint; Red: Deployment pause |
| Post-Market Monitoring Alert Rate | Number of monitoring alerts per system per quarter | > 10 = Amber; > 25 = Red | Amber: Model review; Red: Revalidation |
| Serious Incident Reporting Timeliness | % of serious incidents reported within required timeframe | < 100% = Red | Red: Immediate process review and GC notification |
| Third-Party AI Vendor Compliance Rate | % of third-party AI vendors with documented EU AI Act compliance | < 100% = Amber; < 70% = Red | Amber: Vendor engagement; Red: Contract review |
| Human Oversight Intervention Rate | Frequency of human overrides on high-risk system decisions | 0% over 30 days = Amber (potential rubber-stamping) | Amber: Oversight design review |
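A dashboard can map each KRI reading to green/amber/red with a single threshold rule. The sketch below uses the illustrative thresholds from the table above; the direction flag handles KRIs where a higher reading is worse (alert counts) rather than better (coverage percentages).

```python
# Threshold evaluator for the example KRIs above. Threshold values are
# the illustrative ones from the table; the direction flag distinguishes
# "higher is better" KRIs (coverage rates) from "lower is better" ones
# (alert counts).

def rate_kri(value, amber, red, higher_is_better=True):
    """Map a KRI reading to GREEN / AMBER / RED."""
    if higher_is_better:
        if value < red:
            return "RED"
        return "AMBER" if value < amber else "GREEN"
    # e.g. monitoring alert counts, where more alerts is worse
    if value > red:
        return "RED"
    return "AMBER" if value > amber else "GREEN"

# AI literacy training completion: < 90% = Amber, < 75% = Red
print(rate_kri(0.82, amber=0.90, red=0.75))                    # AMBER
# Post-market monitoring alerts per quarter: > 10 = Amber, > 25 = Red
print(rate_kri(30, amber=10, red=25, higher_is_better=False))  # RED
# Classification coverage at exactly 100%
print(rate_kri(1.00, amber=1.00, red=0.80))                    # GREEN
```

Feeding each KRI's current reading through one shared evaluator keeps the amber/red logic consistent across the dashboard and makes threshold changes a one-line edit.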

These KRIs complement your broader regulatory compliance KRI framework. Report them monthly to the AI compliance owner and quarterly to the board alongside your financial risk indicators.

90-Day EU AI Act Compliance Roadmap

With the August 2026 deadline approximately five months away, the following roadmap gives you a structured sprint path from awareness to operational readiness.

| Phase | Timeline | Key Activities | Deliverables |
|---|---|---|---|
| Phase 1: Discovery and Classification | Days 1–30 | Complete AI system inventory across all business units. Classify each system by risk tier. Determine provider/deployer role per system. Screen all systems against prohibited practices list. Identify EU touchpoints (users, data subjects, output recipients). Appoint AI compliance owner. Engage authorized representative in EU. | AI System Inventory Register; Risk Classification Matrix; Role Determination Log; Prohibited Practices Screening Report; Authorized Representative Agreement |
| Phase 2: Gap Analysis and Documentation | Days 31–60 | Run gap analysis on high-risk systems against the Act's Chapter III, Section 2 requirements. Draft risk management system documentation. Begin technical documentation packages. Conduct initial bias testing. Deploy AI literacy training. Review and update vendor contracts with EU AI Act clauses. Build KRI dashboard framework. | Gap Analysis Report (per high-risk system); Risk Management System (Draft); Technical Documentation (Draft); Bias Testing Baseline; Training Completion Records; Contract Amendment Templates; KRI Dashboard Wireframe |
| Phase 3: Operationalize and Validate | Days 61–90 | Complete conformity assessments on highest-risk systems. Finalize technical documentation. Register in EU database. Affix CE marking. Deploy post-market monitoring. Establish incident reporting workflow. Launch KRI dashboard with automated feeds. Conduct tabletop exercise testing incident response. Schedule independent audit. | Conformity Assessment Reports; EU Declaration of Conformity; CE Marked Systems; EU Database Registrations; Post-Market Monitoring SOP; Incident Response Playbook; KRI Dashboard (Live); Tabletop Exercise Report; Audit Engagement Letter |

This roadmap follows the same project risk assessment discipline you would apply to any major compliance initiative: clear scope, phased delivery, named owners, and measurable milestones. Track the 90-day plan as a formal project with weekly status reviews.

Common Pitfalls US Companies Make with EU AI Act Compliance

These are the failure patterns that surface most frequently among US organizations approaching the EU AI Act, drawn from experience advising on AI governance programs.

  • Assuming US Headquarters Means EU Rules Do Not Apply: This is the number-one mistake. The Act’s jurisdictional trigger is system output reaching the EU, not corporate domicile. A SaaS platform with even one EU user generating AI-assisted outputs is in scope. Map your EU exposure before assuming exemption.
  • Treating GDPR Compliance as Sufficient: GDPR covers data protection. The EU AI Act covers system safety, transparency, bias, and accountability. Different regulatory objectives, different obligations, different documentation requirements. Your Data Protection Officer cannot own this alone. You need AI-specific governance expertise.
  • Waiting Until Guidance Finalizes: The GPAI Code of Practice published in July 2025 filled many remaining gaps. Core obligations have been clear since the Act entered into force. Companies waiting until every guideline is finalized will find themselves scrambling in mid-2026 with no time margin. Start now, refine as guidance evolves.
  • Confusing ‘Human in the Loop’ with Human Oversight: The Act requires that humans assigned oversight can genuinely understand, interpret, intervene, and override AI decisions. Having a human technically ‘in the loop’ who rubber-stamps outputs does not satisfy the requirement. Design meaningful oversight mechanisms and track intervention rates.
  • Ignoring Third-Party AI Risk: Many US companies deploy pre-built AI models, APIs, and vendor tools without conducting any due diligence on EU AI Act compliance. Roughly 20% of organizations using third-party AI tools do not assess those tools’ risks at all. Extend your third-party risk management to every AI vendor in your supply chain.
  • Documentation Without Substance: In an enforcement audit, ‘we do this’ without supporting evidence equals ‘we do not do this.’ Every compliance claim must be backed by documentation: risk assessments, test results, monitoring logs, training records, and incident reports. Build the evidence trail from day one.

Looking Ahead: What US Companies Should Prepare to Address

Enforcement Will Accelerate After August 2026

National competent authorities have been operational since 2025. Post-August 2026, expect enforcement activity to ramp rapidly, similar to the post-GDPR surge.

Early enforcement actions will likely target high-visibility sectors (employment AI, financial services, healthcare) where consumer impact is most tangible and political pressure is highest.

The Brussels Effect Will Spread

Just as GDPR became the de facto global privacy standard, the EU AI Act is already shaping regulatory thinking across the G7, OECD, and individual US states.

Over 65 nations have published national AI strategies, and most are adapting the EU’s risk-based approach rather than creating entirely new frameworks. Building to the EU standard now positions your organization advantageously as other jurisdictions align.

AI Agents and Autonomous Systems Will Require New Controls

The current Act was written primarily with predictive AI and generative AI in mind. Agentic AI systems — where models plan, execute multi-step tasks, and interact with other systems autonomously — introduce governance gaps the regulation has not fully addressed.

Expect supplementary guidance from the EU AI Office. Build flexibility into your compliance framework to accommodate evolving requirements.

The organizations that thrive will be those that treat EU AI Act compliance not as a one-time project but as an ongoing operational capability integrated into their enterprise risk management framework and continuously refined.

Take Action Today

Start with Step 1: inventory every AI system in your organization and classify by risk tier. The EU AI Act compliance checklist above gives you the complete action sequence.

The 90-day roadmap provides the timeline. The KRI framework provides the monitoring mechanism. August 2026 is five months away.

The companies that start now will have compliance locked in before enforcement actions begin. The companies that wait will be playing catch-up at premium cost.

Explore more practitioner frameworks across enterprise risk management, AI governance, and business continuity at riskpublishing.com. Subscribe to receive new articles, templates, and tools delivered to your inbox.
