The EU AI Act is now a live compliance program, not a future policy debate. The law entered into force in 2024 and applies in phases through 2027. If your company provides, deploys, imports, distributes, or uses AI systems that affect people in the EU, you need an inventory, a risk classification process, and documentation that matches the system’s risk level.
The practical point: do not start with a policy memo. Start with a list of AI systems in use, classify each one, identify the owner, and close the documentation gaps before enforcement catches up.
This guide is business-oriented and current as of April 27, 2026. It is not legal advice; use counsel for final classification decisions, especially for high-risk or cross-border systems.
Key Deadlines
Based on the European Commission’s AI Act implementation timeline:
| Date | What applies |
|---|---|
| February 2, 2025 | General provisions, AI literacy duties, and prohibited AI practices apply |
| August 2, 2025 | General-purpose AI model rules and EU/national governance provisions apply |
| August 2, 2026 | Most AI Act rules apply, including Annex III high-risk systems and Article 50 transparency rules |
| August 2, 2027 | High-risk rules apply to AI embedded in regulated products |
The most important operational deadline for many businesses is August 2, 2026, because that is when most of the Act's remaining obligations, including the high-risk regime, become enforceable; the prohibitions and general-purpose AI rules already apply.
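For teams that track the phase-in programmatically, here is a minimal sketch of the same timeline as a date lookup. The dates come from the table above; the labels are shorthand, and a real tracker would cite the specific articles.

```python
from datetime import date

# Application dates from the Commission's implementation timeline (table above).
MILESTONES = [
    (date(2025, 2, 2), "general provisions, AI literacy, prohibited practices"),
    (date(2025, 8, 2), "general-purpose AI model rules and governance"),
    (date(2026, 8, 2), "most obligations, including Annex III high-risk and Article 50"),
    (date(2027, 8, 2), "high-risk rules for AI embedded in regulated products"),
]

def rules_in_force(today: date) -> list[str]:
    """Return the obligation sets whose application date has passed."""
    return [label for start, label in MILESTONES if start <= today]

print(rules_in_force(date(2026, 4, 27)))
# -> the February 2025 and August 2025 sets; the August 2026 wave is not yet live
```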
Risk Classes
Prohibited AI
These are uses the Act treats as unacceptable risk. Examples include manipulative or exploitative AI practices, certain social scoring, and some biometric identification uses. If a system falls into a prohibited category, the compliance answer is not “document it better.” The system must be stopped, redesigned, or withdrawn from the EU market.
High-Risk AI
High-risk AI systems carry the heaviest obligations. Common business-relevant areas include employment, worker management, education, access to essential private or public services, creditworthiness, insurance risk assessment in certain contexts, law enforcement, migration, critical infrastructure, and AI used as safety components in regulated products.
High-risk systems require risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy, robustness, cybersecurity, conformity assessment, post-market monitoring, and serious-incident reporting.
Limited-Risk AI
Limited-risk systems mainly have transparency duties. Chatbots should disclose that users are interacting with AI. Synthetic content may need labeling or other disclosure depending on the use case. Emotion recognition and biometric categorization also trigger specific transparency rules.
Minimal-Risk AI
Most routine AI uses, such as spam filtering or low-impact productivity support, carry no specific obligations under the Act. Still, companies should keep internal rules, because a minimal-risk system can become higher risk when moved into HR, finance, health, education, or public-service workflows.
Classification Workflow
Use this process for every AI system:
- Define the system and intended purpose.
- Identify who provides it, who deploys it, and who is affected.
- Check whether it falls into a prohibited category.
- Check whether it is a high-risk system under Annex III or a safety component of a regulated product.
- Check whether Article 50 transparency duties apply.
- Document the classification rationale.
- Assign an owner and review date.
If a vendor says “we are compliant,” still classify your use. The same tool can be low-risk in one workflow and high-risk in another.
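To make the ordering concrete (prohibited first, then high-risk, then transparency, else minimal), here is a minimal Python sketch of the workflow. The keyword sets are invented placeholders for the legal tests in Articles 5 and 6 and Annexes I and III, not the actual criteria, and the output should always be reviewed by counsel.

```python
from dataclasses import dataclass

# Illustrative placeholders only: these keyword sets stand in for the legal
# tests in Articles 5 and 6 and Annexes I/III. They are not the real criteria.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_USES = {"hiring", "credit_scoring", "exam_grading", "benefits_eligibility"}
TRANSPARENCY_USES = {"chatbot", "synthetic_media", "emotion_recognition"}

@dataclass
class AISystem:
    name: str
    intended_purpose: str  # e.g. "hiring"
    provider: str
    deployer: str

def classify(system: AISystem) -> str:
    """Apply the workflow above in order: prohibited, then high-risk,
    then transparency duties, else minimal risk."""
    if system.intended_purpose in PROHIBITED_USES:
        return "prohibited"  # stop, redesign, or withdraw from the EU market
    if system.intended_purpose in HIGH_RISK_USES:
        return "high"        # the full high-risk obligations apply
    if system.intended_purpose in TRANSPARENCY_USES:
        return "limited"     # Article 50 disclosure duties apply
    return "minimal"

print(classify(AISystem("ResumeRanker", "hiring", "VendorCo", "Acme HR")))  # -> high
```

Note that the function classifies the use, not the tool, which is exactly why the vendor's own claim is not enough.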
Compliance Checklist
For every AI system:
- Maintain an AI inventory with owner, vendor, purpose, users, affected people, data sources, and risk class.
- Confirm whether personal data is processed and whether GDPR obligations apply.
- Document user-facing disclosures where required.
- Train staff on permitted and prohibited uses.
- Track vendor model changes and product updates.
For high-risk AI systems:
- Build a documented risk management process.
- Document training, validation, and test data where you control the system.
- Evaluate bias, accuracy, robustness, and cybersecurity.
- Implement logging and audit trails (a minimal sketch follows this list).
- Provide instructions for use and known limitations.
- Define human oversight roles and escalation paths.
- Prepare technical documentation and conformity assessment evidence.
- Monitor performance after deployment.
- Create serious-incident reporting procedures.
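For the logging and audit-trail item above, here is a minimal sketch of an append-only decision log, assuming a JSON-lines file and invented field names. A production system would add tamper evidence, access control, and retention rules.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(log_path: str, model_version: str,
                 input_payload: dict, output: str,
                 reviewer: str | None) -> None:
    """Append one audit record per AI-assisted decision (JSON lines).

    Hashing the input keeps the trail reviewable without copying personal
    data into the log; store raw inputs separately under your retention policy.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        "human_reviewer": reviewer,  # None flags decisions that skipped oversight
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```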
For general-purpose AI model use:
- Check whether your provider publishes required model information.
- Review terms for data use, retention, and security.
- Avoid sending sensitive or regulated data unless your contracts and security controls support it.
- Keep prompts, outputs, and human review logs for high-impact workflows.
Penalties
The AI Act uses tiered administrative fines. The highest tier is up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher, for prohibited-practice violations. Other major operator obligations can reach up to EUR 15 million or 3% of worldwide annual turnover. Supplying incorrect, incomplete, or misleading information to authorities can reach up to EUR 7.5 million or 1% of worldwide annual turnover.
The exact fine depends on the facts, the organization, and enforcement discretion, but the message is clear: undocumented high-risk AI is not a casual operational issue.
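The “whichever is higher” arithmetic is easy to get wrong, so here is a small worked sketch of the three tiers described above:

```python
def fine_ceilings(worldwide_turnover_eur: float) -> dict[str, float]:
    """Maximum fine per tier: the fixed floor or the turnover percentage,
    whichever is higher."""
    tiers = {
        "prohibited_practices": (35_000_000, 0.07),
        "other_operator_obligations": (15_000_000, 0.03),
        "incorrect_information_to_authorities": (7_500_000, 0.01),
    }
    return {name: max(floor, rate * worldwide_turnover_eur)
            for name, (floor, rate) in tiers.items()}

# EUR 2 billion turnover: 7% = EUR 140M, so the percentage, not the
# EUR 35M floor, sets the prohibited-practices ceiling.
print(fine_ceilings(2_000_000_000))
```

Note the crossover: above EUR 500 million in worldwide turnover, the percentage rather than the fixed floor sets the ceiling for the top two tiers.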
AI Inventory Template
Track these fields (a machine-readable sketch of the same record follows the table):
| Field | What to record |
|---|---|
| System | Product name, vendor, internal owner, version |
| Intended purpose | What decision, recommendation, or output it supports |
| Users | Employees, customers, contractors, public users |
| Affected people | Who experiences the impact |
| Data | Inputs, personal data, sensitive data, retention |
| Risk class | Prohibited, high, limited, minimal |
| Rationale | Why that class was assigned |
| Controls | Human review, access control, logging, monitoring |
| Documentation | Policies, testing, vendor docs, impact assessment |
| Review date | Next scheduled reassessment |
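If you keep the inventory in code or a database rather than a spreadsheet, one record might look like the following sketch. The field names mirror the table; everything else is an assumption about how you might structure it.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class RiskClass(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class InventoryEntry:
    """One row of the inventory table above; names mirror the table fields."""
    system_name: str
    vendor: str
    owner: str
    version: str
    intended_purpose: str       # decision, recommendation, or output supported
    users: list[str]            # employees, customers, contractors, public users
    affected_people: list[str]  # who experiences the impact
    data: str                   # inputs, personal/sensitive data, retention
    risk_class: RiskClass
    rationale: str              # why that class was assigned
    controls: list[str]         # human review, access control, logging, monitoring
    documentation: list[str]    # policies, testing, vendor docs, impact assessment
    review_date: date           # next scheduled reassessment
```

A typed record like this makes the review-date and rationale fields mandatory, which is exactly where spreadsheet inventories tend to go stale.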
Practical 30-Day Plan
Week 1: Identify all AI tools in use, including shadow AI used by teams outside IT.
Week 2: Classify each system and flag obvious high-risk workflows such as hiring, performance management, credit, insurance, education, health, or legal decision support.
Week 3: Collect vendor documentation, data-processing terms, model-change policies, and security materials.
Week 4: Close urgent gaps: disclosures, staff rules, human oversight, logging, and escalation procedures.
After 30 days, prioritize high-risk systems for deeper conformity assessment and post-market monitoring.
FAQ
Does the AI Act apply outside the EU?
Yes, it can. If an AI system is placed on the EU market, put into service in the EU, or produces outputs used in the EU, non-EU providers and deployers may still have obligations.
Is every chatbot high-risk?
No. Many chatbots are limited-risk systems with transparency obligations. A chatbot used for routine customer support is different from one that determines access to benefits, credit, education, or employment.
Do small businesses have to comply?
Yes. The obligations depend on the system and role, not only company size. Enforcement may consider proportionality, but small businesses still need classification, documentation, and controls.
What should I do first?
Build the inventory. You cannot classify, document, or monitor systems you have not found.
Verified Sources
- European Commission AI Act Service Desk, “Timeline for the Implementation of the EU AI Act,” accessed April 27, 2026: https://ai-act-service-desk.ec.europa.eu/en/ai-act/eu-ai-act-implementation-timeline
- European Commission AI Act Service Desk, “FAQ,” accessed April 27, 2026: https://ai-act-service-desk.ec.europa.eu/en/faq
- NIST, “AI Risk Management Framework,” accessed April 27, 2026: https://www.nist.gov/itl/ai-risk-management-framework
- ISO, “ISO/IEC 42001 Artificial intelligence management system,” accessed April 27, 2026: https://www.iso.org/standard/42001