High-Level Summary of the EU AI Act
The AI Act classifies AI systems based on risk levels. It regulates high-risk AI, requires transparency for limited-risk AI, and prohibits unacceptable-risk AI. Most obligations fall on providers of high-risk AI systems.
Overview
The EU AI Act aims to regulate artificial intelligence systems in the EU. Key aspects include:
Prohibition of certain AI practices deemed unacceptable
Requirements for high-risk AI systems
Transparency rules for certain AI systems
Rules for general-purpose AI models
Governance framework and enforcement measures
AI Risk Classification
1. Unacceptable Risk: Prohibited AI systems (e.g. social scoring, manipulative AI)
2. High Risk: Regulated AI systems (e.g. critical infrastructure, education)
3. Limited Risk: Transparency obligations (e.g. chatbots, deepfakes)
4. Minimal Risk: Unregulated AI (e.g. spam filters, video games)
Prohibited AI Systems
1. Manipulative AI: Systems using subliminal techniques to distort behavior
2. Exploitative AI: Systems exploiting vulnerabilities of specific groups
3. Social Scoring: Evaluating individuals based on social behavior or traits
4. Emotion Recognition: In workplaces or educational institutions (with exceptions)
High-Risk AI Systems
Classification
AI systems used as safety components or in specific use cases listed in Annex III of the AI Act.
Requirements
Providers must establish risk management systems, ensure data governance, and provide technical documentation.
Use Cases
Include biometrics, critical infrastructure, education, employment, and law enforcement.
General Purpose AI (GPAI)
1. Technical Documentation: Including training and testing processes
2. Information for Downstream Providers: To enable compliance
3. Copyright Compliance: Respect for the Copyright Directive
4. Training Data Summary: Publish details about content used
GPAI providers must meet these requirements. Additional obligations apply to models posing systemic risk.
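The four obligations above can be tracked provider-side as a simple record. This is a minimal illustrative sketch, not an official compliance artifact; the field names are assumptions chosen to mirror the list above.

```python
# Hedged sketch: a provider-side checklist of the four GPAI obligations
# listed above. Field names are illustrative, not drawn from the Act.
from dataclasses import dataclass, asdict

@dataclass
class GpaiComplianceChecklist:
    technical_documentation: bool = False  # training and testing processes
    downstream_information: bool = False   # info enabling downstream compliance
    copyright_policy: bool = False         # Copyright Directive respected
    training_data_summary: bool = False    # published summary of content used

    def complete(self) -> bool:
        # All four obligations must be satisfied.
        return all(asdict(self).values())

checklist = GpaiComplianceChecklist(technical_documentation=True)
print(checklist.complete())  # False: three obligations still open
```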
Governance and Implementation
1. AI Office Establishment: To monitor GPAI model compliance
2. Complaint Mechanism: For downstream providers against upstream infringements
3. GPAI Model Evaluations: To assess compliance and investigate systemic risks
Implementation Timeline
1. 2025: Initial Application. Prohibitions on certain AI systems begin; requirements for AI literacy start.
2. 2026: AI Act Full Application. The remainder of the AI Act applies; AI regulatory sandboxes are established.
3. 2027: GPAI Compliance. Providers of GPAI models must comply with obligations.
4. 2028: Evaluation. The Commission evaluates the AI Office and voluntary codes of conduct.
5. 2029: Expiration. The Commission's power to adopt delegated acts expires unless extended.
6. 2030: Full Compliance. Providers and deployers of high-risk AI systems must be compliant.
7. 2031: Enforcement Assessment. The Commission assesses the enforcement of the AI Act.
Implementation Timeline: 2025
1. February 2, 2025: Enforcement of AI system bans and AI literacy mandates, per Article 113(a) and Recital 179.
2. May 2, 2025: Codes of practice should be ready to guide provider compliance.
3. August 2, 2025: GPAI model rules apply, focusing on governance, penalties, and confidentiality.
4. August 2, 2025: Member states must report on financial and human resources for national authorities.
EU AI Act Compliance Guide for AI-Driven CX & Chatbots
Step 1: Check your Risk Level
Identify your AI system's risk level. The EU AI Act has 4 risk categories.
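The risk check in Step 1 can be sketched as a lookup against the four tiers described earlier. The category keywords below are illustrative examples taken from the summary above, not exhaustive legal definitions; this is a triage aid, not legal advice.

```python
# Hedged sketch: map an AI use case to the AI Act's four risk tiers.
# Keyword lists mirror the examples in this summary and are not exhaustive.

RISK_TIERS = {
    "unacceptable": {"social scoring", "manipulative ai", "exploitative ai"},
    "high": {"biometrics", "critical infrastructure", "education",
             "employment", "law enforcement"},
    "limited": {"chatbot", "deepfake"},
}

def classify_use_case(use_case: str) -> str:
    """Return the risk tier for a use case, defaulting to minimal risk."""
    normalized = use_case.strip().lower()
    for tier, examples in RISK_TIERS.items():
        if normalized in examples:
            return tier
    return "minimal"

print(classify_use_case("chatbot"))      # limited
print(classify_use_case("spam filter"))  # minimal
```

A real assessment would also check Annex III use cases and whether the system is a safety component, as noted in the high-risk section above.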
Step 2: Transparency & Human Oversight
Key compliance actions for CX AI. Ensure transparency in AI interactions.
Clearly disclose when customers interact with an AI. Example: “I am an AI assistant. How can I help you today?”
AI-generated content (e.g., responses, recommendations) must be labeled as AI-generated
Avoid misleading customers into thinking they are talking to a human
Allow human escalation for sensitive issues (e.g., complaints, contract cancellations).
If AI makes decisions with significant impact, provide a human review option
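The disclosure and escalation points above can be sketched as a minimal routing rule: disclose the AI's identity up front, and hand sensitive topics to a human. The keyword list is an illustrative assumption; a production system would use proper intent classification.

```python
# Hedged sketch of Step 2: disclose AI identity and escalate sensitive
# issues (complaints, cancellations) to a human agent.

SENSITIVE_TOPICS = {"complaint", "cancellation", "legal"}  # illustrative

def greet() -> str:
    # Disclose AI identity before the conversation starts.
    return "I am an AI assistant. How can I help you today?"

def route(message: str) -> str:
    """Escalate sensitive issues to a human; otherwise let the bot answer."""
    text = message.lower()
    if any(topic in text for topic in SENSITIVE_TOPICS):
        return "human_agent"
    return "ai_assistant"

print(greet())
print(route("I want to file a complaint"))  # human_agent
```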
Step 3: Ethical AI & Fair Treatment
AI should not manipulate customers (e.g., deceptive upselling, false urgency)
If AI personalizes offers or pricing, ensure it is fair and explainable
Avoid using emotion recognition or biometric profiling, which could fall under high-risk or even prohibited AI categories
Step 4: Data Protection & GDPR
AI chatbots must follow GDPR
Secure customer data & provide clear opt-out options
Store only necessary customer interactions
If AI processes personal data, ensure consent is collected
Give customers the right to access & delete AI-stored data
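The consent, access, and deletion points above can be sketched as a tiny in-memory store. This is a simplified illustration under the assumption of a single-process bot; real systems need durable storage, authentication, and retention policies.

```python
# Hedged sketch of Step 4: consent-gated storage plus GDPR-style
# access and erasure for AI-stored customer interactions.
from dataclasses import dataclass, field

@dataclass
class InteractionStore:
    records: dict = field(default_factory=dict)
    consent: set = field(default_factory=set)

    def store(self, customer_id: str, message: str) -> bool:
        # Only store interactions for customers who have given consent.
        if customer_id not in self.consent:
            return False
        self.records.setdefault(customer_id, []).append(message)
        return True

    def access(self, customer_id: str) -> list:
        # Right of access: return everything held about the customer.
        return self.records.get(customer_id, [])

    def delete(self, customer_id: str) -> None:
        # Right to erasure: remove all stored interactions.
        self.records.pop(customer_id, None)

store = InteractionStore()
store.consent.add("c1")
store.store("c1", "Where is my order?")
print(store.access("c1"))  # ['Where is my order?']
store.delete("c1")
print(store.access("c1"))  # []
```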
Step 5: Monitor & Audit AI Performance
Keep records of chatbot interactions & decisions
Document how AI responses are generated & monitored
Regularly audit for bias, errors, and fairness
Train customer service teams → Ensure they understand AI limitations
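The record-keeping points above can be sketched as an append-only audit log: each response is stored with a timestamp and model version so it can later be reviewed for bias, errors, and fairness. The field names are illustrative assumptions.

```python
# Hedged sketch of Step 5: auditable records of chatbot decisions.
import json
from datetime import datetime, timezone

audit_log: list = []

def log_decision(user_message: str, response: str, model_version: str) -> None:
    """Append an auditable record of how a response was generated."""
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_message": user_message,
        "response": response,
        "model_version": model_version,
    })

log_decision("Do you ship to Spain?", "Yes, we ship EU-wide.", "cx-bot-1.2")
print(json.dumps(audit_log[-1], indent=2))
```

Periodic bias and fairness audits would then run over `audit_log` (or its persisted equivalent) rather than over live traffic.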
Penalties
Most Serious Violations: €35 million or 7% of global annual turnover, whichever is higher. Maximum fine for prohibited AI practices like social scoring and manipulation.
High-Risk AI Non-compliance: €15 million or 3% of global annual turnover. For failing to implement required risk management and oversight.
Misleading Information: €7.5 million or 1.5% of global annual turnover. For providing incorrect or incomplete information to regulators.
Note: Fines are capped at lower amounts for SMEs and startups to avoid disproportionate impact.
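The fine ceilings above follow a "fixed amount or percentage of turnover" pattern; for the standard case the applicable maximum is the higher of the two. A minimal arithmetic sketch (ignoring the lower SME caps noted above):

```python
# Hedged sketch: maximum fine per violation tier is the higher of a fixed
# amount and a percentage of global annual turnover (standard case only;
# SMEs and startups are capped at lower amounts).

FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),      # €35M or 7%
    "high_risk_noncompliance": (15_000_000, 0.03),  # €15M or 3%
    "misleading_information": (7_500_000, 0.015),   # €7.5M or 1.5%
}

def max_fine(violation: str, global_turnover_eur: float) -> float:
    """Return the maximum possible fine for a violation tier."""
    fixed, pct = FINE_TIERS[violation]
    return max(fixed, pct * global_turnover_eur)

# A company with €1bn turnover: 7% (€70M) exceeds the €35M floor.
print(max_fine("prohibited_practice", 1_000_000_000))  # 70000000.0
```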
Enforcement
National authorities and the AI Office (under the European Commission) will oversee compliance
The fines aim to ensure that AI systems, especially high-risk AI, are safe, transparent, and accountable
Key Takeaways
Risk-Based Approach
AI systems classified and regulated based on risk levels
Provider Obligations
Most responsibilities fall on high-risk AI system providers
GPAI Regulation
Specific requirements for general purpose AI models
Customer Experience
Check your AI chatbots and other AI systems for compliance
AI literacy training is mandatory for all employees, including those using ChatGPT solely for text generation or research.