AI Act: What Businesses Need to Know in 2026
Artificial intelligence is no longer beyond the reach of regulation. On August 1, 2024, the European regulation on artificial intelligence — the AI Act (Regulation (EU) 2024/1689) — entered into force. It is the world’s first legal framework specifically dedicated to AI.
For businesses, the question is no longer whether AI will be regulated, but how to comply before the deadlines. And the next one is imminent: on August 2, 2026, the bulk of obligations will become fully applicable.
According to a study by the Center for Data Innovation published in late 2025, fewer than 30% of European SMEs have begun preparing. This is a problem.
What Is the AI Act?
The AI Act is a European regulation — not a directive. It applies directly in all member states, without national transposition. Its objective: to regulate the development and use of AI in Europe through a risk-based approach.
Unlike the GDPR, which governs personal data, the AI Act governs AI systems themselves — their design, market placement, and use.
The two texts are complementary: an AI system that processes personal data must comply with both the GDPR AND the AI Act.
Why this regulation now? The explosion of generative models (ChatGPT, Mistral, Claude, Gemini) has accelerated awareness of the risks: algorithmic bias in recruitment, disinformation through deepfakes, mass surveillance via facial recognition, opaque automated decisions affecting fundamental rights. Europe has chosen to regulate before these risks become uncontrollable.
The Implementation Timeline
The AI Act applies progressively between February 2025 and August 2027:
February 2, 2025 — Prohibited Practices
Since this date, AI systems presenting an unacceptable risk are strictly prohibited in the European Union. This includes:
- Subliminal or deceptive manipulation
- Exploitation of vulnerabilities related to age, disability, or social situation
- Social scoring by public authorities
- Biometric categorization based on sensitive data (race, religion, sexual orientation)
- Untargeted scraping of facial images for facial recognition
Concretely for businesses: if you use an AI tool that analyzes the emotions of your candidates during video interviews, or that classifies your employees based on a reliability score, you are potentially in violation since February 2025. Fines for prohibited practices are the heaviest: up to 35 million euros or 7% of global turnover.
August 2, 2025 — Obligations for General-Purpose AI Models (GPAI)
Providers of general-purpose AI models (Mistral, GPT, Claude, Llama, etc.) are subject to transparency and technical documentation obligations. Models presenting systemic risk face enhanced obligations.
What this means for users of these models: if you integrate a GPAI model into your products or services, you benefit from the technical documentation provided by the model provider. However, you remain responsible for the use you make of it and for the compliance of your final application.
August 2, 2026 — Full Application of the Text
This is the key date. From August 2, 2026, all obligations apply, with one exception: the Article 6 classification rules for AI systems embedded in products already covered by EU sectoral legislation, which follow in 2027. All providers and deployers of high-risk AI systems listed in Annex III must be in compliance.
August 2, 2027 — Complete Application
The Article 6 classification rules become applicable to high-risk systems already governed by existing European sectoral legislation (medical devices, toys, machinery, etc.).
Risk Classification: 4 Levels
The AI Act is based on a hierarchical risk approach. Each AI system is classified into one of four categories, which determine the applicable obligations.
Unacceptable Risk — Prohibited
Systems strictly prohibited since February 2025 (see above). No derogation possible.
High Risk — Heavy Obligations
High-risk AI systems are those with a significant impact on fundamental rights. Annex III of the regulation provides the list:
- Biometrics: identification, categorization, emotion detection
- Critical infrastructure: road traffic management, water, gas, electricity supply
- Education and training: evaluation, admission, orientation
- Employment: recruitment, candidate assessment, promotion decisions
- Essential services: eligibility for social benefits, credit scoring, insurance risk assessment
- Law enforcement: risk assessment, polygraphs, profiling
- Migration and border control: assessment of visa or asylum applications
- Justice: legal research, interpretation of facts and law
High-risk system obligations:
- Quality management system
- Detailed technical documentation
- Automatic logging
- Transparency and user information
- Effective human oversight
- Accuracy, robustness, and cybersecurity
- Registration in the European database
An essential point for SMEs: even if you are not the developer of the AI system, simply deploying it in a high-risk use case subjects you to deployer obligations. Using an automated CV screening tool, for example, makes you a deployer of a high-risk AI system in the employment domain.
Limited Risk — Transparency Obligations
Systems presenting limited risk are primarily subject to information obligations:
- Chatbots must inform the user that they are interacting with an AI
- AI-generated content (text, image, audio, video) must be flagged as such
- Emotion detection or biometric categorization systems must inform the persons concerned
Deepfakes are particularly targeted. Any synthetic content (image, audio, video) generated or modified by AI must be marked as such in a manner detectable by machines and understandable by humans. This obligation applies to model providers as well as users who distribute such content.
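The regulation requires the marking to be machine-detectable and understandable by humans, but it does not impose a single technical format — that is left to implementing specifications and industry practice. As a purely illustrative sketch (the function name, fields, and sidecar-file convention below are assumptions, not the official standard), a distributor could attach a machine-readable manifest alongside each AI-generated file:

```python
import json
from pathlib import Path

def write_ai_content_manifest(media_path: str, generator: str) -> Path:
    """Write a sidecar JSON manifest flagging a media file as AI-generated.

    Illustrative convention only: the AI Act requires machine-detectable
    marking but does not mandate this particular format.
    """
    manifest = {
        "file": Path(media_path).name,
        "ai_generated": True,          # machine-readable flag
        "generator": generator,        # which model produced the content
        "disclosure": "This content was generated or modified by AI.",
    }
    out = Path(media_path).with_suffix(".ai-manifest.json")
    out.write_text(json.dumps(manifest, indent=2), encoding="utf-8")
    return out
```

In practice, emerging provenance standards embed this information directly in the file’s metadata rather than in a sidecar; the point here is simply that both a machine-parseable flag and a human-readable disclosure must travel with the content.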
Minimal Risk — No Specific Obligation
The vast majority of current AI systems (spam filters, product recommendations, spell checkers) fall under minimal risk. No specific obligation applies, but adherence to best practices is encouraged.
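For a first-pass triage of your own tools, the four-tier logic above can be sketched as a simple lookup. The keyword sets below are illustrative shorthand loosely following Article 5 and Annex III — a real classification always requires legal analysis of the actual use case, not of the technology label:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "heavy obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligation"

# Illustrative examples only; not an exhaustive or authoritative mapping.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation", "face_scraping"}
HIGH_RISK_USES = {"recruitment", "credit_scoring", "biometric_id", "exam_grading"}
TRANSPARENCY_USES = {"chatbot", "content_generation", "deepfake"}

def classify(use_case: str) -> RiskTier:
    """Map a tagged use case to its AI Act risk tier (first-pass triage only)."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

Note how the same underlying model can land in different tiers: a chatbot answering product questions is limited risk, while the same chatbot deciding eligibility for a service becomes high risk.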
Concrete Impact for French SMEs
Who Is Affected?
If your business uses a high-risk AI system (automated recruitment, credit scoring, risk assessment), you are a “deployer” under the AI Act and have specific obligations.
If your business develops an AI system, you are a “provider” and the obligations are heavier.
The provider/deployer distinction is fundamental. A provider designs and places an AI system on the market. A deployer uses it in the course of its business. Both have obligations, but they are different. The deployer must ensure that the AI is used in accordance with the provider’s instructions, implement human oversight, inform the persons concerned, and carry out a fundamental rights impact assessment for high-risk systems.
What SMEs Need to Do Now
- Map the AI systems in use: what AI tools are deployed in your organization? For what purposes? This inventory is the essential first step.
- Classify the risks: for each system, determine its risk category under the regulation. Consult Annex III to check whether your use cases are classified as high-risk.
- Assess compliance: do high-risk systems meet the requirements for transparency, traceability, and human oversight?
- Document: even for minimal-risk systems, document the uses and precautions taken. This documentation will be invaluable in the event of an inspection.
- Train your teams: raise awareness among employees about AI Act obligations and responsible AI use best practices.
- Contact your suppliers: request technical documentation and compliance certificates from the providers of the AI systems you use.
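The first steps above — mapping, classifying, and chasing supplier documentation — amount to keeping a compliance register. A minimal sketch of what one inventory entry could record, with hypothetical field names to adapt to your own register:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in an SME's AI system inventory (illustrative fields only)."""
    name: str
    purpose: str
    provider: str
    risk_tier: str                 # e.g. "high", "limited", "minimal"
    processes_personal_data: bool  # triggers GDPR obligations in parallel
    human_oversight: str = ""      # who reviews the system's outputs
    docs_requested: bool = False   # provider documentation requested?

def open_actions(record: AISystemRecord) -> list:
    """List outstanding compliance actions for one inventory entry."""
    actions = []
    if record.risk_tier == "high" and not record.docs_requested:
        actions.append("request technical documentation from provider")
    if record.risk_tier == "high" and not record.human_oversight:
        actions.append("assign a human oversight owner")
    if record.processes_personal_data:
        actions.append("check GDPR records of processing")
    return actions
```

A spreadsheet serves the same purpose; what matters is that each system has a named owner, a risk tier, and a trace of the documentation requested — exactly what an inspector will ask for.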
Exemptions and Support Measures
The regulation includes provisions to lighten the burden on SMEs:
- Systems already legally on the market before August 2026 benefit from a transitional clause — they can remain in service as long as they do not undergo “substantial modification”
- Regulatory sandboxes allow testing of innovative AI systems in a supervised framework
- The European Commission must publish guidelines and practical tools to support SMEs
- Fines are reduced for SMEs: ceilings are calculated proportionally to turnover, with reduced amounts for micro-enterprises and startups
AI Act and GDPR: Two Complementary Regulations
The AI Act does not replace the GDPR — it adds to it. If your AI system processes personal data, you must comply with both texts simultaneously.
Examples of overlap:
- An automated recruitment system must comply with the AI Act (high risk) AND the GDPR (processing of personal data, profiling, automated decisions under Article 22)
- A chatbot that collects personal data must inform the user that they are interacting with an AI (AI Act) AND comply with the GDPR’s transparency and consent obligations
- The use of personal data to train an AI model must comply with the GDPR’s minimization and purpose limitation principles
The right to explanation is strengthened. Article 22 of the GDPR already gives the right not to be subject to a solely automated decision producing legal effects. The AI Act reinforces this right by requiring effective human oversight for all high-risk systems. In practice, an AI system must never make a decision with significant impact on a person without the possibility of human intervention.
Training data governance is another point of convergence. The AI Act requires that data used to train high-risk systems be relevant, sufficiently representative and, to the best extent possible, free of errors and complete. The GDPR requires that the processing of personal data for training purposes rest on a valid legal basis. Both requirements are cumulative.
How DPLIANCE Integrates AI Responsibly
At DPLIANCE, we use AI in our products — notably Mistral, a French and European AI model, integrated into Complio to automate GDPR compliance auditing.
Our approach is guided by three principles:
- Transparency: we clearly indicate when AI intervenes in our products and what it does
- Sovereignty: we prioritize European models (Mistral) and European hosting (Scaleway) so that data never leaves European soil
- Human oversight: AI in Complio assists and accelerates the audit, but does not replace human expertise. Recommendations are always verifiable
Privacy is not a compromise. Neither is sovereignty. And artificial intelligence should not change this equation.
Discover how Complio uses AI to automate your GDPR compliance, and explore our full range of sovereign solutions.
FAQ
Does the AI Act apply to businesses that use AI without developing it?
Yes. The AI Act distinguishes between “providers” (who develop) and “deployers” (who use). Deployers of high-risk systems have specific obligations: human oversight, informing persons concerned, use in accordance with the provider’s instructions. For limited-risk systems (chatbots, content generators), the main obligation is transparency toward the user.
Is my chatbot covered by the AI Act?
Yes, at minimum under transparency obligations (limited risk): you must inform users that they are interacting with an AI. If the chatbot makes decisions affecting people’s rights (e.g., eligibility for a service), it could be classified as high-risk. The analysis depends on the actual use, not the technology itself.
What penalties does the AI Act provide?
Fines vary by severity: up to 35 million euros or 7% of global turnover for prohibited practices, up to 15 million or 3% for other obligations. Reduced amounts are provided for SMEs. The regulation also empowers national supervisory authorities (in France, likely the CNIL, the French data protection authority) to impose corrective measures, market withdrawals, and temporary use bans.
What is the difference between the AI Act and the GDPR?
The GDPR governs the processing of personal data. The AI Act governs artificial intelligence systems themselves — their design, market placement, and use. The two texts are complementary and can apply simultaneously to the same system. In case of conflict, the rules most protective of individuals prevail.
Should I stop using AI in my business?
No. The vast majority of AI uses in business (assistants, automation, data analysis) fall under minimal risk and face no restrictions. The AI Act targets high-risk uses, not AI in general. The regulation’s objective is not to slow innovation, but to ensure that AI is used responsibly and in respect of fundamental rights.
How can I concretely prepare before August 2026?
Start with an inventory of all AI systems used in your business. Classify them according to the regulation’s risk categories. For high-risk systems, contact your suppliers for technical documentation. Train your teams. And document everything: in the event of an inspection, proof of your compliance efforts will be your best asset.
Sources: Regulation (EU) 2024/1689 — AI Act, Service-public.fr — AI Act: what changes for businesses, CNIL (French data protection authority) — Artificial Intelligence, Naaia — AI Act 2026 Timeline. Article updated January 28, 2026.