EU AI Act

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive AI law and applies to all companies that develop, deploy, or market AI systems in the EU. The law classifies AI systems by risk level and defines corresponding transparency, documentation, and safety obligations. For web developers and SMEs using AI agents or LLM-based features, the EU AI Act becomes incrementally relevant from 2025 onward.

Risk classes and obligations in the EU AI Act

The EU AI Act divides AI systems into four risk classes: Unacceptable Risk (prohibited: social scoring, real-time biometric surveillance in public spaces), High Risk (strict requirements: HR software, credit decisions, biometric systems), Limited Risk (transparency obligations: chatbots must identify themselves as AI), and Minimal Risk (no obligations: spam filters, AI games). Most SME-relevant applications — AI chatbots, recommendation systems, text generators — fall into the Limited Risk class with transparency obligations.
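The four-tier taxonomy above can be sketched as a simple lookup for a project checklist. This is an illustrative assumption, not legal advice: the use-case names, the mapping, and the obligation strings are hypothetical examples chosen to mirror the classes described in the text.

```typescript
// The four EU AI Act risk classes as a discriminated string union.
type RiskClass = "unacceptable" | "high" | "limited" | "minimal";

// Illustrative mapping of example use cases to risk classes,
// following the examples given in the article (not a legal determination).
const riskByUseCase: Record<string, RiskClass> = {
  "social-scoring": "unacceptable",
  "hr-screening": "high",
  "credit-decision": "high",
  "customer-chatbot": "limited",
  "text-generator": "limited",
  "spam-filter": "minimal",
};

// Simplified obligations triggered per risk class.
function obligationsFor(risk: RiskClass): string[] {
  switch (risk) {
    case "unacceptable":
      return ["prohibited - do not deploy"];
    case "high":
      return ["conformity assessment", "risk management", "documentation"];
    case "limited":
      return ["transparency: disclose AI use", "label AI-generated content"];
    case "minimal":
      return [];
  }
}
```

For a typical SME chatbot, `riskByUseCase["customer-chatbot"]` resolves to `"limited"`, which surfaces the transparency obligations discussed above.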

Timeline and SME accommodations

The EU AI Act applies gradually: prohibited practices from February 2025, most provisions including high-risk system requirements from August 2026, and an extended transition until August 2027 for high-risk AI embedded in already regulated products. SMEs benefit from simplified documentation requirements and regulatory sandbox provisions. Relevant for everyone: AI-generated content (texts, images) must be labeled as such. This adds an AI dimension to existing GDPR compliance requirements.
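The labeling obligation above can be handled at render time with a small helper. A minimal sketch, assuming a hypothetical `GeneratedContent` shape and disclosure wording of our own choosing; this is illustration, not a legally vetted template.

```typescript
// Hypothetical shape for content produced in a web app,
// flagging whether an AI model generated it.
interface GeneratedContent {
  body: string;
  aiGenerated: boolean;
}

// Append a visible AI disclosure before the content is rendered;
// human-written content passes through unchanged.
function withAiLabel(content: GeneratedContent): string {
  if (!content.aiGenerated) {
    return content.body;
  }
  return `${content.body}\n\n[This content was generated with AI assistance.]`;
}
```

Centralizing the disclosure in one function keeps the labeling consistent across chatbot replies, generated summaries, and images' alt text.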

How we address EU AI Act compliance at BTECH

BTECH Solutions actively monitors EU AI Act developments. For all AI-based features in client projects — chatbots, automated analysis, AI agent integrations — transparency is the standard: AI-generated content is clearly labeled, and user data is processed according to data minimization principles. EU AI Act compliance is evaluated as a selection criterion for new project technologies.