
Artificial Intelligence (AI) Act
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 (AI Act) establishes harmonised rules for the development, placing on the market and use of AI in the EU. The rules are in force and apply in phases, with most obligations for "high-risk" systems taking effect in August 2026.
The AI Act is risk-based: the higher the risk to citizens or the economy, the more detailed the obligations for developers, importers and users of AI systems.
Given the phased application of the regulation and rising market expectations, now is the time to elevate AI governance. It is increasingly not only a legal obligation under the AI Act, but also an expected element of corporate governance and evidence of good practice in business relationships, even beyond formal legal duties.
AI Act – comprehensive support from Cybernite
Risk categories under the AI Act
- Prohibited practices – e.g. real-time remote biometric identification in publicly accessible spaces, subliminal manipulation, "social scoring"
- High-risk systems – e.g. biometrics in banking, recruitment algorithms, AI controlling critical infrastructure or medical diagnostics
- Limited-risk systems – e.g. customer service chatbots, image generators (transparency requirements such as informing users they're interacting with AI)
- Minimal risk – e.g. anti-spam filters, where no legal duties apply and only good practice is recommended
Who falls under the AI Act?
- Developers (producers) of AI models and applications
- Importers/distributors placing AI systems on the EU market
- Professional users – banks, hospitals, public bodies, industrial firms operating AI models in their processes
- IT providers (integrators, cloud) where a high-risk system runs under their control
A regulated client (e.g. a bank or a government office) may classify your organisation as a "key supplier of high-risk AI" and contractually require full compliance, even if your organisation is not formally in scope of the AI Act.
Main obligations for high-risk systems
- AI risk management framework
- Technical documentation and model change log
- High-quality, clean training data
- Transparency and auditability (traceability)
- Human oversight of algorithmic decisions
- Cybersecurity testing of models
- Registration of systems in the EU high-risk database
Penalties for non-compliance with the AI Act
- Breach of prohibited practices: up to €35 million or 7% of global annual turnover, whichever is higher
- Breach of requirements for high-risk systems: up to €15 million or 3% of global annual turnover, whichever is higher
- Supplying incorrect or incomplete information to authorities: up to €7.5 million or 1% of global annual turnover, whichever is higher
Sanctions are intended to be "effective, proportionate and dissuasive", and national authorities (in Poland, UODO and the Ministry of Digitalisation) are empowered to impose them.
How we help implement the AI Act
- We deliver end-to-end AI governance – covering risk audits, policies, procedures and model registers
- We audit ML models and processes, identifying compliance gaps
- We support the creation of technical documentation and conformity assessment reports for the EU register
- We design secure architectures for model deployment in cloud and on-premises
- We train boards, legal teams and data science units on practical AI Act requirements
- We develop security test scenarios for models (adversarial attacks, model theft, prompt hacking)
