Patricia Alfheim
February 10, 2025

The EU AI Act: What enterprises need to know

On February 2, 2025, the first provisions of the EU AI Act came into effect, marking a major shift in how artificial intelligence is governed across Europe. This is the world’s first comprehensive AI law, setting a precedent for how businesses must handle AI data, transparency, and risk management.

For enterprises leveraging AI, the message is clear: compliance is no longer optional. The AI Act introduces strict prohibitions, new governance expectations, and significant penalties for violations. Organizations must act now to assess their AI systems, adjust governance strategies, and prepare for what’s coming next.

What changed on February 2, 2025?

The AI Act follows a risk-based approach, categorizing AI systems based on their potential impact on people and society. While full compliance requirements will roll out over the next two years, certain AI applications are now outright banned:

  • Social Scoring Systems – Enterprises cannot use AI to evaluate individuals based on their behavior, socioeconomic status, or predicted trustworthiness.
  • Emotion Recognition in Sensitive Environments – AI systems that analyze emotions in workplaces, schools, and public services are now prohibited.
  • Manipulative or Deceptive AI – AI models that use subliminal techniques to manipulate behavior in harmful ways are banned.
  • Unregulated Biometric Identification in Public Spaces – Real-time facial recognition and biometric tracking are now restricted to specific law enforcement use cases, requiring judicial approval.

For organizations relying on these AI capabilities, compliance means immediately discontinuing non-compliant AI models and reassessing AI deployment strategies.

Beyond these prohibitions, the AI Act also introduces AI literacy requirements: businesses must ensure that employees who interact with AI understand its risks, limitations, and compliance obligations.

What enterprises must do now

For organizations operating in Europe, or any business deploying AI that interacts with the EU market, proactive AI governance is now critical. Enterprises should focus on three key areas:

  1. Audit AI systems for compliance risks

Organizations must assess their AI systems to determine whether they fall under:

  • Banned AI (must be discontinued immediately)
  • High-Risk AI (will require strict governance by 2026)
  • General-Purpose AI (transparency requirements coming August 2025)

Key questions to ask (a rough triage sketch follows this checklist):

  • Do we use AI in decision-making that could impact individuals' rights?
  • Can we prove where our AI data comes from?
  • Are we using any biometric, emotion recognition, or behavioral prediction models?
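
As a starting point for this kind of audit, here is a minimal sketch of how an internal AI inventory could be triaged into review buckets. Everything in it is illustrative: the RiskTier values, the capability flags, and the AISystem fields are hypothetical names, the prohibited-capability list is heavily simplified, and real classification under the Act requires legal review.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    BANNED = "banned"          # prohibited since February 2, 2025
    HIGH_RISK = "high_risk"    # strict governance expected by 2026
    GPAI = "general_purpose"   # transparency duties from August 2, 2025
    MINIMAL = "minimal"


# Illustrative shorthand for the banned practices above, not a legal test.
PROHIBITED_CAPABILITIES = {
    "social_scoring",
    "emotion_recognition_workplace",
    "subliminal_manipulation",
    "realtime_public_biometrics",
}


@dataclass
class AISystem:
    name: str
    capabilities: set[str] = field(default_factory=set)
    is_general_purpose: bool = False
    affects_individual_rights: bool = False


def classify(system: AISystem) -> RiskTier:
    """Rough triage of one inventoried AI system into a review bucket."""
    if system.capabilities & PROHIBITED_CAPABILITIES:
        return RiskTier.BANNED
    if system.affects_individual_rights:
        return RiskTier.HIGH_RISK
    if system.is_general_purpose:
        return RiskTier.GPAI
    return RiskTier.MINIMAL


if __name__ == "__main__":
    inventory = [
        AISystem("hr-screening", {"cv_ranking"}, affects_individual_rights=True),
        AISystem("support-chatbot", {"text_generation"}, is_general_purpose=True),
    ]
    for system in inventory:
        print(f"{system.name}: {classify(system).value}")
```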

  2. Enhance AI data transparency and trust

A major theme of the AI Act is ensuring AI decisions are explainable, accountable, and auditable. Enterprises must document how their AI models are trained, what data is used, and what risks exist. This means:

  • Tracking data lineage – Knowing where your AI data comes from and how it’s used.
  • Implementing AI trust scoring – Ensuring AI models meet accuracy and fairness benchmarks.
  • Applying access controls – Managing who can use and modify AI systems, and ensuring that AI systems or agents are themselves subject to access controls (i.e. which data sets, systems, or applications they can access); see the sketch after this list.
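
To make those practices concrete, here is a minimal sketch of what a lineage record and a deny-by-default access check for AI agents might look like. The LineageRecord fields, the AGENT_DATASET_POLICY table, and the agent and dataset names are all hypothetical; a real deployment would back them with an actual data catalog and identity system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class LineageRecord:
    """Minimal provenance entry for a dataset that feeds an AI model."""
    dataset: str
    source_system: str
    collected_at: datetime
    legal_basis: str  # e.g. "contract", "consent", "legitimate interest"


# Hypothetical policy table: which datasets each AI agent may read.
AGENT_DATASET_POLICY = {
    "support-chatbot": {"public_docs", "product_faq"},
    "hr-screening": {"anonymised_hiring_stats"},
}


def can_access(agent: str, dataset: str) -> bool:
    """Deny by default: an agent only reads datasets it was explicitly granted."""
    return dataset in AGENT_DATASET_POLICY.get(agent, set())


if __name__ == "__main__":
    record = LineageRecord(
        dataset="product_faq",
        source_system="knowledge-base",
        collected_at=datetime(2024, 11, 1, tzinfo=timezone.utc),
        legal_basis="legitimate interest",
    )
    print(record)
    print(can_access("support-chatbot", "product_faq"))   # True
    print(can_access("support-chatbot", "customer_pii"))  # False
```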

  3. Prepare for what’s coming in August 2025

The next major milestone in the AI Act is August 2, 2025, when new rules for General-Purpose AI (GPAI) models take effect. Enterprises using or developing large-scale AI models, language models, or generative AI will need to:

  • Disclose training data sources – Transparency will be required for models like LLMs, ensuring businesses can verify what data their AI is built on.
  • Implement risk mitigation measures – Enterprises must assess potential AI risks, including bias, misinformation, and misuse.
  • Apply model governance frameworks – AI models must be designed with safety and security controls (a minimal record-keeping sketch follows this list).
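
One way to get ahead of these obligations is to keep a governance record per model and flag disclosure gaps before the deadline. The sketch below is a minimal illustration under that assumption; the ModelGovernanceRecord fields and the example values are hypothetical, not a format prescribed by the Act.

```python
from dataclasses import dataclass, field


@dataclass
class ModelGovernanceRecord:
    """Illustrative metadata an enterprise might keep for each GPAI model it uses."""
    model_name: str
    provider: str
    training_data_sources: list[str] = field(default_factory=list)
    identified_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

    def missing_disclosures(self) -> list[str]:
        """Flag empty fields ahead of the August 2, 2025 transparency duties."""
        gaps = []
        if not self.training_data_sources:
            gaps.append("training_data_sources")
        if not self.identified_risks:
            gaps.append("identified_risks")
        if not self.mitigations:
            gaps.append("mitigations")
        return gaps


if __name__ == "__main__":
    record = ModelGovernanceRecord(
        model_name="internal-summariser",
        provider="example-vendor",
        training_data_sources=["licensed news corpus", "filtered public web crawl"],
    )
    print(record.missing_disclosures())  # ['identified_risks', 'mitigations']
```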

Companies relying on AI for automation, decision-making, or customer interactions should start preparing now by implementing AI governance frameworks and compliance-ready workflows.

Lead with AI you can trust - and win

The EU AI Act is more than just a regulatory hurdle—it’s an opportunity for enterprises to build more transparent, accountable, and trustworthy AI systems. Trustworthy AI leads to better decision-making, faster actions, and accelerated growth—giving organizations a competitive edge as AI adoption grows.

Businesses that invest in AI governance, data provenance tracking, and risk mitigation now will also be ahead of the curve as compliance requirements become more stringent in 2026.

Learn how to get started here
