EU AI Act (2025) - Key Takeaways

Key takeaways from the EU AI Act (2025), the world’s first comprehensive regulatory framework for artificial intelligence:

Purpose and Scope

  • Goal: To ensure AI in the EU is safe, transparent, and respects fundamental rights, while fostering innovation.
  • Applies to: All AI systems that impact people in the EU, regardless of where they are developed or deployed.

Risk-based Classification

AI systems are categorised into four risk levels:

  • Unacceptable Risk – Banned outright (e.g., social scoring, real-time remote biometric identification in public spaces, subject to narrow law-enforcement exceptions)
  • High Risk – Subject to strict obligations (e.g., in healthcare, finance, education, law enforcement)
  • Limited Risk – Must provide transparency (e.g., chatbots must disclose they are AI)
  • Minimal Risk – No restrictions (e.g., spam filters, video game AI)

Key Requirements

  • Transparency & Accountability: High-risk AI must be explainable, auditable, and designed to mitigate bias.
  • Human Oversight: Critical decisions must involve human judgment.
  • Data Governance: High-quality, representative data must be used to train AI systems.
  • Documentation & Monitoring: Developers must maintain logs, undergo audits, and meet EU reporting standards.

Compliance Timeline

  • In force: From 1 August 2024.
  • Phased rollout: Bans on prohibited AI practices apply from February 2025, obligations for general-purpose AI models from August 2025, and most remaining provisions from August 2026, with full implementation (including legacy large-scale systems) by 2030.

Penalties

  • Non-compliance can result in fines of up to €35 million or 7% of global annual revenue, whichever is higher.

Strategic Goals

  • Reduce EU dependence on non-European AI technologies.
  • Set a global benchmark for responsible AI, similar to how the GDPR shaped global data privacy norms.