Back to Blogs
CONTENT
This is some text inside of a div block.
Subscribe to our newsletter
Read about our privacy policy.
Thank you! Your submission has been received!
Oops! Something went wrong while submitting the form.
Industry Trends

The Clock is Ticking: EU AI Act's August 2nd Deadline is Almost Here

Published on
July 8, 2025
4 min read

The European Union's ambitious AI Act has been making headlines for months, but now the rubber meets the road. On August 2nd, 2025, the first major wave of compliance requirements takes effect, marking a pivotal moment for AI companies operating in or serving the European market.

While the February 2025 prohibition on high-risk AI systems grabbed early attention, this summer's deadline is where the real work begins. It's the moment when the EU's regulatory framework transforms from theory to practice, establishing the infrastructure that will govern AI development for years to come.

Think of August 2nd as the day the EU's AI governance machinery officially powers up. From notified bodies getting their certification authority to AI model providers documenting their training data, this deadline establishes the foundational systems that will shape how artificial intelligence is regulated across the continent.

For many AI companies, especially those developing general-purpose models, this isn't just another compliance checkbox, it's a fundamental shift in how they'll need to operate. The question isn't whether your organization will be affected, but how prepared you are for what's coming.

What’s required for the Aug 2nd 2025 Deadline

Provision (chapter / article) Who must comply What has to be in place by 2 Aug 2025
Notified bodies (Chapter III § 4) Conformity-assessment bodies that want to certify high-risk AI systems
  • Be designated by a Member-State and hold legal personality.
  • Meet independence, quality-management, staffing and cyber-security criteria.
  • Carry liability cover and strict confidentiality controls.
GPAI models (Chapter V — Art 53) Every provider of a general-purpose model placed on the EU market (open-source exempt unless later tagged systemic-risk)
  • Maintain an Annex XI technical dossier (architecture, training data provenance, eval results).
  • Hand a “transparency package” to downstream integrators (Annex XII).
  • Publish a training-data source summary and a copyright-compliance policy.
  • Appoint an EU representative if established abroad.
GPAI with systemic risk (Art 55) GPAI models that used ≥10²⁵ FLOPs or separately designated by the Commission
  • Run state-of-the-art model & adversarial testing.
  • File a Union-level systemic-risk mitigation plan.
  • Keep a 24 h serious-incident log & reporting channel.
  • Shore up model & infrastructure cyber-security.
Governance layer (Chapter VII — Arts 64-70) EU institutions & Member States
  • AI Office inside the Commission goes live to oversee GPAI and issue guidance.
  • European AI Board (one delegate / country + EDPS observer) convenes; two standing sub-groups for market surveillance & notified-body coordination.
  • Every Member State names its national competent authority & single contact point.
Confidentiality rule (Art 78) All authorities, notified bodies & anyone handling compliance data
  • May request only data strictly necessary for risk checks.
  • Must protect trade secrets, IP and source code; deploy adequate cyber-security; delete data when no longer needed.
  • No onward disclosure without prior consultation.
Penalty framework (Chapter XII — Arts 99-100)
  • Art 99: private-sector operators
  • Art 100: EU institutions & agencies
Art 99:
  • Up to €35 m / 7% turnover for banned practices (Art 5).
  • Up to €15 m / 3% for other breaches (e.g., GPAI duties).
  • Up to €7.5 m / 1% for false info.
Art 100:
  • EU bodies face fines up to €1.5 m (prohibited practices) or €0.75 m (other infringements).

CBRN and Malware Threats

If the model crosses the 10²⁵ FLOP training bar (or the Commission designates it systemic-risk), Article 55 obliges the model provider to identify, evaluate and actively mitigate systemic risks. Recital 110 lists examples of “systemic risks” that GPAI providers must look for: CBRN misuse, offensive-cyber capabilities, self-replicating models, large-scale disinformation, etc. Therefore, CBRN and offensive-cyber capabilities (malware) should be actively identified, evaluated and mitigated.

This is as of now only applicable to GPAI with systemic risk model providers, not enterprises that might use the models. Enterprises that only use GPAI see no fresh AI-Act paperwork until the high-risk wave in 2026.

This is where comprehensive red teaming becomes essential. Meeting the EU AI Act's systemic risk assessment requirements demands thorough testing across multiple threat vectors – from chemical and biological weapons knowledge to offensive cybersecurity capabilities. Enkrypt AI's comprehensive red teaming suite provides the specialized testing infrastructure needed to identify these risks systematically, helping GPAI providers build the robust evaluation protocols required for Article 55 compliance.

About Enkrypt AI

Enkrypt AI helps companies build and deploy generative AI securely and responsibly. Our platform automatically detects, removes, and monitors risks like hallucinations, privacy leaks, and misuse across every stage of AI development. With tools like industry-specific red teaming, real-time guardrails, and continuous monitoring, Enkrypt AI makes it easier for businesses to adopt AI without worrying about compliance or safety issues. Backed by global standards like OWASP, NIST, and MITRE, we’re trusted by teams in finance, healthcare, tech, and insurance. Simply put, Enkrypt AI gives you the confidence to scale AI safely and stay in control.

Meet the Writer
Nitin Birur
Latest posts

More articles

Industry Trends

Red Team Base and Instruct Models: Two Faces of the Same Threat

Discover why red teaming both base and instruct-tuned AI models is essential. Learn how threat surfaces change, how fine-tuning affects safety, and what enterprises must do to secure LLMs against jailbreaks and vulnerabilities.
Read post
Industry Trends

America’s AI Action Plan: Racing to Stay Ahead

Explore America’s AI Action Plan—2025’s most comprehensive federal strategy to accelerate AI innovation, advance infrastructure, and lead in global AI security. Learn how U.S. companies can leverage tools and platforms to thrive in a fast-evolving AI landscape.
Read post
Product Updates

A Partnership for Responsible AI: Truefoundry and Enkrypt AI

Ensure safe, compliant AI adoption. TrueFoundry and Enkrypt AI deliver unified governance, security, and compliance for generative AI in healthcare and beyond.
Read post