Tue, Feb 11, 2025

A Phased Approach: Thoughts on EU AI Act Readiness

The European Union’s (EU) AI Act (the Act) is landmark artificial intelligence (AI) regulation designed to promote trustworthy AI by focusing on impacts on people and requiring mitigation of potential risks to health, safety and fundamental rights. The Act introduces a comprehensive and often complex framework for the development, deployment and use of AI systems, affecting a wide range of businesses across the globe. It will be implemented in phases, the first of which takes effect in February 2025.


A Tiered System of Risk: Where Does Your AI Fit?

The AI Act categorizes AI systems based on their risk levels: unacceptable, high, limited and minimal risk. The most important first step you can take toward compliance is knowing your AI deployments and assessing how the Act categorizes their risk.
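To make that first step concrete, the short sketch below models the Act’s four risk tiers and a single inventory entry in Python. It is purely illustrative: the tier names come from the Act, but the class names, fields and example system are hypothetical assumptions, not anything the Act prescribes.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The four risk levels the EU AI Act uses to categorize AI systems."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # heavily regulated systems
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated


@dataclass
class AIInventoryEntry:
    """One row in a hypothetical internal AI system inventory."""
    name: str
    description: str  # brief description of what the system does
    owner: str        # team accountable for the system
    risk_tier: RiskTier


# Example: recording a deployment during an inventory review
entry = AIInventoryEntry(
    name="resume-screening-model",
    description="Ranks job applicants' resumes for recruiters",
    owner="HR Technology",
    risk_tier=RiskTier.HIGH,  # employment use cases are generally high risk
)
```

Even a spreadsheet with the same fields serves the purpose; what matters is that every deployment has a name, a description, an owner and an assessed risk tier.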


Readiness Steps for Businesses Before February 2025

Start by building a comprehensive inventory of all AI systems deployed or in use by your company, including a brief description of what each does. This inventory allows you to take the following steps toward compliance:

  • End Prohibited AI Systems Development, Deployment and Use

If your use case and systems inventory reveals prohibited systems, act immediately. Actions could include stopping all further deployment or use of those systems, assessing each system to understand which components or outcomes are prohibited, and identifying other potential mitigation steps.

  • Develop Risk-based AI Literacy and Training

The Act says very little about the format or substance of the training it requires; Article 4, the AI literacy provision, is a single sentence: “Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”

The literacy and training program your organization ultimately deploys will be much more focused and defensible if it is built and communicated around the actual AI systems in your inventory. This approach lets you provide meaningful business and regulatory context for your training, tailor it to your users and employees, and update it as circumstances and business needs change.

If You Stay Ready, You Won’t Have to Get Ready: What’s Next?

You’ve determined that you don’t have any prohibited AI systems, and you’ve created and deployed risk-based AI literacy and training for your company. Now what? Using your inventory as a starting point, develop a road map for compliance with the portions of the Act that take effect in 2026, and begin conversations now about how your company will comply with Article 6(1), arguably the Act’s most complex provision. Between now and August 2, 2027, organizations should consider how they will keep their AI inventory current and maintain a clear understanding of what constitutes a high-risk AI system.

A well-run AI program has an intake, analysis and approval process for new AI systems or use cases, with complete documentation for any high-risk systems. Take steps now to design that program and process so compliance becomes second nature and creates as little disruption as possible.
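As a loose illustration of such an intake and approval flow, the hypothetical sketch below models a use-case record whose approval is blocked until high-risk documentation is attached. The stages, fields and approval rule are assumptions for illustration, not requirements drawn from the Act.

```python
from dataclasses import dataclass, field
from enum import Enum


class IntakeStage(Enum):
    """Hypothetical stages a proposed AI use case passes through."""
    SUBMITTED = "submitted"
    UNDER_ANALYSIS = "under_analysis"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class AIUseCaseIntake:
    """A proposed AI system or use case moving through review."""
    title: str
    requester: str
    high_risk: bool = False
    documentation: list[str] = field(default_factory=list)
    stage: IntakeStage = IntakeStage.SUBMITTED

    def approve(self) -> None:
        """Refuse to approve a high-risk use case that lacks documentation."""
        if self.high_risk and not self.documentation:
            raise ValueError(
                "High-risk use case needs complete documentation before approval"
            )
        self.stage = IntakeStage.APPROVED


# Example: a high-risk request cannot be approved until documents are attached
request = AIUseCaseIntake(
    title="biometric access control", requester="Facilities", high_risk=True
)
request.documentation.append("conformity-assessment-plan.pdf")
request.approve()  # succeeds now that documentation is on file
```

Building the documentation check into the process itself, rather than relying on reviewers to remember it, is what makes compliance feel like second nature rather than a disruption.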

This preparation should include a plan for all high-risk AI systems to undergo the necessary conformity assessments. Depending on the particular high-risk AI system, the assessment may be performed internally, although the Act requires that a notified body perform some conformity assessments. Maintaining detailed documentation and implementing robust AI governance practices are also required steps toward compliance.

Getting Started with Your AI Program

The EU AI Act marks a significant step toward ensuring that AI technologies are developed and deployed responsibly. With a clear understanding of your business and its use of AI, of the risks as the Act defines them, and of your plan to comply, you can not only ease the compliance burden but also build trust with customers and stakeholders. Start now and plan for the process over the coming years.

With certified AI governance professionals and decades of experience in helping organizations implement security and risk management programs, Kroll can help design and implement compliant, trustworthy, resilient and secure AI governance programs and systems. Organizations at any stage of their AI adoption maturity can benefit from our AI risk management services, from assessing your compliance with AI laws and regulations or helping build a strategy, to validating AI models, testing for security weaknesses or providing AI risk monitoring services. 

