    How to prepare for conformity assessments related to the EU AI Act

    A panel explains the EU AI Act’s process for assessing high-risk AI systems through conformity checks.

    With 2026 fast approaching, businesses that work with artificial intelligence (AI) and trade in Europe have just a few months to prepare for a major requirement of the EU AI Act.

    Set to take effect in two stages, in August 2026 and August 2027, conformity assessments were the subject of a special panel session at ICAEW’s first-ever AI Assurance Conference, held earlier this year at Chartered Accountants’ Hall.

    In the talk – chaired by Deloitte Associate Director, Banking and Capital Markets, Roger Smith – experts from the legal profession, tech industry and certification field shed light on which types of technologies the assessments will apply to. The speakers also set out some background details on why the assessments are such an important part of the Act.

    Pyramid of risks

    Ranked by Legal 500 as a leading expert on AI law, Simmons & Simmons Managing Associate William Dunning explained that conformity assessments are designed to regulate AI systems – in other words, AI that is integrated into something else. That may mean a piece of software, such as a web browser, or a piece of hardware, such as a robot arm.

    Dunning noted that conformity assessments form one pillar of the Act’s risk-based approach, which resembles a pyramid. At the top, the EU places prohibited AI systems, such as certain biometric tools that could be used to exploit people’s vulnerabilities. As those tools are deemed to pose unacceptable risks, they are essentially banned. Along the pyramid’s base are lower-risk systems, which are regulated comparatively lightly.

    Conformity assessments (or self-assessment) apply in the middle of the pyramid, which comprises the systems the EU considers high risk. “There are two categories here,” Dunning explained. “First, there are AI systems that play a safety role in risk-sensitive products, such as cars, medical devices or pieces of machinery. Second, there are systems used in particular social contexts, such as recruitment, HR or education.”
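
    As a rough illustration of that pyramid, the minimal Python sketch below tags a system by tier before asking whether a conformity assessment applies. The tier names and trigger flags are simplifications of our own, not the Act’s legal definitions.

        # Illustrative sketch only: tier names and trigger flags are simplified
        # assumptions, not the Act's legal categories or tests.
        from dataclasses import dataclass
        from enum import Enum

        class RiskTier(Enum):
            PROHIBITED = "prohibited"    # top of the pyramid: effectively banned
            HIGH_RISK = "high_risk"      # middle: conformity assessment applies
            LOWER_RISK = "lower_risk"    # base: comparatively light regulation

        @dataclass
        class AISystem:
            name: str
            exploits_vulnerabilities: bool   # e.g. certain biometric tools
            safety_component: bool           # safety role in cars, medical devices, machinery
            sensitive_social_context: bool   # recruitment, HR, education

        def classify(system: AISystem) -> RiskTier:
            """Map a system onto the pyramid using the three flags above."""
            if system.exploits_vulnerabilities:
                return RiskTier.PROHIBITED
            if system.safety_component or system.sensitive_social_context:
                return RiskTier.HIGH_RISK
            return RiskTier.LOWER_RISK

        print(classify(AISystem("CV screener", False, False, True)).value)  # high_risk

    In practice the legal tests are far more nuanced; the point is simply that the three tiers carry very different obligations.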

    In terms of the compliance practicalities that businesses face, Dunning noted: “There’s a wide range of both substantive and procedural requirements for high-risk systems. In the former, companies must demonstrate evidence of risk management, data governance, use of specific types of documentation, transparency, human oversight, cyber security, accuracy, robustness and quality management.

    “On the procedural side is where the conformity assessment comes in, evaluating systems in relation to those evidence categories. If a system passes, it receives a Conformité Européenne (CE) kitemark.”

    Finally, Dunning noted, in parallel with those obligations, the system must be formally registered. “All told,” he said, “it’s quite a big bundle of requirements.”
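
    To picture that bundle, the short sketch below treats Dunning’s substantive evidence categories as a checklist to clear before the conformity assessment, CE marking and registration steps. The category keys and the pass/fail logic are illustrative simplifications, not the Act’s wording or its actual assessment procedure.

        # Illustrative checklist: category keys mirror the list quoted above;
        # the pass/fail logic is a simplification, not the Act's procedure.
        SUBSTANTIVE_REQUIREMENTS = [
            "risk_management",
            "data_governance",
            "documentation",
            "transparency",
            "human_oversight",
            "cyber_security",
            "accuracy",
            "robustness",
            "quality_management",
        ]

        def missing_evidence(evidence):
            """Return the categories not yet evidenced; all must be in place
            before conformity assessment, CE marking and registration."""
            return [req for req in SUBSTANTIVE_REQUIREMENTS
                    if not evidence.get(req, False)]

        print(missing_evidence({"risk_management": True, "data_governance": True}))
        # ['documentation', 'transparency', 'human_oversight', ...]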

    Gold standard

    Tim McGarr – AI Market Development Lead (Regulatory Services) at certification specialists BSI – explained that the conformity assessment’s evaluation procedure derives largely from product testing regulation, which is rooted in principles of impartiality. In that fashion, it feeds into one of the EU’s main ambitions behind the Act: to create a pioneering piece of legislation that will serve as a benchmark for other jurisdictions.

    “Just as GDPR provided a gold standard for how to legislate for data protection,” he said, “the Act sets out to do the same for AI. If we look at how the conformity assessment could help raise awareness of the need to apply best practices, it depends what sector you’re in. Manufacturers of products such as medical devices have been used to this level of scrutiny for many years, and simply understand that that’s how regulation works. However, people with more of a tech background are not used to being regulated in this way at all – so the Act represents new territory for them.”

    Pauline Norstrom – CEO of consultancy Anekanta AI – explored further reasons behind the breadth of the assessment categories. In particular, she highlighted the child welfare scandal that hit the Netherlands in 2018, when it emerged that authorities had used an algorithm to detect benefits fraud. Working from a set of risk indicators, the system had overwhelmingly targeted minority-ethnic families. As a result, tens of thousands of low-income parents and caregivers were wrongly accused of fraud – and more than 1,000 children wrongly sent into foster care. In 2022, the scandal was discussed in the European Parliament, with more details available on the Parliament’s website.

    “Look at Amnesty International’s report on the scandal, Xenophobic Machines,” Norstrom said. “It’s essentially a playbook on how to innovate without adequate evidence, then allow the resulting system – which wasn’t ChatGPT, just a regular machine-learning algorithm – to carry on for years before its negative impacts were felt.”

    Dealing with complexity

    Turning to how companies should get in shape for the effective date of conformity assessments, Norstrom said: “Organisations should look at their AI setup – not just what’s in play, but what they’re planning to buy, too – and ask key questions about the capabilities of each system to gauge its risk level. For example, is it autonomous? Can it adapt? And can you pin down an explanation for the decisions it makes?”

    Dunning concurred. “Get an inventory of all the AI you actually have, because until you’ve done that, you can’t begin to scope it out and determine what’s regulated and how. What my firm is seeing with the largest organisations – some of which have hundreds of AI systems or models – is that they’re starting to put together that inventory. Once that’s in hand, you can examine your systems in the context of the Act and start to deal with that complexity.”
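
    A minimal sketch of that combined advice, pairing an inventory entry per system with Norstrom’s three screening questions on autonomy, adaptivity and explainability, might look like the following. The field names and the flagging rule are illustrative assumptions, not a prescribed methodology.

        # Minimal sketch of an AI inventory with a simple screening pass.
        # Field names and the flagging rule are illustrative assumptions.
        from dataclasses import dataclass

        @dataclass
        class InventoryEntry:
            name: str
            owner: str
            purpose: str           # e.g. "CV sorting", "chatbot", "forecasting"
            is_autonomous: bool    # does it act without a human in the loop?
            can_adapt: bool        # does it keep learning after deployment?
            is_explainable: bool   # can its decisions be pinned down and explained?

        def needs_closer_review(entry):
            """Flag entries whose answers suggest a higher risk level and
            therefore earlier scoping against the Act."""
            return entry.is_autonomous or entry.can_adapt or not entry.is_explainable

        inventory = [
            InventoryEntry("CV screener", "HR", "CV sorting", False, True, False),
            InventoryEntry("Invoice OCR", "Finance", "document capture", False, False, True),
        ]
        for entry in inventory:
            print(entry.name, "-> review" if needs_closer_review(entry) else "-> lower priority")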

    However, Dunning cautioned: “When the Act’s rules around high-risk systems were developed, ChatGPT wasn’t around. So, this regulation aimed at specific types of systems for, say, sorting CVs or controlling pieces of critical infrastructure doesn’t map very neatly on to a lot of the technologies that are out there now. For example, what happens when you’ve got a system based on ChatGPT or an equivalent that does literally thousands of things all at once? If a business decides to use ChatGPT to quickly sort a batch of CVs, is that tool subject to the whole suite of product safety regulation? That’s a really big gap in the Act.”