Changes to Robots – How the New Framework Addresses Autonomous Systems – Part 2

02 Sep 2025
When Machines Learn – Compliance Challenges in the Era of High-Risk AI
This post continues the series on how the EU's evolving regulatory framework – especially the new Machinery Regulation and AI Act – is reshaping the way we build, validate, and manage autonomous systems.
In part 1, I touched on how certain adaptive, AI-enabled machines are now classified as “high-risk” under the new EU Machinery Regulation. This reclassification is more than just a label; it fundamentally changes how we, as compliance and safety professionals, approach system design and validation.
One of the most immediate impacts is the shift in the conformity assessment process. In many cases, machines with AI that can influence safety functions are no longer eligible for self-declared CE marking; they now require third-party assessment by a Notified Body. That’s a big operational change. It adds complexity, time, and cost, but more importantly, it introduces the challenge of proving something that’s dynamic: behaviour.
Unlike traditional machines, AI-based systems can evolve based on data. This means their behaviour isn’t always fixed or predictable, which is a real issue for standards that rely on deterministic performance and testable failure modes. In practice, safety engineers are being asked to verify not just how a machine behaves today, but how it might behave under a range of conditions, some of which may not have occurred yet.
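To make that concrete, here is a minimal sketch of what “verifying behaviour across a range of conditions” can look like in practice: sweeping a toy speed controller over a grid of operating conditions and checking that its output never leaves a defined safety envelope. Everything in it – the controller, the envelope limits, the condition ranges – is a hypothetical placeholder, not something prescribed by the regulation or the standards.

```python
import itertools

# Hypothetical stand-in for a learned speed controller: maps
# (obstacle distance in metres, payload in kg) to a commanded speed in m/s.
def commanded_speed(obstacle_distance_m: float, payload_kg: float) -> float:
    base = min(obstacle_distance_m * 0.5, 2.0)           # slow down near obstacles
    return base * (1.0 - min(payload_kg / 200.0, 0.5))   # derate for heavy payloads

# Illustrative safety envelope.
MAX_SPEED = 2.0       # absolute limit in m/s
MAX_SPEED_NEAR = 0.3  # limit when an obstacle is closer than 1 m

def check_safety_envelope() -> list:
    """Sweep a grid of operating conditions and collect any envelope violations."""
    violations = []
    distances = [0.2, 0.5, 1.0, 2.0, 5.0, 10.0]   # metres
    payloads = [0.0, 50.0, 100.0, 150.0, 200.0]   # kilograms
    for d, p in itertools.product(distances, payloads):
        v = commanded_speed(d, p)
        if v > MAX_SPEED:
            violations.append(f"{v:.2f} m/s exceeds {MAX_SPEED} m/s at d={d} m, payload={p} kg")
        if d < 1.0 and v > MAX_SPEED_NEAR:
            violations.append(f"{v:.2f} m/s too fast near obstacle (d={d} m, payload={p} kg)")
    return violations

if __name__ == "__main__":
    for issue in check_safety_envelope():
        print("VIOLATION:", issue)
    print("Sweep complete.")
```

In a real project this kind of sweep would sit in a test pipeline and the envelope would come from the risk assessment; the point is simply that the property being checked is the behaviour, not the implementation.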
And here's the difficult part: we don’t yet have mature tools or harmonised standards that fully support this kind of validation. Existing functional safety frameworks like ISO 13849 and IEC 62061 were not designed for non-deterministic logic. They expect system behaviour to be defined, traceable, and repeatable – which is exactly what AI disrupts.
In my experience, this introduces two distinct challenges. First, there’s a skills gap: traditional safety teams may not have experience in areas like model validation, data bias analysis, or AI system interpretability. Second, there’s a tooling gap: even if we know what we want to test, the infrastructure to simulate and monitor evolving AI behaviour over time is still emerging.
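As an illustration of what that missing tooling might eventually look like, here is a rough sketch of runtime behaviour monitoring: track a rolling window of one model output and flag when it drifts away from the value established during validation. The signal, the baseline, and the thresholds are all invented for the example.

```python
from collections import deque
from statistics import fmean

class BehaviourMonitor:
    """Flag when the rolling average of a monitored signal drifts away from
    the baseline observed during validation. All thresholds are illustrative."""

    def __init__(self, baseline: float, tolerance: float, window: int = 500):
        self.baseline = baseline
        self.tolerance = tolerance
        self.samples = deque(maxlen=window)

    def record(self, value: float) -> bool:
        """Add one observation; return True once the window is full and has drifted."""
        self.samples.append(value)
        if len(self.samples) < self.samples.maxlen:
            return False  # not enough data to judge yet
        return abs(fmean(self.samples) - self.baseline) > self.tolerance

# Example: feed the monitor the commanded speeds seen at runtime.
monitor = BehaviourMonitor(baseline=0.8, tolerance=0.2, window=100)
for speed in [0.8] * 60 + [1.3] * 60:   # behaviour shifts halfway through
    if monitor.record(speed):
        print("Drift detected - escalate for review or fall back to a safe mode")
        break
```

A positive result here is deliberately just a signal; deciding whether it triggers a maintenance ticket, a degraded mode, or an immediate stop is exactly the kind of policy question the new framework forces us to answer.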
Another area that often gets overlooked is maintenance and updates. If a robot receives regular software or model updates (even over the air), how do we ensure each version stays compliant? How do we log, verify, and control changes that could affect safety logic? This is where safety begins to intersect with cybersecurity and configuration management, two areas that historically sat outside the safety team’s scope.
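To show what that intersection with configuration management could look like in code, here is a minimal sketch (the file paths, manifest format, and approval workflow are all hypothetical): the machine computes a hash of the deployed model artifact and refuses to start if that hash is not on a list of versions the safety team has signed off.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def model_is_approved(model_path: Path, manifest_path: Path) -> bool:
    """Check the deployed artifact against a manifest of approved versions.

    The manifest is assumed to be a JSON list of records such as
    {"version": "1.4.2", "sha256": "...", "approved_by": "safety"}.
    """
    manifest = json.loads(manifest_path.read_text())
    actual = sha256_of(model_path)
    return any(entry.get("sha256") == actual for entry in manifest)

if __name__ == "__main__":
    model = Path("models/perception.onnx")           # hypothetical artifact
    manifest = Path("config/approved_models.json")   # hypothetical manifest
    if not model_is_approved(model, manifest):
        raise SystemExit("Model is not on the approved list - refusing to enable safety functions.")
```

The same manifest naturally doubles as the change log the update question is really about: every version that ever ran is traceable to a hash and an approval.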
The AI Act adds another layer. High-risk AI systems must meet requirements for transparency, traceability, and human oversight. For robotics, that may mean rethinking not just how systems behave, but how those behaviours are communicated to, and can be overridden by, operators. You need explainability, not just accuracy – a subtle but crucial difference.
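One way to picture that difference is a gate in front of high-impact actions: the system has to present a human-readable rationale, and an operator has to explicitly approve it, with both the explanation and the decision logged. The sketch below is purely illustrative and not a pattern mandated by the AI Act.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("oversight")

@dataclass
class ProposedAction:
    name: str          # e.g. "enter_shared_workspace"
    confidence: float  # model confidence in [0, 1]
    rationale: str     # human-readable explanation shown to the operator

def operator_approves(action: ProposedAction) -> bool:
    """Placeholder for a real HMI prompt; here we simply ask on the console."""
    answer = input(f"Approve '{action.name}'? Reason given: {action.rationale} [y/N] ")
    return answer.strip().lower() == "y"

def execute_with_oversight(action: ProposedAction, confidence_floor: float = 0.8) -> bool:
    """Run a proposed action only after the oversight checks pass."""
    if action.confidence < confidence_floor:
        log.info("Auto-rejected %s: confidence %.2f below floor", action.name, action.confidence)
        return False
    approved = operator_approves(action)
    log.info("Operator %s '%s' (rationale: %s)",
             "approved" if approved else "rejected", action.name, action.rationale)
    return approved

if __name__ == "__main__":
    proposal = ProposedAction("enter_shared_workspace", 0.92,
                              "No people detected in the zone for the last 30 seconds")
    print("Executed" if execute_with_oversight(proposal) else "Not executed")
```

The rationale string is the part an accuracy metric never captures: it is what lets the operator judge whether the system’s reason for acting is actually a good one.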
In short, we’re being asked to validate systems that learn, evolve, and sometimes surprise us. That’s a far cry from validating an e-stop or a light curtain. It’s a fascinating shift, but also a challenging one.
In the next post, I’ll take a closer look at the cybersecurity dimension and how the new regulations are redefining what “safety” really means for connected, intelligent machines.