12 November 2025

Forget checkboxes: Design the rules so that AI doesn’t run off course

Real AI governance starts with design, not compliance, says CTO Andrew Clark.

When it comes to artificial intelligence, many organisations want to sprint before they’ve learned to walk. Yet too often, AI governance is treated as a mere box-ticking exercise. Andrew Clark, co-founder and CTO of Monitaur, warns that in the race to deploy, firms are skipping the essential steps that make systems resilient.

“You can’t just bolt on validation later,” he told Intelligent Insurer. “Good governance creates great systems – but AI governance is still treated as an afterthought.”

Clark believes companies must view governance as a design discipline, not a compliance burden. “People tend to ask, ‘what do we need to do to be compliant with regulation XYZ?’ But that misses the point,” he said. “Governance isn’t just about compliance – it’s about performance.”

Andrew Clark will join the speaker line-up at Intelligent Insurer’s Agentic and Generative AI for Insurance event on November 18, 2025, to debate the opportunities and challenges that agentic and generative AI present for the industry.

Start small, build strong

The fundamentals of good governance mirror those of resilient systems: clarity of purpose, well-designed processes, and a strong grasp of risk. Clark compared it to athletics.

“Everyone wants to use AI everywhere – and that’s good – but there’s a reason they say couch to 5K, not couch to marathon,” he said. “You must start small and build up the basics. The best in any field are the best at the basics.”

This discipline, he noted, comes from systems engineering – a field that has much to teach AI developers. “Think of civil engineering or NASA during the Apollo programme,” Clark explained. “They built enormously complex systems with limited computing power. They succeeded because they understood how every component fit into the whole. Systems engineering is about breaking big problems into manageable pieces, testing them, and then fitting them back together for seamless end-to-end validation of the entire system.”
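
That decompose, test, and recombine discipline translates directly into code. Below is a minimal Python sketch of the idea – the pipeline, components, and checks are invented for illustration, not drawn from Monitaur or NASA: each piece is validated in isolation, and only then is the assembled system validated end to end.

    # Illustrative sketch: validate components in isolation, then validate
    # the assembled pipeline end to end. Components and checks are hypothetical.

    def clean(record):
        # Component 1: data cleaning strips whitespace and drops empty fields.
        return {k: v.strip() for k, v in record.items() if v}

    def score(record):
        # Component 2: a stand-in "model" that must return a value in [0, 1].
        return min(len(record.get("text", "")) / 100.0, 1.0)

    def pipeline(record):
        # The assembled system: clean, then score.
        return score(clean(record))

    def validate_component(name, check, inputs):
        # Component-level test: every input must satisfy the check.
        passed = all(check(r) for r in inputs)
        print(f"[component] {name}: {'PASS' if passed else 'FAIL'}")
        return passed

    samples = [{"text": "  hello world  "}, {"text": "claim details", "note": ""}]

    ok = validate_component("clean", lambda r: all(v == v.strip() for v in clean(r).values()), samples)
    ok &= validate_component("score", lambda r: 0.0 <= score(clean(r)) <= 1.0, samples)

    # End-to-end validation only means something once each piece holds up on its own.
    if ok:
        assert all(0.0 <= pipeline(r) <= 1.0 for r in samples)
        print("[system] end-to-end validation: PASS")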

The illusion of ease

The accessibility of today’s AI tools has created what Clark called a “false sense of security”.

“Because these systems are easy to use, people assume they just work,” he said. “That’s a double-edged sword – you start trusting them too much.”

Clark also looks to other disciplines – game theory, mechanism design, and control theory among them – for practices he believes can help AI systems develop responsibly. “Game theory helps us understand how players behave; mechanism design is about setting the rules of the game to get the outcomes you want,” Clark said. “Control theory, meanwhile, is about feedback and correction – like cruise control in your car. These principles help us think about how AI can self-regulate and stay aligned with human goals.”
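
The cruise-control analogy maps directly onto a feedback loop: measure an output, compare it with a target, and apply a correction proportional to the error. A minimal Python sketch, where the “approval rate” signal, target, and gain are all invented for illustration:

    # Illustrative proportional feedback loop, in the spirit of cruise control:
    # observe the output, compare it to a target, apply a correction.
    # The approval-rate model, target, and gain are hypothetical.

    TARGET = 0.70  # desired approval rate
    GAIN = 0.5     # how aggressively each error is corrected

    threshold = 0.50  # the knob the controller is allowed to turn

    def observed_approval_rate(threshold):
        # Stand-in for measuring the live system; in practice this comes from logs.
        return 0.95 - threshold  # a higher threshold approves fewer cases

    for step in range(8):
        observed = observed_approval_rate(threshold)
        error = TARGET - observed  # how far off target we are
        threshold -= GAIN * error  # correction: lowering the threshold raises approvals
        print(f"step {step}: approval={observed:.3f}, error={error:+.3f}, threshold={threshold:.3f}")

In this sketch the error halves on every pass, so the system settles at the target instead of oscillating – the same self-correcting behaviour Clark wants AI systems to exhibit.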

Design before deployment

That’s especially important as “agentic” AI systems – those that act on behalf of humans – proliferate. “Telling your company to ‘do more AI’ or ‘use more agents’ isn’t a strategy,” Clark warned. “An agent doesn’t magically know what it should do. You have to design where and how it acts.”
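
One way to read “design where and how it acts” is to give an agent an explicit, pre-agreed action space rather than open-ended freedom. A hypothetical sketch – the action names are invented:

    # Illustrative: an agent constrained to a designed action space.
    # Anything outside the allow-list is refused by construction,
    # rather than relying on the model to decline politely.

    ALLOWED_ACTIONS = {"quote_policy", "summarise_claim", "escalate_to_human"}

    def act(requested_action):
        if requested_action not in ALLOWED_ACTIONS:
            return f"refused: '{requested_action}' is outside the designed scope"
        return f"executing: {requested_action}"

    print(act("summarise_claim"))  # executing: summarise_claim
    print(act("approve_payout"))   # refused: never designed in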

For Clark, embedded governance means building checks and balances from day one: defining risk thresholds, stage gates, and feedback loops that confirm the system behaves as intended. “You can’t fully trust prototypes that have been developed with governance as an afterthought or nice-to-have,” he said. “Proper governance feels slower at first, but it actually gets you to market faster – with better systems.”
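
In code, a stage gate can be as simple as a table of thresholds a candidate must clear before promotion. The stages, metrics, and limits in this sketch are invented; the mechanism is the point – advancement is blocked unless every pre-defined risk threshold is met.

    # Illustrative stage gates: a model advances only when every pre-agreed
    # risk threshold for that stage is satisfied. All numbers are hypothetical.

    GATES = {
        "prototype":  {"min_accuracy": 0.80},
        "pilot":      {"min_accuracy": 0.85, "max_bias_gap": 0.10},
        "production": {"min_accuracy": 0.90, "max_bias_gap": 0.05},
    }

    def passes_gate(stage, metrics):
        limits = GATES[stage]
        ok = metrics["accuracy"] >= limits["min_accuracy"]
        if "max_bias_gap" in limits:
            ok = ok and metrics["bias_gap"] <= limits["max_bias_gap"]
        return ok

    candidate = {"accuracy": 0.87, "bias_gap": 0.08}

    for stage in ("prototype", "pilot", "production"):
        if passes_gate(stage, candidate):
            print(f"{stage}: gate passed")
        else:
            print(f"{stage}: blocked - thresholds not met")
            break  # no promotion past a failed gate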

Beyond monitoring

Even within the engineering process, Clark cautions against confusing monitoring with governance. “Just logging data doesn’t mean you have control,” he said. “If an alert is triggered, do you know what to do? Do you have a process for acting on it? Buying a monitoring tool doesn’t magically give your system governance.”
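
The distinction is easy to make concrete: monitoring raises an alert; governance attaches a pre-agreed response to it. A hypothetical runbook sketch, with invented alert names and actions:

    # Illustrative: monitoring alone raises alerts; governance maps each
    # alert to a pre-defined response. Alerts and actions are hypothetical.

    RUNBOOK = {
        "drift_detected": ["freeze retraining", "page model owner", "open incident"],
        "latency_breach": ["fail over to backup model", "notify on-call"],
        "approval_spike": ["route decisions to manual review", "audit last 24 hours"],
    }

    def handle_alert(alert):
        actions = RUNBOOK.get(alert)
        if actions is None:
            # An alert nobody planned for is itself a governance gap.
            print(f"{alert}: NO DEFINED RESPONSE - escalate and extend the runbook")
            return
        for action in actions:
            print(f"{alert} -> {action}")

    handle_alert("drift_detected")
    handle_alert("unknown_failure_mode")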

That lesson applies equally to human oversight. “People love to say, ‘we have a human in the loop’ – but that’s not risk mitigation if the human never overrules the AI, or has to justify it when they do,” he added.
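
Whether a human in the loop is genuine mitigation is something you can measure. A small illustrative check with invented data: if reviewers never disagree with the model, the “loop” may be a rubber stamp.

    # Illustrative: track how often reviewers actually overrule the model.
    # The decision log below is invented.

    decisions = [
        {"model": "approve", "human": "approve"},
        {"model": "approve", "human": "approve"},
        {"model": "deny",    "human": "deny"},
        {"model": "approve", "human": "deny"},  # a genuine override
        {"model": "deny",    "human": "deny"},
    ]

    overrides = sum(d["model"] != d["human"] for d in decisions)
    rate = overrides / len(decisions)

    print(f"override rate: {rate:.0%}")
    if rate == 0:
        print("warning: the human never overrules the model - oversight may be nominal")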

True governance, he emphasised, “is about having the right processes in place to mitigate risk – not just ticking boxes.”

Governance as a creative framework

As AI development becomes increasingly interdisciplinary, collaboration between research, product, and risk teams is vital – but too often, these groups operate in silos. “If you leave governance until the end, it becomes a checkbox exercise that annoys everyone,” Clark said. “But if you embed it early, it makes your systems perform better, go to market faster, and build trust.”

“Soft skills are the hard skills now,” he added. “You need synergy between research, product, and engineering – and governance helps orchestrate that.”

Ultimately, Clark sees AI governance not as bureaucracy, but as a creative, strategic framework for value creation. “It’s about coordinating all the activities that produce the most value-added, risk-mitigated systems for the business,” he concluded.
