Most AI MedTech companies are doing the right things, just in a way regulators will not recognise.
Internally, everything feels controlled; externally, it will be judged by a very different standard. That gap widens fastest in companies moving quickly, led by capable teams who assume alignment will catch up later.
But it rarely does.
How Sensible Decisions Accumulate Risk
Inside the organisation, the logic makes sense. The AI improves, clinicians find it useful, adoption grows, and nothing appears unsafe. Decisions are made by thoughtful people who understand the product, the context, and the stakes. There are meetings, documents, and a general sense that governance exists, even if it does not always slow things down.
From the inside, this feels like responsible progress.
The problem is not what is being done, but where the control actually lives. Much of it sits in shared understanding, judgement calls, and conversations between people who are close to the work. That works remarkably well while the same people remain in the room and the context is fresh.
The difficulty appears later, when the context is stripped away.
The Moment Judgement Is Tested
Imagine a small, sensible improvement – a tweak to how the system summarises a consultation. The output becomes clearer, clinicians prefer it, and no one believes it alters what the product is for. The change is discussed, agreed, and released without fuss.
A few weeks later, another refinement follows, and then another. Each one improves the experience and feels proportionate. Nothing about this feels reckless.
Six months on, a question arrives from outside the company. It might be a procurement review, an investor diligence request, or the early stages of regulatory scrutiny. The question is simple enough: how do you control changes to the AI, and how do you assess their impact on risk?
At that point, the answers still exist, but they are no longer in one place. One person checks tickets, whilst someone else looks through commit history. Another person tries to remember how a particular change was framed at the time. The judgement was real, but it lives across people, systems, and memory.
What felt like control now looks like reconstruction.
This distinction matters more than most teams expect. Reconstruction relies on goodwill and explanation. Control relies on evidence that stands on its own. Regulators, auditors, and diligence teams are trained to tell the difference very quickly, especially when the system in question is influencing real clinical workflows.
Why the Gap Rarely Closes by Itself
This is where many otherwise well-run AI MedTech companies find themselves slightly wrong-footed. Not because they cut corners, but because they optimised for building something that worked, before fully designing for how that work would be examined under stress.
The deeper issue is that two different worlds are at play. Inside the company, teams optimise for speed, performance, and usefulness. Outside, the world optimises for traceability, intent, and demonstrable control. Both perspectives are rational and both are reasonable. But alignment between them does not happen by accident.
It has to be designed.
Waiting feels sensible. There is always a feature to ship, a pilot to support, a market to enter. Formalising everything too early feels heavy, even conservative. The unspoken belief is that this can be tidied up later, once the product and the business are further along.
The uncomfortable truth is that later is usually harder. The more the system evolves, the more expensive it becomes to re-establish a clean line between how the AI changes, how risk is assessed, and how that story would be defended if challenged. What began as pragmatism slowly turns into fragility.
This tension is rarely owned by one person. It tends to sit quietly between leadership roles. Technical teams feel it when they sense that good engineering decisions may not be legible to outsiders. Quality and regulatory leaders feel it when documentation lags reality. Founders feel it when confidence about progress coexists with a nagging uncertainty about scrutiny.
No one is failing, and no one is being reckless, but the gap remains.
The encouraging news is that this is not a moral problem or a talent problem. It is an architectural one. Companies that recognise it early can design their way out of it, creating governance that supports speed rather than suffocating it, and systems that remain intelligible even when examined months or years later by someone who was never in the room.
That work is rarely done alone. It usually requires stepping outside the internal logic of the company and looking at the system as it will be judged, not as it currently feels. When done well, it becomes a source of confidence rather than constraint, allowing leaders to move faster precisely because they know the foundations will hold.
The companies that struggle most are not the ones moving quickly, but the ones who assume that speed and future scrutiny will eventually reconcile themselves.
They rarely do.
#MedTechLeadership #AIGovernance #BoardroomDecisions #RegulatedAI #AIinHealthcare
FAQ
“Are you saying we’re doing something wrong?”
No. Most teams I see are thoughtful, capable, and acting in good faith. The issue isn't behaviour; it's interpretation. What feels controlled and proportionate inside the organisation is not always legible when examined later by people whose job is to assume nothing and test everything.
“Isn’t this just a matter of adding more process?”
Rarely. More process often creates friction without clarity. The real shift is architectural. It’s about designing systems so that decisions, changes, and trade-offs remain intelligible under pressure, even to someone who was never part of the original discussion.
“Can’t we address this once we’re further along?”
You can, but it’s seldom cost-free. The longer a system evolves without a clear, end-to-end line between how it changes and how those changes are governed, the harder it becomes to establish confidence later. What feels efficient now often becomes brittle when scrutiny arrives.
“So what do well-run companies do differently?”
They design for judgement, not just delivery. They assume their work will eventually be examined by someone sceptical, time-poor, and accountable. When governance is built with that audience in mind, teams tend to move faster, not slower.
