
How Investors Can Assess AI Readiness: Future-Proofing Portfolios in the Age of AI


Introduction

In our last article, we examined how AI dissolves moats, not by competing directly, but by breaking the assumptions embedded within them at a faster pace than ever before. The pattern is consistent: assumptions that were true for decades stopped being true faster than strategic planning cycles could accommodate.


The tempting response is to hunt for "AI-proof" moats, to identify which assumptions will hold. But this framing is itself risky. Predicting which assumptions survive requires forecasting AI capability trajectories and their second- and third-order effects with precision no one has.


A more robust approach for investors is to assess resilience: how quickly and effectively can a company adapt when the assumptions supporting its moat stop being true? The question shifts from "is this moat defensible?" to "how long is this moat defensible and how will this company perform when the ground shifts?"

The Jagged Frontier and What It Means for Resilience

Ethan Mollick's concept of the "jagged frontier" offers a useful lens for resilience assessment. AI capabilities are uneven: superhuman at some tasks, yet still unreliable at others that seem simpler. This jaggedness doesn't always align with common intuitions about task difficulty.


For diligence purposes, jaggedness means it is impossible to predict which specific capabilities will improve, but you can predict the pattern of improvement. AI labs identify capability gaps as priorities and attack them. When they break through, everything waiting behind that gap floods forward.


The practical implication: companies whose value depends on a single assumption about what AI can or cannot do face sudden, and potentially drastic, disruption when that assumption fails. Those whose value is distributed across multiple such assumptions have more time.


The diligence question follows directly: how many independent assumptions does this business depend on? If one stops being true, does the entire position unravel, or only part of it?


Types of Assumptions

Not all assumptions are equal. Some are technical, related to gaps in AI capability. These are actively being attacked by AI labs and can rapidly be invalidated.


Others are institutional - rooted in processes that do not hinge on AI capability. Clinical trials still need human patients. Regulators still require human review. Physical operations need physical presence.


This is a simplification; assumptions also rest on data accumulation, adoption psychology, and economics, but the distinction captures the core question: does the company's position depend on something the AI ecosystem is actively eroding, or on something that moves at a different pace?


Determining the answer requires technical depth, and getting it wrong means mispricing the risk.


Why Resilience Assessment Requires Technical Due Diligence

Resilience is not just a strategy question with a technical component. It is fundamentally a technical question with strategic implications.


Can this organization adapt when its assumptions erode? Answering this requires understanding whether the architecture can evolve, whether the technical infrastructure supports different business models, whether data pipelines create learning loops, and whether the company depends on a single assumption or many.


Management teams will say they are resilient - but the codebase, the architecture, and a close examination of the technical organization will reveal whether that is true.


What Resilience Looks Like in Practice and How to Assess It


Assumption Awareness

In RingStone’s technical due diligence work, one of the first signals is whether management can articulate the assumptions on which their position depends. What would have to stop being true for their competitive position to erode?


But management's answer is only the starting point. A team might claim its moat is "proprietary data," when the actual assumption is that a single integration with a legacy system will remain difficult to replicate. They might believe they are protected by "workflow complexity," when closer examination reveals the complexity is organizational - rather than technical - and therefore replaceable.


Diligence means testing what leadership believes protects the company against where its true dependencies actually lie.


Architecture and Adaptability

Some architectures can evolve; others are structurally rigid. The question is not just whether change is possible, but how quickly and at what cost. Conway's Law offers a useful diagnostic: software architecture reflects the team structure that built it. Companies whose architecture mirrors siloed human workflows may face higher adaptation costs. An AI agent doesn’t need the handoffs between stages that humans require. 


Boundaries often exist for legitimate reasons: scaling, compliance, and debugging. But if the answer to "Why is this boundary here?" is "Because that is how teams were organized in 2015," the system may be both ripe for automation and harder to evolve.


Adaptability also depends on the surrounding infrastructure. How tightly coupled are the systems? How well automated is assurance? How quickly can changes move from idea to production? An ostensibly adaptable architecture means little if every deployment takes six months and requires manual regression testing. Any technical debt accumulated over the years becomes a constraint on adaptation speed precisely when speed matters most.
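
One way to ground this during diligence is to quantify it. A minimal sketch of a lead-time calculation, assuming commit and deploy timestamps can be exported from the company's CI/CD tooling; the records below are illustrative, not from any real system:

```python
from datetime import datetime
from statistics import median

# Hypothetical export from a CI/CD system: when each change was
# committed and when it reached production. Values are illustrative.
deployments = [
    {"committed": "2024-03-01T09:00", "deployed": "2024-03-14T16:00"},
    {"committed": "2024-03-05T11:30", "deployed": "2024-04-02T10:00"},
    {"committed": "2024-03-20T14:00", "deployed": "2024-05-01T09:00"},
]

FMT = "%Y-%m-%dT%H:%M"

def lead_time_days(record):
    """Days from commit to production for a single change."""
    committed = datetime.strptime(record["committed"], FMT)
    deployed = datetime.strptime(record["deployed"], FMT)
    return (deployed - committed).total_seconds() / 86400

lead_times = [lead_time_days(r) for r in deployments]
print(f"Median commit-to-production lead time: {median(lead_times):.1f} days")
```

A lead time measured in weeks or months, combined with manual regression testing, is a direct measure of how slowly the company can respond when an assumption breaks.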


Consider deep integrations with customer systems, which have historically been a moat. For now, at least, AI struggles to see how work actually happens inside organizations: scattered across disconnected tools, gated by permissions, held together by undocumented exceptions. This acts as both protection and constraint. For example, an API architecture designed for human-speed interactions may not support high-frequency, multi-step agent workflows. Rate limits and authentication patterns designed for humans may throttle agent-driven usage, protecting incumbents while also trapping them and making them slow to adapt.
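
To make the constraint concrete: a token-bucket limiter sized for human click-speed is exhausted almost instantly by an agent decomposing one task into dozens of API calls. A minimal sketch, with all capacities and rates as illustrative assumptions:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: holds up to `capacity` tokens,
    refilled at `refill_per_sec` tokens per second."""
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = capacity
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Sized for a human user: a burst of 10 at ~1 request/sec is generous.
bucket = TokenBucket(capacity=10, refill_per_sec=1.0)

# An agent decomposing one task into a 50-call workflow hits the
# ceiling almost immediately.
allowed = sum(bucket.allow() for _ in range(50))
print(f"{allowed}/50 agent calls allowed before throttling")  # typically 10/50
```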


For over a decade, "consumer-grade UX" has been the primary moat in B2B SaaS. But what if the primary "user" of B2B software is increasingly an AI agent making API calls? Are companies that spend 40% of R&D on front-end engineering optimising for a shrinking demographic?


AI capability improvements come in sudden and often unpredictable jumps. How quickly did the company respond to the last major capability shift? Does its architecture allow new AI capabilities to be incorporated without rebuilding? Does the technical organization have the capacity to execute rapid change, or is the backlog already measured in years?


Revenue Model Flexibility

Companies are shrinking their headcount as they incorporate AI into their processes. This creates a challenge for those with per-seat pricing models. Many SaaS companies cannot transition away from per-seat pricing because their unit economics depend on it. Usage-based pricing partially hedges this risk but introduces new exposure: if the "usage" being measured is something AI generates more efficiently, prices are driven down toward the marginal cost of compute. Outcome-based pricing aligns vendor and customer incentives, but requires measurable outcomes that the vendor can credibly attribute to its product.
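
A rough worked example, with every figure invented purely for illustration, shows how the same customer's spend diverges under the three models as headcount and marginal costs fall:

```python
# Illustrative figures only: one customer's annual spend under three
# pricing models as AI shrinks headcount but not workload.
seats_before, seats_after = 100, 40     # AI reduces the team
price_per_seat = 600                    # per seat, per year
tasks_per_year = 60_000                 # workload is unchanged
compute_cost_per_task = 0.02            # falling marginal cost
usage_markup = 3                        # usage prices compete toward compute cost
customer_value, value_share = 250_000, 0.05

per_seat = price_per_seat * seats_after                      # tracks headcount down
usage_based = tasks_per_year * compute_cost_per_task * usage_markup
outcome_based = customer_value * value_share                 # needs credible attribution

print(f"per-seat: ${per_seat:,}  usage: ${usage_based:,.0f}  "
      f"outcome: ${outcome_based:,.0f}")
```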


The resilience question isn't which model is right, but how locked in the company is. Sales leadership might confidently claim that the company could shift to usage-based pricing within a quarter. But can the infrastructure actually meter usage at the required granularity? Are the billing systems capable of handling variable pricing? Does the data architecture support attribution of outcomes to platform activity? Pricing flexibility is often more tightly constrained by technical capability than commercial teams realize.
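
What "metering at the required granularity" means in practice is event-level records that tie consumption to an account and, ideally, to an outcome. A minimal sketch of such a record; the schema and field names are hypothetical:

```python
import json
import uuid
from datetime import datetime, timezone

def metering_event(account_id, feature, units, outcome_ref=None):
    """One billable usage event. Granular, attributable records like
    this are a prerequisite for usage- or outcome-based pricing; a
    system that only logs 'active seats' cannot support either."""
    return {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "account_id": account_id,
        "feature": feature,          # what was consumed
        "units": units,              # how much (tokens, calls, documents...)
        "outcome_ref": outcome_ref,  # link to a measurable customer outcome, if any
    }

event = metering_event("acct_42", "contract_review", units=3,
                       outcome_ref="case_closed_2024_118")
print(json.dumps(event, indent=2))
```

If nothing like this exists in the data path, a pricing transition is an engineering project measured in quarters, not a commercial decision measured in weeks.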


Data Feedback Loops

Data has historically been seen as a moat. However, Benedict Evans argues that the data advantage is largely illusory: LLMs are trained on vast amounts of generalized text, which is readily available, so most proprietary datasets add little beyond what the models already capture.


The resilience question is not whether data is valuable today, but what type of data a company holds. Only data that is not already represented in the foundation models remains valuable, and that is a rapidly shrinking set.


Does the company capture user corrections to AI-generated outputs? Is that feedback loop automated? Can it generate training data that improves its specific position faster than competitors can accumulate equivalent data? Is it capturing or creating data in a unique or hard-to-replicate situation or domain? Answering these requires tracing the actual data pipelines: understanding what's captured, where it flows, and how (or whether) it is used for improvement.
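
A concrete version of the loop being probed: capturing the delta between what the model produced and what the user actually shipped. A minimal sketch, assuming the product can observe both versions; all names are illustrative:

```python
import difflib
import json

def capture_correction(item_id, model_output, user_final):
    """Record a user's edit to an AI-generated output as a supervised
    training example. Whether a pipeline like this exists - and whether
    it runs automatically - is what the diligence questions above probe."""
    diff = list(difflib.unified_diff(
        model_output.splitlines(), user_final.splitlines(), lineterm=""))
    return {
        "item_id": item_id,
        "model_output": model_output,   # what the model produced
        "accepted_output": user_final,  # what the user shipped
        "edit_diff": diff,              # the correction signal itself
        "corrected": model_output != user_final,
    }

record = capture_correction(
    "doc_907",
    "The contract renews annually.",
    "The contract renews annually unless terminated with 90 days' notice.",
)
print(json.dumps(record, indent=2))
```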


Patterns Worth Watching

Rather than declaring any moat "durable," three patterns stand out where assumptions appear to erode more slowly, buying time for adaptation.


Digital-Physical Friction

Software deeply integrated with physical operations faces different dynamics than pure software. Physical operations need physical presence. Legacy systems are often undocumented and messy. Data often cannot leave the building. Change requires physical coordination, not just deployment. For many enterprise tasks, small, customized models running inside enterprise infrastructure will outperform frontier models; they're faster, cheaper, and able to operate where sensitive data lives.
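
Architecturally, this often looks like pointing a standard client at an inference server running inside the customer's network rather than at a frontier API. A minimal sketch, assuming an OpenAI-compatible local server (as vLLM or llama.cpp expose); the host and model names are placeholders:

```python
from openai import OpenAI

# Assumed: an OpenAI-compatible inference server hosted inside the
# enterprise network, so sensitive documents never leave the building.
client = OpenAI(base_url="http://inference.internal:8000/v1",
                api_key="not-needed-for-local")

response = client.chat.completions.create(
    model="local-finetuned-7b",  # a small model customized to the domain
    messages=[{"role": "user",
               "content": "Classify this maintenance log entry: ..."}],
)
print(response.choices[0].message.content)
```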


Verification Over Generation


As generation costs collapse, verification value increases. We used to pay for code creation; now we pay for code security and compliance. We used to pay for writing; now we pay for trusted attribution. Companies acting as a "stamp of approval", indemnifying work rather than generating it, face different competitive dynamics. The assumption here is not "AI cannot generate this" but "someone still needs to stand behind it."
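
In code terms, the product is the attestation, not the artifact. A minimal sketch of a "stamp of approval" using an HMAC signature; in a real system the key would live in an HSM or key-management service, and the scheme here is purely illustrative:

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical reviewer key; in practice this sits in an HSM or KMS
# under the control of the party standing behind the work.
REVIEWER_KEY = b"reviewer-secret-key"

def attest(artifact: bytes, reviewer_id: str) -> dict:
    """Issue a verifiable 'stamp of approval' over an artifact. The
    value sold is not the artifact (generation is cheap) but the signed
    claim that a named party reviewed it and stands behind it."""
    payload = json.dumps({
        "artifact_sha256": hashlib.sha256(artifact).hexdigest(),
        "reviewer": reviewer_id,
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
    }, sort_keys=True)
    signature = hmac.new(REVIEWER_KEY, payload.encode(),
                         hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify(attestation: dict) -> bool:
    expected = hmac.new(REVIEWER_KEY, attestation["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["signature"])

stamp = attest(b"def transfer(amount): ...", reviewer_id="auditor_17")
print(verify(stamp))  # True
```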


Regulatory Accountability

Where regulations require human sign-off, professional liability, or audit trails, AI capability is only part of the equation. These assumptions erode at the pace of regulatory change, not AI progress.


These are not guarantees. They are indicators of where constraints may hold longer, giving companies more time to adapt.


Conclusion

Traditional technical due diligence frameworks assess scalability, technical debt, security posture, and similar fundamentals. These remain important, but they no longer answer the questions that matter most: what assumptions is this company's moat built on, and how will it perform when those assumptions fail?


Answering these questions requires examining assumption awareness, architectural adaptability, revenue model flexibility, data, and data feedback loops. It also means understanding assumption diversity: does the business depend on one thing remaining true or many, technical or institutional, and does management know which?


Each of these requires understanding the company at the level of architecture, data flow, and implementation capability. This is where technical due diligence becomes essential - not as a checklist confirming the technology works, but as the foundation for understanding whether the organization can adapt when it needs to.


Shutterstock's moat was real. What it lacked was the ability to pivot fast enough when the underlying assumptions shifted. The question for every portfolio company now is not just "what would have to stop being true for this moat to become irrelevant?" It is "how quickly could this company respond when it does?"


The question is no longer whether assumptions will break, but how prepared companies are when they do.


About the Author

Kari Dempsey has spent 20+ years working in the technology sector as a CTO, Chief Data Scientist, Data Engineer, and Machine Learning Engineer, building and scaling AI and data solutions - and teams - across the commercial, defence, and government sectors in the UK.


As a CTO practitioner at RingStone, she works with private equity firms globally in an advisory capacity, conducting technical due diligence on AI and technology companies for both buy-side and sell-side transactions. She serves as Director of AI and subject matter expert, assessing AI maturity, technical risk, and scalability across a range of sectors, including SaaS, deeptech, robotics, and energy.


Prior to her work in diligence, Kari served as CTO at a marine robotics and data company, delivering world-firsts in subsea autonomy, leading technology through multiple funding rounds, and scaling distributed teams of 200+ professionals across multiple countries. Before that, she served as Deputy CTO and Chief Data Scientist at the UK Hydrographic Office (UK Ministry of Defence), where she built AI capability from inception into an award-winning, sustainable function.


Kari holds a PhD in Condensed Matter Physics and brings deep expertise in data engineering, AI, MLOps, cloud architecture, and technology strategy to her advisory work.
