
Your Moat Isn’t as Defensible as You Think - AI is Breaking Its Hidden Assumptions

Updated: Jan 21

[Image: AI analyzing business data dashboards, illustrating how artificial intelligence is undermining traditional competitive moat assumptions.]

Introduction

Competitive moats are meant to protect value. Increasingly, they’re failing in ways that traditional analysis doesn’t anticipate.


For investors and diligence teams evaluating tech or tech-enabled businesses, the challenge isn’t identifying moats; it’s understanding the assumptions embedded within them and how AI is quietly invalidating those assumptions.


The Moat That Worked Perfectly (Until It Didn't)

Shutterstock spent two decades building what looked like an impregnable position. It had network effects (more contributors attracted more buyers), scale economies (spreading infrastructure costs across 450+ million assets), and high switching costs (enterprise workflows built around its API).


Competitors tried to displace it: Getty Images threw billions at the problem and Adobe Stock tried to replicate the model, but neither unseated the leader. By any standard framework (Hamilton Helmer's 7 Powers, Buffett's moat criteria), Shutterstock had genuine structural advantages. That is the textbook definition of a moat.


Then, between 2021 and 2024, the stock dropped around 75%. Tech stocks crashed broadly in 2022, but when the rest of the sector roared back in 2023 and 2024, Shutterstock kept falling. What happened? In mid-2022, generative image models became widely available, changing how images were created and sourced.

The moat didn't leak, and competitors didn't win. Instead, the economics that made the moat valuable shifted underneath it.


Shutterstock's defensibility was built on image creation being expensive. Professional photography requires equipment, skill, time, and travel. Aggregating that expensive output created genuine value. Generative AI collapsed creation costs to near zero. Customers still license images, but increasingly they generate purpose-fit images on demand rather than searching a library for close matches.


The assumptions underpinning the moat weren't wrong; they were true for decades. They just stopped being true, faster than anyone's strategic planning cycle could accommodate.


Every moat has assumptions like this embedded within it: assumptions about what must remain expensive, what must remain difficult, what must remain human. AI is stress-testing these assumptions simultaneously across industries.


What Moats Actually Defend Against

The standard definition of a moat (a sustainable competitive advantage that protects market position) omits a critical distinction. Moats defend against competitors playing the same game. They rarely defend against technologies that change the game entirely. 


These failures often come from a different vector: technologies that make the defended game less relevant. Not better stock photography, but image generation. Not better homework databases, but reasoning engines. Not better offshore call centres, but autonomous AI agents.


These shifts fundamentally invalidate the core assumptions of the moat. For investors, the challenge isn't that the moat concept is flawed, but that AI has made identifying a "true" moat significantly harder.


This Is the Innovator's Dilemma. So Why Is It Different?

The pattern is familiar: Kodak had a genuine moat in film, Nokia in mobile hardware, Blockbuster in retail distribution. Each defended successfully against direct competitors but was blindsided by technologies that changed the game.

This is the classic innovator’s dilemma: incumbents defending their position while missing shifts that redefine the market. So why is this something different, something even harder to defend against? 


Two things seem genuinely distinct about AI disruption: speed compression and a cognitive blind spot.


Speed Compression

The classic innovator's dilemma plays out over years: Netflix was founded in 1997, Blockbuster went bankrupt in 2010. The iPhone launched in 2007, and Nokia's smartphone market share collapsed by 2013. Digital cameras entered the mass consumer market in the 1990s; Kodak went bankrupt in 2012. Incumbents had a decade to see it coming and still failed, but they had time to try.


Contrast that with the AI era. Chegg, once the leading online education platform, lost more than 90% of its value in under two years. Teleperformance dropped around 27% in a single day on a client's press release. The strategic planning cycle that might have worked for previous disruptions simply doesn't exist here.


Why faster? Previous disruptions required building physical infrastructure: retail networks, manufacturing capacity, supply chains, and device ecosystems. AI disruption is software-native, accessible via API. ChatGPT went from zero to 100 million users in two months. The capability appeared and scaled simultaneously. 


This impact is also unfolding across multiple domains at once, creating a picture that is hard to follow and obscuring a deeper shift: the disruption is no longer just to how products are delivered, but to the assumption that cognitive tasks require human cognition.


The Cognitive Blind Spot

Previous disruptions were easier to see conceptually, even if incumbents failed to respond. “Movies delivered by post instead of in-store” is a different delivery mechanism. “Phones with touchscreens instead of keyboards” is a different form factor. You could articulate the threat even if you misjudged its trajectory.


AI disruption has been harder to conceptualize because it challenges an assumption so deep we didn't recognize it as an assumption: that cognitive tasks require human cognition. We instinctively categorize “answering homework questions” or “writing marketing copy” as things humans do, not as processes that could be automated.


This isn't irrational; for all of human history, it was true. Pattern recognition, language generation, and contextual judgment were exclusively human capabilities. When a technology suddenly performs these tasks adequately, it doesn't feel like “disruption” in the familiar sense. It feels like a category error. 


We now take for granted that AI can serve as a cognitive component within systems, and we have begun to map that shift onto companies and outcomes. Even that first-order analysis is challenging; seeing the second- and third-order effects of these technologies, and where they lead, is harder still.


The Implications for Due Diligence

Most sophisticated investors have added AI to their risk frameworks. The question “what's your AI exposure?” now appears on nearly every due diligence checklist. However, what does that question mean in practice? You cannot assess AI risk without understanding the technology itself, and the nuances matter enormously.


Answering it requires an understanding of the state of the art in AI, a view on its direction of travel, and a breakdown of the company into its constituent parts. This analysis must determine whether the business is vulnerable to substitution, whether AI strengthens its moat, and critically, whether it's positioned to capitalize on the opportunity.


Consider the difference between two portfolio companies, both flagged as having “AI exposure in customer service.” One operates in tier-1 support (password resets, order tracking, FAQ responses). The other handles complex B2B technical support requiring deep product knowledge and multi-step troubleshooting. Both get the same risk flag, but the first faces near-term substitution by AI-native competitors, while the second benefits from AI enhancement of its own offering.


Consider also the “platform risk” vector: the expansion of the foundation model companies themselves. Compare two startups building “agentic” interfaces. One builds horizontal capabilities: “memory,” “web browsing,” or “PDF chat.” The other builds deep vertical orchestration: automating the specific, messy, 50-step workflow of a supply chain audit. The first is effectively squatting on OpenAI’s or Anthropic’s product roadmap; each model release replicates what another category of wrapper startups was selling. The second uses the model only as an engine, protected by the friction of integrating with legacy ERP systems and specific domain knowledge.


Or consider two companies, both claiming “proprietary data moats.” One has structured instruction data - expert input/output pairs showing how domain experts solve real problems, with corrections and refinements. The other has a large volume of raw documents. The first could potentially use that data to fine-tune models that outperform generic alternatives. The second may simply be carrying a storage cost, not a strategic asset.
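
To make that distinction concrete, here is a minimal sketch in Python. The company, field names, and filenames are invented for illustration; the JSONL format shown is simply a common convention for fine-tuning data, not any specific vendor's schema.

```python
import json

# Hypothetical illustration: structured instruction data - expert
# input/output pairs with corrections - versus a raw document dump.
# All field names and filenames here are invented for this sketch.

instruction_records = [
    {
        "input": "Client reports intermittent 504s after the v2.3 gateway upgrade.",
        "expert_output": (
            "Check for a keep-alive timeout mismatch between the load "
            "balancer (60s default) and the upstream service (set to 30s)."
        ),
        "correction": "Also verify connection draining was enabled during rollout.",
    },
]

raw_documents = [
    "gateway_upgrade_notes_v2.3.pdf (14 pages, unlabelled)",
    "support_ticket_archive_2019-2024.zip (2.1 GB, mixed formats)",
]

# The first asset writes straight to a JSONL fine-tuning file; the second
# needs expensive curation before it can teach a model anything.
with open("instruction_data.jsonl", "w") as f:
    for record in instruction_records:
        f.write(json.dumps(record) + "\n")

print(f"{len(instruction_records)} training record(s); "
      f"{len(raw_documents)} uncurated archive(s)")
```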


These distinctions, and many others, require a deep understanding of the technology and product, and a clear, technologically driven assessment of the assumptions underpinning the business model and moat.


The Taxonomy of Vulnerable Assumptions

Every moat embeds assumptions about what must remain true. We can see them clearly in the casualties:


“Creation will remain expensive.”

Sectors affected: stock media, template marketplaces. 


GenAI collapsed creation costs. The aggregation layer remains, but its pricing power has evaporated.


“Answers must exist already and be stored.”

Sectors affected: education (Chegg), developer Q&A (Stack Overflow). 


These businesses assumed knowledge needed to be stored before it could be accessed. LLMs don't retrieve answers; they generate them. 


“Service requires human labour.”

Sectors affected: business process outsourcing, customer support. 


When AI can perform the work of hundreds of human agents at a fraction of the cost, the demand for outsourced labour disappears. The moat of “global headcount” becomes a liability of “overhead.”


“Attention must flow through intermediaries.”

Sectors affected: SEO-dependent publishers, affiliate sites. 


AI Overviews answer the query on the search page. Zero-click searches increase; referral traffic collapses.


“Workflows are intellectual property.”

Sectors affected: workflow automation tools, rules-based SaaS, low-code/no-code platforms.


Products where the core value is encoded in explicit business rules - “if this then that” logic - face a new vulnerability. An LLM can observe a workflow, deduce the logic, and replicate it. The workflow is no longer IP; it's a prompt.
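
As an illustration - the refund-routing rule, thresholds, and prompt below are invented, not drawn from any real product - here is the same logic encoded both ways:

```python
# Hypothetical illustration: the same refund-routing logic encoded two ways.

# 1. As explicit "if this then that" rules - the traditional SaaS asset.
def route_refund(amount: float, customer_tier: str) -> str:
    if amount > 500 or customer_tier == "enterprise":
        return "escalate_to_manager"
    return "auto_approve"

# 2. As a prompt - the same logic, recoverable by anyone (or any model)
# that observes the workflow's inputs and outputs.
REFUND_PROMPT = (
    "Route refund requests. Escalate to a manager if the amount exceeds "
    "$500 or the customer is enterprise tier; otherwise auto-approve. "
    "Reply with exactly 'escalate_to_manager' or 'auto_approve'."
)

print(route_refund(120.0, "standard"))  # auto_approve
```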


“The GUI is the primary delivery mechanism for value.”

Sectors affected: BI tools, analytics platforms, management dashboards.


Products where the primary value is “visualizing data” for a human manager face pressure: an AI agent consuming the same data reads the JSON directly, and the pie chart no longer matters. The GUI becomes overhead rather than an asset.
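
A minimal sketch of that shift, with an invented metrics payload and thresholds: the data a dashboard would render as charts, consumed directly by an automated decision step.

```python
import json

# Hypothetical illustration: the payload a BI dashboard would render as
# charts, consumed directly by an automated decision step. The metric
# names and thresholds are invented for this sketch.
metrics_json = '{"churn_rate": 0.042, "nps": 31, "mrr_growth": -0.013}'
metrics = json.loads(metrics_json)

# A human manager needs the visualization; an agent needs only the numbers.
alerts = [name for name, breached in [
    ("churn_rate", metrics["churn_rate"] > 0.04),
    ("mrr_growth", metrics["mrr_growth"] < 0),
] if breached]

print("flags:", alerts)  # flags: ['churn_rate', 'mrr_growth']
```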


The Diligence Gap

Every example above follows the same pattern: the moat didn’t fail, and competitors didn’t win. Instead, the assumptions that made the moat valuable stopped being true.


Identifying vulnerable assumptions is the first step. The harder question is what to do about it: which assumptions are holding, which new moats are forming, and how to assess portfolio companies before the market reprices the risk.


In our next blog, we’ll examine the assumptions AI isn't breaking (yet), what technical due diligence should actually examine in the age of agentic AI, and provide a practical framework for distinguishing companies building on dissolving foundations from those creating compounding advantages.


About the Author

Kari Dempsey has spent 20+ years working in the technology sector as a CTO, Chief Data Scientist, Data Engineer, and Machine Learning Engineer, building and scaling AI and data solutions - and teams - across the commercial, defence, and government sectors in the UK.


As a CTO practitioner at RingStone, she works with private equity firms globally in an advisory capacity, conducting technical due diligence on AI and technology companies for both buy-side and sell-side transactions. She serves as Director of AI and subject matter expert, assessing AI maturity, technical risk, and scalability across a range of sectors, including SaaS, deeptech, robotics, and energy.


Prior to her work in diligence, Kari served as CTO at a marine robotics and data company, delivering world-firsts in subsea autonomy, leading technology through multiple funding rounds, and scaling distributed teams of 200+ professionals across multiple countries. Before that, she served as Deputy CTO and Chief Data Scientist at the UK Hydrographic Office (UK Ministry of Defence), where she built AI capability from inception into an award-winning, sustainable function.


Kari holds a PhD in Condensed Matter Physics and brings deep expertise in data engineering, AI, MLOps, cloud architecture, and technology strategy to her advisory work.
