AI Strategy

The Real AI Moat: Why Verification Infrastructure Wins

Everyone's racing to build AI agents. But the economics of AGI suggest the real competitive advantage isn't in generating output - it's in verifying it. What that means for fintech and payments.


Jeff Forkan

February 26, 2026

7-9 minute read


Christian Catalini’s recent paper on the economics of AGI makes an argument I can’t stop thinking about. The real dividing line in an AI-saturated market isn’t between tasks AI can do and tasks it can’t. It’s between tasks whose output is cheap to verify and tasks where verification remains expensive and human-bound.

That reframes where durable business value actually sits. And for anyone building in fintech, the implications are uncomfortable.


The measurability gap

Two cost curves are moving in opposite directions. The cost to automate cognitive work is collapsing: state-of-the-art scores on SWE-bench went from 4.4% to 71.7% in a single year. Text, images, code, analysis - the cost of producing output is approaching zero.

The cost to verify whether that output is correct? Barely moved. Checking that a legal contract doesn’t contain a hidden liability, that a financial transaction clears sanctions, that a cross-border payment was routed through the right corridor - these still need human expertise, institutional knowledge, and regulatory authority.

Catalini calls this “The Measurability Gap.” As intelligence gets cheaper, verification becomes the scarce resource. And the scarce resource is the one with pricing power.


Four economic zones

The framework maps tasks into four zones based on automation cost vs. verification cost.

Safe Industrial Zone - cheap to automate, cheap to verify. Chatbots, image generation, short code. AI delivers value here but margins are thin. Everyone can do it.

Human Artisan Zone - hard to automate, cheap to verify. Physical craft, creative work with clear quality signals. Humans keep their edge.

Pure Tacit Zone - hard to automate, hard to verify. Intuition, judgment under genuine uncertainty, relationship-dependent decisions. AI struggles, and humans can’t easily prove they’re right either.

Runaway Risk Zone - cheap to automate, expensive to verify. AI produces volumes of output that looks correct but is hard to check. Financial transactions, compliance determinations, medical recommendations, legal analysis.

The Runaway Risk Zone is where things get interesting. If you can solve verification in a high-stakes domain, you’re providing something AI alone cannot. That’s where the valuable companies get built.
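The four zones above reduce to a simple two-axis classification. Here's a minimal sketch of that mapping - the thresholds and example scores are my own illustrative assumptions, not values from Catalini's paper:

```python
# Sketch of the four-zone framework: tasks mapped by automation cost
# vs. verification cost. Threshold and scores are illustrative assumptions.

def classify_zone(automation_cost: float, verification_cost: float,
                  threshold: float = 0.5) -> str:
    """Map a task to one of the four economic zones.

    Costs are normalized to [0, 1]; below `threshold` counts as 'cheap'.
    """
    cheap_auto = automation_cost < threshold
    cheap_verify = verification_cost < threshold
    if cheap_auto and cheap_verify:
        return "Safe Industrial Zone"
    if not cheap_auto and cheap_verify:
        return "Human Artisan Zone"
    if not cheap_auto and not cheap_verify:
        return "Pure Tacit Zone"
    return "Runaway Risk Zone"  # cheap to automate, expensive to verify

# Illustrative scores (assumptions for the sketch):
print(classify_zone(0.1, 0.2))  # chatbot reply -> Safe Industrial Zone
print(classify_zone(0.2, 0.9))  # sanctions determination -> Runaway Risk Zone
```

The point of the quadrant logic: nothing about the task itself changes when automation gets cheaper - only which zone it lands in, and therefore where the pricing power sits.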


The AI sandwich

Catalini describes a three-layer model that maps well to how high-stakes operations actually work in practice.

Top layer: human intent. A CFO decides the company needs to move $2M from a Euro account to fund operations in three Asian markets by Friday.

Middle layer: AI execution. Agents analyze FX rates across corridors, check compliance for each destination, find the optimal payment rails, calculate timing to minimize conversion costs, prepare the transaction package.

Bottom layer: human verification. A treasury analyst reviews the proposed routing, confirms compliance flags are clear, validates rate locks, approves execution. Anomaly detection flags anything unusual.

Here’s what grabbed me: verification isn’t a compliance checkbox. It’s a primary production technology. The AI middle layer is valuable precisely because verification makes its output trustworthy. Without that bottom layer, you have plausible-looking output that nobody should rely on for anything consequential.
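The sandwich can be sketched as a gate between the AI layer and execution. Everything here - the field names, the checks, the approval limit - is a toy assumption, not a real compliance engine; the shape is what matters:

```python
# Minimal sketch of the "AI sandwich" bottom layer: the AI middle layer
# proposes a transfer, a verification gate decides whether it executes.
# All names, thresholds, and checks are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class ProposedTransfer:
    amount_usd: float
    corridor: str
    sanctions_clear: bool       # set by the AI layer's screening pass
    rate_locked: bool
    flags: list = field(default_factory=list)

def verify(proposal: ProposedTransfer, approval_limit: float = 1_000_000) -> bool:
    """Bottom-layer verification: collect flags, approve only if none fire."""
    if not proposal.sanctions_clear:
        proposal.flags.append("sanctions screening not clear")
    if not proposal.rate_locked:
        proposal.flags.append("FX rate not locked")
    if proposal.amount_usd > approval_limit:
        proposal.flags.append("exceeds auto-approval limit; escalate to analyst")
    return not proposal.flags

# The middle layer would produce something like:
p = ProposedTransfer(amount_usd=2_000_000, corridor="EUR->SGD",
                     sanctions_clear=True, rate_locked=True)
print(verify(p), p.flags)  # blocked: large transfers escalate to a human
```

Note that the gate doesn't re-do the AI's work - it checks properties of the output and routes anomalies to a human. That asymmetry is the whole economic argument: checking is cheaper than producing only if you've built the infrastructure to check.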


What this means for payments

Payments infrastructure sits right in the Runaway Risk Zone. Moving money cross-border involves sanctions screening, multi-jurisdiction compliance, FX risk, counterparty verification, audit trails. AI can process these at scale, but every transaction needs verification infrastructure to be trustworthy.

A few things follow from this.

The regulatory moat is actually a verification moat. Companies holding money transmission licenses, maintaining sponsor bank relationships, passing compliance audits - they’ve built verification infrastructure that AI-native startups can’t replicate with compute. A language model can draft a compliance determination. It can’t hold a license or absorb liability.

AI agents need verification rails to be useful. An agent that suggests an optimal payment route is a nice demo. An agent operating within a verified compliance framework, with human oversight on anomalies and audit trails on every decision, is the product enterprise customers will trust with their money. The difference between those two things is the entire business.

“Liability-as-a-Service” is emerging as a real category. If verification is scarce and AI execution is cheap, then companies that productize verification and absorb liability are sitting on something structurally valuable. Compliance engines, risk scoring, audit infrastructure, approval workflows - call them what you want, but they’re the moat.


The missing junior problem

One thing that stuck with me from Catalini’s paper: if entry-level tasks get automated, where do future verifiers come from? Employment in AI-exposed fields for early-career workers has dropped about 16%. Companies are cutting the pipeline that produces the experienced humans who eventually verify AI output.

He calls it “The Missing Junior Loop.” Organizations investing in training aren’t just being generous. They’re protecting their future ability to verify, which is the thing that makes their AI systems worth anything.

In fintech, the companies maintaining strong compliance teams and investing in analyst development aren’t burning money. They’re accumulating the scarcest asset in an AI-abundant world. I think a lot of CFOs are going to figure this out the hard way.


Pricing implications

If verification is scarce, pricing should reflect that. Traditional SaaS charges for access. But when AI can generate output for nearly nothing, access to execution isn’t what’s valuable. Verified outcomes are.

What that might look like:

  • Charge on verified payment success rates, not transaction volume
  • Price compliance by liability absorbed, not checks performed
  • Bundle audit infrastructure and anomaly detection as core, not upsells
  • Treat verified, auditable FX savings as premium over unverified rate quotes

Catalini calls this “Software-as-Labor” - pricing on work the system reliably completes, not access to the tool. I think this framing is going to reshape how B2B fintech gets priced over the next few years.
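The shift from access pricing to outcome pricing is easy to state in code. A toy comparison, with invented rates purely for illustration:

```python
# Sketch of "Software-as-Labor" pricing: bill on verified completed work
# rather than on seats or raw volume. All rates are invented for illustration.

def access_pricing(seats: int, seat_fee: float = 500.0) -> float:
    """Traditional SaaS: charge for access, regardless of outcomes."""
    return seats * seat_fee

def outcome_pricing(verified_payments: int, failed_payments: int,
                    fee_per_verified: float = 2.50) -> float:
    """Charge only for payments that cleared verification; failures bill nothing."""
    return verified_payments * fee_per_verified

print(access_pricing(10))          # same bill whether anything worked
print(outcome_pricing(1800, 200))  # bill scales with verified outcomes only
```

Under access pricing, the vendor's revenue is indifferent to verification quality. Under outcome pricing, verification quality is the revenue line - which is exactly why the scarce resource ends up with the pricing power.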


The stablecoin connection

Private stablecoin issuance gets more interesting through this lens. Stablecoins are programmable money, and programmable money needs programmable verification. Issuers aren't just creating a digital asset - they're creating a trust layer backed by reserves, compliance, and accountability.

When AI agents manage increasingly complex financial operations, the combination of programmable money and verification infrastructure is something pure AI players can’t replicate. The stablecoin itself becomes a verified instrument inside a verified system.


Where this goes

The companies that compound value in an AI-abundant world won’t be the ones generating the most output. They’ll be the ones running the verification infrastructure that makes AI output trustworthy where it matters.

For payments and treasury, the practical playbook:

Build verification into the product, not around it. Audit trails, approval workflows, compliance engines, anomaly detection. These are the product, not overhead.

Treat regulatory infrastructure as a competitive asset. Licenses, sponsor bank relationships, compliance certifications. They don’t depreciate as AI gets better. If anything, they appreciate.

Invest in human expertise. The verification layer depends on people who understand the domain deeply. Cutting that investment to save costs today creates a gap you can’t fill with compute tomorrow.

Price for verified outcomes. As execution gets cheaper, the premium shifts to trustworthy results.


Final thoughts

The economics of abundant intelligence are counterintuitive. When everyone can generate output, the bottleneck is knowing whether it’s correct. In domains where being wrong means lost money, regulatory violations, or broken compliance chains - verification infrastructure is the product.

Everyone’s racing to build agents. The harder question, and the more valuable one: how do you make those agents trustworthy enough to handle things that actually matter?