AI makes the surface area of plausible venture claims expand faster than the supply of affordable verification. That gap is where LP-side technical underwriting should live.
Working thesis
The scarce asset in AI-era venture is not more narrative; it is independent verification that can survive a long feedback loop.
Confidence: medium
Decision context: LP re-ups, pacing, secondaries, and mark credibility
Claim generation: cheap. AI lowers the cost of memos, demos, dashboards, and plausible market maps.
Feedback loop: years. Venture outcomes resolve after marks, narratives, and re-ups have already moved.
Diligence value: 50–200 bps. A plausible annualized wedge when research influences locked-up capital.
Interactive instrument
What can an LP rationally spend to verify a venture commitment?
A small Berk-DeMarzo-inspired calculator. The point is not precision; it is scale. If research changes a large locked-up commitment by even a few percentage points, serious diligence spend becomes economically legible.
At a $100M commitment and a 75 bps/year verification wedge, the present value of diligence is roughly $5.9M. Spending at that level breaks even if research improves the commitment's outcome by 5.9% or more.
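The calculator's arithmetic can be reproduced offline. A minimal sketch: the horizon (10 years) and discount rate (4.5%) are assumptions, not the calculator's published defaults, chosen because they recover the $5.9M figure.

```python
def diligence_pv(commitment, wedge_bps, years=10, discount=0.045):
    """Present value of an annual verification wedge on a locked-up commitment.

    commitment: capital influenced, in dollars
    wedge_bps:  annualized verification wedge, in basis points
    years:      information horizon (assumed)
    discount:   annual discount rate (assumed)
    """
    annual_value = commitment * wedge_bps / 10_000        # 75 bps on $100M = $750k/yr
    annuity = (1 - (1 + discount) ** -years) / discount   # standard annuity factor
    return annual_value * annuity

pv = diligence_pv(100_000_000, 75)   # ~ $5.93M, the essay's $5.9M figure
breakeven_share = pv / 100_000_000   # ~ 5.9% outcome improvement to break even
```

The point survives any reasonable parameter choice: a sub-1% annual wedge on nine-figure committed capital supports a seven-figure research budget.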
Every market has a preferred fiction.
In venture capital, the fiction is that uncertainty eventually resolves cleanly. A company raises, grows, marks up, exits, distributes cash, and then everyone learns whether the original story was true.
That is sometimes how it works. But most of the time, the learning loop is slower and messier. The capital is committed before the truth is known. The re-up arrives before the old fund has fully seasoned. The NAV moves before distributions do. The AI narrative changes faster than customer retention, gross margin, or product defensibility can be observed.
Now add AI.
AI makes it much cheaper to produce plausible claims: memos, market maps, product demos, benchmark tables, customer-support veneers, code-velocity metrics, category narratives. It does not automatically make the underlying economic truth cheaper to verify.
That difference is the measurability gap.
Working definition. The measurability gap widens when the cost of producing a credible-looking claim falls faster than the cost of verifying whether the claim is economically true.
For LPs, this is not an abstract epistemology problem. It is a capital-allocation problem. If claims get cheaper while verification remains expensive, then a greater share of private-market value will depend on who can tell the difference between technical reality and narrative surface area.
Join the Syndicate
Long-form essays on the economics of tech, with the data to back it up. All signal, no noise.
Read by 5,000+ founders, engineers, and investors.
The puzzle: AI cheapens claims before it cheapens truth
The simplest story about AI and venture is that AI makes everyone more productive. If that were the whole story, the underwriting problem would get easier. Better tools, better data, better search, better analysis.
But that misses the adversarial side of the technology.
AI reduces the cost of producing the things investors and LPs use as proxies for truth:
credible memos;
polished dashboards;
working-looking demos;
synthetic customer research;
benchmark comparisons;
market landscapes;
technical explanations;
and fundraising narratives.
Some of those outputs will reflect real progress. Some will be cheap simulation. The problem is that both can look increasingly similar at the surface.
So the wedge is not:
Can AI help investors analyze companies?
Of course it can.
The sharper wedge is:
Does AI reduce the cost of verification as quickly as it reduces the cost of persuasion?
In long-loop domains, I think the answer is no.
Primer: six ideas the rest of the argument uses
Concept 01
Long-loop verification
Some claims only resolve after years. Venture is full of them: GP skill, exit quality, mark credibility, and whether a company survives the next platform shift.
Concept 02
Proxy inflation
When hard outcomes are unavailable, markets lean on proxies. AI makes many proxies easier to manufacture, which can increase confidence faster than truth.
Concept 03
Information rent
Berk-DeMarzo’s private-market logic says costly LP-side information production can be economically valuable when it changes access, terms, or allocation.
Concept 04
Technical substrate
In AI markets, many economic claims depend on technical facts: model capability, workflow fit, data rights, latency, eval quality, switching costs, and margins.
Concept 05
Mark credibility
NAV is not cash. In weak distribution environments, LPs need a separate view of whether reported marks are realizable, stale, or narrative-dependent.
Concept 06
Decision linkage
Research matters commercially only when it changes an action: re-up, pass, pace slower, sell, buy, monitor, challenge, or resize.
A useful shorthand: the measurability gap grows with the ratio of plausible claim surface to affordable verification supply.
That ratio is not meant to be a literal econometric object. It is a design constraint. If the numerator is expanding quickly and the denominator is not, then capital allocators should expect more false confidence.
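To make the design constraint concrete, a toy sketch with hypothetical cost-decline rates: if producing a credible claim cheapens 40% a year while verifying one cheapens only 5%, the number of plausible claims per affordable verification roughly quadruples in three years.

```python
def gap_ratio(years, claim_decline=0.40, verify_decline=0.05):
    """Plausible claims per affordable verification, normalized to 1 at year 0.

    claim_decline and verify_decline are hypothetical annual cost-decline
    rates: claim production cheapens fast, verification cheapens slowly.
    """
    claim_cost = (1 - claim_decline) ** years    # cost to produce one credible claim
    verify_cost = (1 - verify_decline) ** years  # cost to verify one claim
    # claims per dollar divided by verifications per dollar
    return verify_cost / claim_cost

gap_ratio(0)  # parity at the start
gap_ratio(3)  # roughly four plausible claims per affordable verification
```

The specific rates are invented; the compounding shape is the point.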
The naive model: more information means better underwriting
The naive AI bull case for investing is straightforward:
AI gives LPs and investors more information.
More information improves analysis.
Better analysis improves allocation.
There is truth in this. Search is better. Summarization is better. Document processing is better. It is easier to compare claims across memos, transcripts, websites, filings, code repositories, and market data.
But this model quietly assumes that information is mostly truth-bearing.
In public markets, that assumption is often disciplined by price, liquidity, disclosure, short sellers, and frequent feedback. In venture, the discipline is weaker. The most important claims are often about things that cannot be observed cleanly yet.
Is the AI product actually changing customer behavior?
Is the benchmark meaningful or selected for fundraising optics?
Is the gross margin story durable after inference costs, support, and churn?
Is the GP early to a real technical shift or simply fluent in the current vocabulary?
Is the reported mark realizable in a cash exit, or only in a stale private round?
AI can help analyze these questions. It can also help people produce better-looking weak answers to them.
The failure mode: proxy confidence compounds faster than ground truth
The dangerous case is not obvious fraud. It is proxy confidence.
A founder can show a better demo. A GP can write a sharper AI thesis. A portfolio company can describe its roadmap in language that sounds technically current. A market map can become more comprehensive without becoming more predictive. A benchmark can be directionally relevant without measuring the capability that matters commercially.
The surface improves. The underlying verification problem remains.
That asymmetry is the core of the problem. The claim artifacts are becoming steadily easier to produce, while the economic verification they invite remains expensive.
That is why the answer cannot merely be “use more AI in diligence.” The better answer is to design a research process around the parts AI does not automatically verify.
When is verification worth paying for?
The interactive model above is deliberately simple. It asks a basic question: if an LP has a meaningful venture commitment, how much diligence can be rational before it becomes economically silly?
The answer depends on four variables:
the capital influenced;
the annual verification wedge;
the information horizon;
and the degree to which better research changes the eventual outcome.
Berk and DeMarzo are useful because they make LP-side information production feel less like overhead and more like a necessary wedge in private-market contracting. In opaque, illiquid markets, informed investors can rationally spend real resources to become informed because the information changes allocation, access, or economics.
This is the business-design translation:
If research changes a $100M commitment by 4%, that is $4M of value at stake. A diligence process whose multi-year cost has a present value below that threshold is not obviously expensive. It is cheap if it prevents a bad re-up, identifies a stale mark, or supports a secondary decision.
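That back-of-envelope generalizes to a one-line break-even check. The function names and framing here are illustrative, not a formal model:

```python
def value_at_stake(commitment, outcome_delta):
    """Dollars at stake if research shifts a commitment's outcome by outcome_delta."""
    return commitment * outcome_delta

def diligence_is_cheap(diligence_cost_pv, commitment, outcome_delta):
    """True when the present value of diligence cost is below the value research can move."""
    return diligence_cost_pv < value_at_stake(commitment, outcome_delta)

value_at_stake(100_000_000, 0.04)                  # the essay's example: $4M at stake
diligence_is_cheap(2_000_000, 100_000_000, 0.04)   # a $2M process clears the bar
```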
This is also why generic content is not enough. A broad essay can create attention. A decision-linked underwriting process can create willingness to pay.
What technical underwriting should actually verify
The phrase “technical underwriting” can sound vague, so it needs a sharper object.
For AI-era venture portfolios, I would start with five verification targets.
1. Product reality
Does the product do something technically meaningful, or is it mostly a wrapper around a capability that will become abundant?
This requires using the product, talking to technical users, understanding the workflow, and separating demo quality from production value.
2. Evaluation quality
Are the company’s benchmarks or customer proof points measuring what matters?
A benchmark can be true and still irrelevant. A pilot can be real and still not predict deployment. A model can perform well in a narrow eval and still fail in the messy distribution of customer work.
3. Margin structure
AI revenue quality depends on inference costs, support burden, human-in-the-loop requirements, data processing, and pricing power.
A company can grow quickly while quietly renting too much of the value from model providers or cloud infrastructure.
4. Competitive durability
The key question is not “does AI help?” It is “does AI make the company more defensible?”
Sometimes AI strengthens a workflow moat. Sometimes it compresses differentiation. Sometimes it moves value to the model layer, the distribution layer, or the system of record.
5. Mark-to-cash path
For LPs, the final question is not whether a company sounds important. It is whether the reported value can become cash.
That means exit buyer universe, secondary appetite, revenue quality, burn, category consolidation, and strategic relevance all matter.
The operating model: technical truth translated for capital
The business implied by this argument is not a conventional research newsletter.
It is closer to a small technical-underwriting lab that converts frontier technical contact into capital-allocation decisions.
Layer 01
Technical truth
Direct contact with products, models, evals, and technical users.
Layer 02
Translation
Portfolio maps, company notes, technical-to-economic translation, scenario analysis, and explicit judgment.
Layer 03
Decision
Re-ups, pacing, secondaries, watchlists, mark credibility, commitment sizing, and AI exposure review.
The order matters. If the firm begins with LP relationship management, it will drift toward service work. If it begins with technical truth and converts that truth into LP decisions, the technical edge remains the production function.
A practical decision rule
The first version of the rule is simple:
Pay for verification when three things are true: the capital influenced is large, the feedback loop is long, and the technical claim is decision-relevant.
That implies a clear priority stack.
Highest priority
high-NAV, low-DPI venture funds;
upcoming re-up decisions;
AI-heavy portfolios with uncertain moat exposure;
secondary sales or purchases where marks may be stale;
GPs whose current strategy depends on technical claims that LPs cannot evaluate internally.
Lower priority
small commitments that cannot change portfolio construction;
generic market commentary;
claims with abundant public verification;
technical themes that sound interesting but do not affect a live decision.
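The decision rule and priority stack can be sketched as a simple filter. The three conditions are the essay's; the dollar and horizon thresholds are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    capital_influenced: float   # dollars the decision would move
    feedback_years: float       # how long until the claim resolves
    decision_linked: bool       # does a live action (re-up, secondary, sizing) depend on it?

def pay_for_verification(c: Claim,
                         min_capital=25_000_000,  # illustrative threshold
                         min_years=3.0) -> bool:
    """The essay's rule: verify when capital is large, the loop is long,
    and the technical claim is decision-relevant."""
    return (c.capital_influenced >= min_capital
            and c.feedback_years >= min_years
            and c.decision_linked)

# A high-NAV, low-DPI re-up qualifies; generic market commentary does not.
pay_for_verification(Claim(100_000_000, 7, True))   # True
pay_for_verification(Claim(1_000_000, 7, False))    # False
```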
This is the line between a research business and an underwriting business.
A research business helps people understand the world. An underwriting business helps people decide what risk to bear.
The claim to watch
The strongest form of the thesis is this:
AI will not just change which companies win. It will change which claims are cheap to make, which claims are hard to verify, and therefore where trust has economic value.
If that is right, LPs should not treat technical diligence as a nice-to-have. They should treat it as a response to a structural change in the information environment.
The world does not need more venture content that says AI is important. It needs better instruments for deciding which AI-shaped claims are true enough to underwrite.
That is the measurability gap. And it is investable only if it is verifiable.