Where are we at?
AI has shifted from a loose ecosystem into a handful of end-to-end platforms. A small group now controls chips, models and the interfaces where users live. That concentration changes where margins accrue and how power flows. For venture investors, the implication is simple: assume value leakage to the platform owners and invest accordingly. The winning playbook is tighter, more technical, and more deliberate about compute, distribution and exit paths.
Why it matters
The vertical integration of AI has two critical consequences:
Margin siphoning: When a platform controls supply (chips and cloud) and distribution (default apps and SDKs), it can capture a greater share of every downstream dollar. Nvidia’s proposed investment of up to $100bn in OpenAI, tied to multi-gigawatt deployments of Nvidia systems, is a clear example of vertical ties strengthening both supply and demand power. Regulators have already flagged antitrust concerns because such deals can tilt the playing field for everyone else.
Capital intensity: Training frontier and near-frontier models, or even running inference at scale, now requires firm commitments on accelerators (e.g., GPUs) and cloud. When a platform can standardise chips, models and apps under one roof (think Google’s TPUs powering Gemini and Google products), the bar for independent players rises.
Evidence snapshot
- Nvidia & OpenAI: Proposed up-to-$100bn investment and 10GW+ of planned capacity, with delivery from 2026 and antitrust scrutiny underway.
- Google: Ten years of TPU development now underpin Gemini and core Google apps, an integrated chip, model and app stack at a billion-user scale.
- Mistral: From €105m seed (2023) to €1.7bn Series C at an €11.7bn valuation (Sept 2025), plus a Microsoft partnership to distribute models on Azure, showing both the capital required and the power of distribution partnerships.
- Wrapper risk in apps: Early “layer-on-top” AI tools saw slower growth and valuation resets as model owners added native features and distribution, forcing strategy shifts.
These are signals, not outliers. They describe a market where vertical integration and distribution control increasingly decide winners.
The new VC playbook
1) Sourcing: where to hunt now
Your search lens should tighten to opportunities that the platforms either won’t or can’t subsume, or that benefit from the platforms’ gravity.
Go deep where platforms are thin. Three areas repeatedly produce durable companies:
- Vertical AI with proprietary data and workflows embedded: Regulated or complex domains (health, finance, industry, defence) where the value is not just a model but integration into the messy, permissioned workflow. Proprietary data rights, integrations, auditability and change management matter more than the latest benchmark.
- Enabling layers that platforms tolerate (or want): Optimisation, safety, verification, data tooling, latency/bandwidth solutions, evals and compliance, places where a platform prefers an ecosystem to flourish rather than rebuild in-house.
- Open-ecosystem hedges that help customers avoid one-stack lock-in: A somewhat contrarian approach, but when hyperscalers or foundation-model providers become de facto utilities, CIOs look for credible multi-cloud and multi-model strategies. Vendors that reduce switching costs and make performance/price transparent can ride that demand without picking a losing side.
What to avoid: Pure “thin wrappers” over someone else’s models and distribution. The risk is not only substitution; it’s that the platform can bundle your core feature at zero marginal price.
2) Underwriting & diligence: what to test
Underwrite the whole system: technical moat, compute dependency, distribution, unit economics and exit pathways. Keep the test list short and concrete.
Moat today vs tomorrow.
- Unique asset: Is there something hard to copy (data rights, workflow embed, approvals, hardware, a community, a defensibility flywheel)?
- Rate of erosion: If the platform adds your feature, how fast does it degrade? What must be true to retain users?
- Up-or-out triggers: Which milestones really increase bargaining power (e.g., exclusive data deals, regulatory clearances, throughput/latency breakthroughs)?
Compute plan.
- What is the bill of materials for training/inference (model size, tokens, context windows, refresh cadence)?
- Is there a second source (alt clouds/models, reservations, a strategic)?
- How would they absorb a 2x move in token or GPU prices? If the answer is “we can’t,” the forecasts are fragile.
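To make the 2x question concrete in an IC memo, a simple sensitivity model is enough. The sketch below is a minimal illustration; every figure (request volume, tokens per request, blended token price, revenue) is a placeholder assumption to be replaced with the company’s actual numbers.

```python
# Hypothetical stress test of inference unit economics.
# All figures are illustrative assumptions, not benchmarks.

def monthly_inference_cost(requests: int, tokens_per_request: int,
                           price_per_1k_tokens: float) -> float:
    """Total monthly spend on model calls."""
    return requests * (tokens_per_request / 1000) * price_per_1k_tokens

def gross_margin(revenue: float, compute_cost: float) -> float:
    return (revenue - compute_cost) / revenue

revenue = 100_000      # monthly revenue ($), assumed
requests = 2_000_000   # monthly model calls, assumed
tokens = 1_500         # avg tokens per call (prompt + completion), assumed
price = 0.01           # blended $ per 1k tokens, assumed

base = monthly_inference_cost(requests, tokens, price)
shock = monthly_inference_cost(requests, tokens, price * 2)  # the 2x move

print(f"Baseline: ${base:,.0f}/mo, gross margin {gross_margin(revenue, base):.0%}")
print(f"2x shock: ${shock:,.0f}/mo, gross margin {gross_margin(revenue, shock):.0%}")
```

On these assumptions, a 2x price move takes gross margin from 70% to 40%; a company with no credible response (reservations, model switching, caching) has fragile forecasts by construction.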
Distribution and pricing.
- Clear wedge into a painful workflow.
- Identified loser who will fight back (incumbent vendor, internal team, the platform).
- Pricing that survives platform bundling and open-source price pressure.
Operational readiness.
- Data governance, auditability, evals, safety and change management in regulated environments.
- Sales motion that maps to buyer type (bottom-up product-led vs top-down enterprise), with proof the team can execute.
Mini-checklist (use in IC memos)
• Moat in one sentence and how it grows.
• Compute plan with a second source.
• Distribution wedge and proof.
• Pricing power beyond subsidised APIs.
• Two credible exit pathways.
3) Portfolio construction & capital strategy
Assume higher capital intensity and milestone risk. Adjust reserves and syndication upfront.
Stage barbell, not spray-and-pray
At pre-seed/seed, back teams with proprietary inputs (data, workflow control, approvals) or outputs (distribution, buyer access). At Series A–B, prefer companies that already show compute discipline (reservations, model-choice pragmatism) and distribution traction (landed reference customers, growing net retention).
Syndication and reserves
- Bigger, earlier reserves for winners with compute or data moats. Pre-commit to pro rata and super pro rata where milestones deepen the moat.
- Syndicate with investors who can unlock in-kind supply (cloud credits, GPUs, distribution) or regulatory pathways.
- Use venture debt sparingly for working capital behind recurring revenue with predictable inference costs; avoid debt for speculative training runs.
Milestones that matter
Prefer milestones that alter the company’s bargaining power rather than vanity metrics. Examples: exclusive data partnerships, verified cost-to-serve reductions, enterprise certifications, model-agnostic architecture, or a distribution agreement that creates a durable customer pipe.
4) Defensibility by layer: what actually moats in 2025–26
Defensibility differs between infrastructure, models, and applications. Treat them separately.
Infrastructure (chips, cloud, serving, data plumbing)
Moats come from scale, reliability, and switching friction. If you don’t control silicon, you need privileged access (supply agreements, specialised accelerators, or managed services that reduce tail latency and cost). Public signals show how tightly chips and models are now tied. Nvidia’s proposed OpenAI deal and CoreWeave’s multi-billion-dollar contracts with OpenAI illustrate the size and intimacy of these links. Infra start-ups can still win, but they must own a customer-critical bottleneck or reduce cost/latency meaningfully at scale.
Models
Frontier competition is capital-heavy. The viable paths for non-platform players:
- Domain-specialised models with data/feed loops incumbents can’t copy
- Efficiency advantages (smaller, faster, cheaper for a class of tasks)
- Distribution partnerships (e.g., Microsoft–Mistral) that convert quality into access
The funding required and the speed of iteration are both rising, as demonstrated by Mistral’s rapid progression to an €11.7bn valuation with strategic backing.
Applications
Apps win via workflow ownership and embedded switching costs, not model novelty. Expect platforms to ship your best features; build things they don’t want to do (data wrangling, controls, messy integrations), or things they can’t (regulated operations, on-prem constraints, multi-vendor independence). Early “AI writing” tools provide a cautionary tale: once model owners tightened distribution and added native features, growth slowed and valuations reset.
5) Practical patterns to back and avoid
Patterns to back
- Data-advantaged verticals with hard integration: think claims processing with payer-side integrations, industrial maintenance with sensor/asset data rights, or underwriting using proprietary ground truth.
- Trust, safety and verification layers that make AI enterprise-ready (evals, red-teaming, policy enforcement, provenance, rights management).
- Latency and cost killers: inference-optimised serving, context compression, retrieval tooling where cost reductions are visible on the invoice.
- Multi-cloud, multi-model control planes that reduce lock-in and expose performance/price transparently for CIOs.
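To make the last pattern concrete: the core of a multi-model control plane is a thin routing abstraction that keeps providers swappable and prices comparable. A minimal sketch, with hypothetical provider stubs rather than real SDK calls:

```python
# Minimal sketch of a multi-model routing abstraction.
# Provider classes, responses and rates are illustrative stand-ins.
from typing import Protocol

class ModelProvider(Protocol):
    name: str
    def complete(self, prompt: str) -> str: ...
    def price_per_1k_tokens(self) -> float: ...

class ProviderA:
    name = "provider_a"
    def complete(self, prompt: str) -> str:
        return f"[{self.name} answer]"      # stub response
    def price_per_1k_tokens(self) -> float:
        return 0.010                        # assumed rate

class ProviderB:
    name = "provider_b"
    def complete(self, prompt: str) -> str:
        return f"[{self.name} answer]"      # stub response
    def price_per_1k_tokens(self) -> float:
        return 0.006                        # assumed rate

def route(providers: list[ModelProvider], prompt: str) -> str:
    """Send the prompt to the cheapest provider; production routing
    would also weigh quality, latency and data-residency constraints."""
    cheapest = min(providers, key=lambda p: p.price_per_1k_tokens())
    return cheapest.complete(prompt)
```

The value to the CIO is exactly what the code makes visible: providers become interchangeable behind one interface, and price/performance becomes a comparable number rather than a lock-in.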
Patterns to avoid
- Feature wrappers with thin moats and a single-provider dependency.
- Benchmarks-as-moat stories. Those decay fast.
- Pure commoditised infra without privileged supply or distribution.
6) Working with the platforms (without being swallowed)
You can partner with giants and still keep leverage. Principles:
- Be useful to them, not threatening: Enablers thrive where platforms prefer an ecosystem.
- Negotiate for distribution, not only credits: Credits are table stakes; what you want is exposure (marketplaces, default toggles, reference architectures). OpenAI’s recent Apps SDK push shows how distribution channels change quickly; design your product to ride the wave but not drown if it shifts.
- Stay model- and cloud-portable: Treat portability work as capex in bargaining power.
- Instrument your dependency: Track cost-to-serve by customer, by use case, by model; surface unit economics that don’t rely on temporary subsidies.
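What “instrument your dependency” can look like in practice: a minimal sketch, assuming you log one record per model call. All field names are hypothetical; the point is that subsidised usage is flagged so unit economics can be reported without temporary credits.

```python
# Minimal cost-to-serve instrumentation: one record per model call.
# Field names are hypothetical; adapt to your own billing data.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class CallRecord:
    customer: str
    use_case: str
    model: str
    cost_usd: float     # actual billed cost for this call
    subsidised: bool    # True if covered by credits/discounts

def cost_to_serve(records: list[CallRecord]) -> dict[tuple, float]:
    """Aggregate unsubsidised cost by (customer, use_case, model)."""
    totals: dict[tuple, float] = defaultdict(float)
    for r in records:
        if not r.subsidised:    # back out temporary subsidies
            totals[(r.customer, r.use_case, r.model)] += r.cost_usd
    return dict(totals)
```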
7) Governance, safety and enterprise readiness
In regulated or high-risk domains, how you ship matters as much as what you ship.
- Data governance & rights: Clear data provenance, usage rights, and revocation paths.
- Evals and audits: Align with buyer expectations; third-party auditability is becoming standard practice for enterprise buyers as large providers call for more independent scrutiny.
- Change management: Enterprise adoption fails without training, policy updates and process redesign; budget and staff for this upfront.
- On-prem / sovereign needs: Plan for customers who cannot move sensitive data to public clouds; portability and deployment options widen your market.
8) Pricing and unit economics
Price for business value while tolerating a falling cost floor.
- Assume open-source and platform bundling will pressure list prices. Protect gross margin with workflow embed, switching costs, and measurable cost/quality advantages.
- Quote and bill on customer-visible metrics (e.g., cost per claim processed, cost per resolution, latency SLA tiers) rather than abstract token counts.
- Nudge customers into predictable usage with tiered SLAs and commit-based discounts that match your own compute reservations.
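A worked example of billing on customer-visible metrics: translate token spend into a per-unit price the buyer recognises. Every number below is a hypothetical assumption for a claims-processing workload.

```python
# Illustrative translation from token costs to a per-claim price.
# All figures are hypothetical assumptions.

TOKENS_PER_CLAIM = 4_000      # avg tokens to process one claim (assumed)
PRICE_PER_1K_TOKENS = 0.01    # blended $ rate (assumed)
OVERHEAD_PER_CLAIM = 0.02     # retrieval, storage, review ($, assumed)
TARGET_GROSS_MARGIN = 0.75    # pricing target (assumed)

cost_per_claim = TOKENS_PER_CLAIM / 1000 * PRICE_PER_1K_TOKENS + OVERHEAD_PER_CLAIM
price_per_claim = cost_per_claim / (1 - TARGET_GROSS_MARGIN)

print(f"Cost per claim:   ${cost_per_claim:.2f}")   # $0.06 on these inputs
print(f"Quoted per claim: ${price_per_claim:.2f}")  # $0.24 on these inputs
```

Quoting $0.24 per claim processed survives token-price deflation far better than a per-token markup: if the cost floor falls, margin expands instead of list price.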
9) Talent and operating cadence
This market rewards senior, pragmatic builders. Hire for systems thinking (infra), ML optimisation (models), and product managers who can translate messy workflows into software. Keep iteration loops short and measure what platforms can’t see (offline metrics, human-in-the-loop quality, task completion times). Your advantage is in the ugly corners of the workflow, not the shiny demo.
10) Exit planning in a consolidating market
Plan exit paths at investment time, not the round before IPO.
- Strategic M&A: Most likely for enablers that de-risk or accelerate a platform’s roadmap, or vertical apps that bring data/workflow assets the platform wants. Deals are easier when you already integrate well and your revenue is concentrated in the buyer’s ecosystem.
- IPO: More realistic for infra with scale, or vertical apps with high retention and clear unit economics. Public investors care about durable gross margin, net retention, dependence on a single platform, and capex visibility.
- Partial liquidity: In longer, capital-heavy journeys, secondaries and structured rounds can right-size risk. Use them to align horizons, not to mask weak fundamentals.
Risks & counters
Three plausible pushbacks:
- Antitrust and “circular” finance unwind. If regulators block or constrain megadeals (chips, models, cloud), more oxygen returns to the open market. That could widen the aperture for independent infra and models. For now, scrutiny is rising, but the deals are proceeding.
- Open-source step changes. If open models compress the cost/quality frontier faster than expected, commoditisation accelerates at the model layer. Moats will rely even more on data rights, workflow embed and distribution.
- Buyer fatigue. If enterprises stall AI rollouts due to governance, ROI or change-management friction, growth shifts towards tools that reduce risk and deliver hard savings, favouring enablers over frontier bets.
The Takeaway
AI’s centre of gravity has moved to a few vertically integrated stacks. As a result, value pools at the edges (chips and distribution) and leaks out of shallow apps. The right VC posture is focused and practical: hunt where platforms are thin, underwrite compute and distribution like supply-chain dependencies, and build portfolio strategies around real bargaining-power milestones. Plan exits early, keep leverage with portability and data rights, and measure value where it’s created: in the messy workflows platforms don’t want to own.