From Cloud to Concrete: Venture's Role in the Data Center Boom

Catherine Lee
Freelance Content Writer

Behind the headlines about models and agents, an equally important story is unfolding in concrete, steel and power. Analysts now project that AI-related infrastructure spending could reach several trillion dollars by the end of the decade, with hyperscalers (the largest cloud providers, such as Google and Meta) already on track for hundreds of billions in annual capex. The International Energy Agency expects global investment in data centers to exceed new oil exploration spending in 2025.

In November, Anthropic announced a plan to build roughly $50 billion worth of custom AI data centers with neocloud partner Fluidstack (neoclouds are GPU-first cloud providers built specifically for AI workloads, offering high-performance compute that traditional clouds were not originally designed to handle). Meta outlined a multi-hundred-billion-dollar program to build AI-ready campuses across the United States. Google revealed Project Suncatcher, a research initiative exploring space-based TPU data centers powered by continuous solar radiation.

Much of the capital flowing into AI infrastructure today comes from infrastructure funds, private equity and sovereign wealth funds. At the top of the stack, data-center deals already resemble classic infrastructure: massive campuses, multi-decade power agreements and balance sheets measured in billions. Neocloud operators like CoreWeave, Crusoe, Nebius, Nscale and Lambda build AI-optimized campuses and sell GPU capacity to model labs, enterprises and hyperscalers. Crusoe raised over a billion dollars in its Series E at a roughly $10 billion valuation, and Nscale recently closed what was reported as the largest Series B in European history, topping a billion dollars. Lambda raised nearly half a billion in equity and layered on large credit facilities tied to GPU inventory.

For venture funds, the opportunity is to own the intelligent layers that these giant data-center projects rely on — the software that runs them, the hardware that makes higher density possible and the tools that help developers actually use all this new infrastructure.

  • The control layers: AI tools that act like the control room for a data center, making decisions about power use, cooling and which AI tasks run where (Phaidra, Emerald AI, NetBox Labs).

  • Enabling hardware: Better ways to cool hot AI chips using liquid, smarter power hardware that delivers electricity more efficiently and next-generation memory that helps data centers pack more compute into each rack (Corintis, Accelsius, Skycore, FMC).

  • Developer abstractions: Tools that let AI teams use neoclouds without needing heavy infrastructure expertise — dashboards, APIs and workflow layers that make these GPU platforms feel easy to build on. This area is emerging and sits outside the scope of this piece.

Climate-focused venture funds in particular have been framing this space as part of their energy and infrastructure theses, looking for software-defined efficiency layers and grid-integrated approaches that reduce the energy intensity of AI workloads. This space is a natural fit for firms that understand both energy economics and early-stage software.

Closer look at the software and control layers

This is the cleanest “venture-shaped” bucket: software and AI agents that manage cooling, power and workloads inside large AI campuses. The buyers may be hyperscalers and neoclouds, but the business model looks closer to industrial SaaS.

Phaidra builds AI agents that control industrial processes, with a growing focus on data centers. It recently raised a Series B of more than $50 million backed by both traditional venture funds and strategics. Its thesis is simple: as data centers become AI factories, their operations should be controlled by AI as well.

Emerald AI treats data centers as flexible grid assets. Its platform can shift or pause workloads so that campuses reduce power consumption during peak grid events without violating service obligations. Its seed and extension rounds brought total funding to more than $40 million, with backing from climate investors, strategic utilities and AI-focused venture funds.
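Emerald AI's actual control logic is not public, but the demand-response idea is easy to sketch: classify workloads by whether they can be paused without breaking service obligations, then shed only the deferrable ones during a grid event. A minimal toy model, with hypothetical job names and power figures:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    power_kw: float
    deferrable: bool  # can be paused without violating service obligations

def campus_load(jobs, during_grid_event):
    """Total campus draw in kW; deferrable jobs are paused during a peak event."""
    return sum(j.power_kw for j in jobs
               if not (during_grid_event and j.deferrable))

# Illustrative workload mix, not real customer data
jobs = [
    Job("inference-api", 400.0, deferrable=False),   # latency-critical
    Job("batch-training", 900.0, deferrable=True),   # can checkpoint and pause
    Job("checkpoint-sync", 150.0, deferrable=True),
]

print(campus_load(jobs, during_grid_event=False))  # 1450.0 kW normally
print(campus_load(jobs, during_grid_event=True))   # 400.0 kW during the event
```

The real systems add forecasting, checkpointing and SLO modeling on top, but the core trade (curtail flexible compute, protect critical serving) is what makes a campus look like a flexible grid asset to a utility.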

Tibo Energy is applying AI-driven energy management systems to commercial and industrial sites, including edge data centers. It raised a 6 million euro seed round and is deployed across dozens of industrial facilities in Europe.

This category is prime territory for Seed through Series B. Rounds fall in the tens of millions, ownership can remain meaningful and most of the capital intensity sits on the customer’s balance sheet rather than the startup’s.

Closer look at network and infrastructure control planes

A second software layer is the “source of truth” for physical and logical infrastructure inside AI campuses.

NetBox Labs, for example, commercializes the widely used open-source NetBox project, which functions as a system of record for network and infrastructure topology. Customers range from traditional enterprises to AI infrastructure operators. NetBox Labs raised a $35 million Series B with participation from top enterprise and infrastructure investors. For VCs, this is a classic control-plane bet: if a system becomes the default operating system for AI campuses, it can compound for a decade or more.

Earlier stage companies in this space target narrower control problems than NetBox, such as GPU scheduling, multi-cluster networking or observability for high-density racks.

Closer look at power electronics and semis for AI data centers

Power systems inside data-center racks are getting a major upgrade as AI drives up both power use and efficiency requirements. This is a technical, slow-moving part of the stack, but demand from AI infrastructure is pulling change forward.

Skycore Semiconductors, based in Denmark, is building high-voltage DC power integrated circuits for next-generation AI data centers, as the industry shifts away from legacy 54V power systems towards 800V high-voltage direct current (HVDC) architectures, which can handle much higher power demands more efficiently. It recently raised a 5 million euro seed round from deeptech investors, led by Amadeus APEX Technology Fund.
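The efficiency argument behind the 800V shift is basic conduction physics: for a fixed power draw, bus current falls linearly with voltage, and resistive loss in the distribution path falls with the square of the current. A back-of-the-envelope sketch (the power level and busbar resistance below are illustrative, not Skycore figures):

```python
def bus_current_amps(power_w, voltage_v):
    """Current drawn on the bus for a given power at a given voltage (I = P / V)."""
    return power_w / voltage_v

def conduction_loss_w(power_w, voltage_v, resistance_ohm):
    """Resistive loss in the distribution path (P_loss = I^2 * R)."""
    i = bus_current_amps(power_w, voltage_v)
    return i * i * resistance_ohm

P = 1_000_000  # 1 MW of rack power, illustrative
R = 0.0001     # 0.1 milliohm of busbar resistance, illustrative

legacy = conduction_loss_w(P, 54, R)   # ~18,500 A on the bus
hvdc = conduction_loss_w(P, 800, R)    # 1,250 A on the bus

print(round(legacy / hvdc))  # ~219x lower loss, i.e. (800/54)^2
```

The same square law also shrinks conductor cross-sections, which is why the transition is framed as enabling "much higher power demands" rather than just saving energy.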

Ferroelectric Memory Company is a later-stage example, having raised about 100 million euros to advance energy-efficient non-volatile memory for high-density AI servers.

In this category, seed and Series A rounds still fund unproven technologies, while large growth rounds follow once reliability and design wins become clear.

Closer look at cooling and thermal infrastructure

Cooling is transitioning from incremental improvements to fundamental architectural shifts. Every GPU is essentially a miniature furnace. As models grow and each chip draws a kilowatt or more, air cooling alone becomes insufficient. Liquid and two-phase systems remove heat more efficiently and closer to the source, which allows more compute in the same footprint and reduces power used on cooling.
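The case for liquid follows directly from coolant heat capacity: moving the same heat with water takes a far smaller volumetric flow than with air (Q = ṁ·c_p·ΔT). A rough sketch, with an illustrative rack power and coolant temperature rise:

```python
def mass_flow_kg_s(heat_w, cp_j_per_kg_k, delta_t_k):
    """Coolant mass flow needed to absorb heat_w at a given temperature rise."""
    return heat_w / (cp_j_per_kg_k * delta_t_k)

RACK_W = 100_000  # 100 kW rack, illustrative
DT = 10           # 10 K coolant temperature rise

air = mass_flow_kg_s(RACK_W, 1005, DT)    # c_p of air ~1005 J/(kg*K)
water = mass_flow_kg_s(RACK_W, 4186, DT)  # c_p of water ~4186 J/(kg*K)

# Volumetric flow: air is ~1.2 kg/m^3, water is ~1000 kg/m^3
print(round(air / 1.2, 1))     # ~8.3 m^3/s of air
print(round(water / 1000, 4))  # ~0.0024 m^3/s of water
```

Thousands of times less coolant volume per second is what lets liquid loops sit millimeters from the die instead of blowing air across a whole room, and it is the physical basis for the density claims made by the companies below.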

Corintis raised a $24 million Series A for microfluidic cooling technology that routes liquid through fine channels near the chip. Testing suggests it can remove heat significantly more efficiently than traditional methods, enabling higher-density racks.

Accelsius, based in Austin, is commercializing a two-phase direct-to-chip cooling system that can deliver roughly a tenfold increase in rack density while cutting cooling energy roughly in half, with no water required. It also raised a $24 million Series A.

Most earlier-stage startups here are either lab spin-outs in stealth or narrow materials companies working directly with large cloud providers and OEMs. Hyperscalers are the primary buyers driving this shift, and achieving design-partner status with those key buyers is critical.

Risks and challenges for VCs

Industrial sales dynamics
Even asset-light layers like Phaidra ultimately sell into multi-gigawatt industrial projects with long integration cycles. That means their revenue looks less like classic SaaS and more like industrial software, where sales take much longer to close, deployments require deep testing with physical systems, and revenue shows up in large but uneven chunks tied to project milestones rather than smooth monthly subscriptions. Growth can still be strong, but it ramps later because customers only scale after several successful integrations.

Technology and architectural risk
Cooling, power and control products are often tuned to specific GPU generations or rack designs. If a hyperscaler standardizes on a new architecture, entire product categories can become obsolete overnight. The pace at which leaders like Nvidia and Microsoft push new designs adds real volatility.

Customer concentration
The buyer universe is small: a few hyperscalers, a short list of neocloud operators and several large utilities. Design-partner access is valuable, but if a single hyperscaler insources the functionality or chooses a competing standard, a startup’s realistic TAM can collapse quickly.

Regulatory, ESG and geopolitical pressure
Data centers are increasingly treated as critical infrastructure. National-security rules, data-sovereignty policies and foreign-ownership restrictions shape siting, partners and exit options. At the local level, grid stress, land-use fights and water scarcity can delay deployments, which directly impacts startups whose value propositions assume rapid rollout.

Exit dynamics
AI infrastructure spending may reach trillions, but returns can take years to materialize. First movers in capital-intensive cycles do not always capture the best equity outcomes. For VCs, this likely means longer holding periods, more exits to strategic acquirers and a premium on syndicates that understand deep infrastructure risk.

Closing thoughts

The bar is high for founders and early-stage investors betting on the intersection of AI infrastructure, energy and software: winning teams will need a rare combination of deep technical credibility, a clear understanding of grid and regulatory realities, the channel relationships to sell quickly, and a strategy to avoid being crushed or absorbed by the giants they sell into.