Open Market Wire

While Everyone Is Watching the AI Race, Smart Money Is Collecting the Toll


The AI buildout is real. The chip story is real. But the most durable investment opportunity isn't in the chips or the software — it's in the physical infrastructure every data center, every AI company, and every cloud provider must pay for, in ever-growing quantities, for the next decade.

There are four categories of constraint the entire AI economy runs through. The companies inside these constraints have pricing power most equity analysts haven't modeled yet.

Grant Calloway calls it the Invisible Toll Booth. Every car on the highway pays. It doesn't matter who wins the race.


Let me show you something about the AI economy that most investors are missing.

You know AI is real. You've been watching the data center announcements, the hyperscaler capital expenditure numbers, the Nvidia earnings calls. You probably own something in the space, or you've been watching it move without you.

Here's what nobody in mainstream financial media is saying clearly:

Every data center that gets built needs electricity. More electricity than your mind can easily picture.

A single modern AI training cluster — the kind used to train a large language model — consumes as much electricity as a small American city. Running continuously. All day, every day.

The four largest cloud providers — Microsoft, Amazon, Google, and Meta — have announced combined infrastructure capital spending commitments of more than $400 billion. That number is not a projection. It is a commitment. It is already in motion.

All of that capex has to become physical reality. Metal, wire, cooling, electricity. Hardware that takes years to design, permit, manufacture, and install.

And here is where the investment story changes entirely.

The AI economy runs through a set of physical choke points — infrastructure categories that have no software substitute, no faster manufacturing workaround, and no import alternative that scales quickly enough to meet demand. These choke points are the Invisible Toll Booth.

The toll booth doesn't compete with any AI company. It doesn't care which model wins. It doesn't care whether OpenAI or Anthropic or Google is the last one standing. Every one of them has to pay the toll. Every data center that gets built this year, and next year, and the year after that, writes the same check to the same small set of infrastructure suppliers.

That's not a metaphor. It's a structural constraint that has no software analog and no financial workaround.

Here's the one fact that puts the whole picture in focus.

Lead times for large power transformers — the equipment that steps electricity down from transmission voltage to the lower voltages a data center can actually use — are currently running two to three years. In some categories, longer.

Two to three years. For equipment that a data center cannot operate without.

This is not a supply chain problem that gets solved in the next earnings quarter. It is a physical manufacturing constraint built on decades of underinvestment in domestic production capacity, now meeting the fastest ramp in electricity demand the United States has seen in a generation. The companies that make these transformers are not scrambling to catch up. They are selling everything they can produce, years in advance, and watching their order books extend while their pricing power expands.

This is what a toll booth looks like.

Most investors are watching the race. The AI software companies, the chip companies, the cloud providers — they dominate the financial headlines, the analyst coverage, the portfolio discussions. All of that attention is focused on who will win.

The toll booth doesn't need to win anything. It needs to collect.


The Race vs. The Toll Booth


Here is the problem with how most investors are positioned in AI right now.

The financial world has organized itself around a single question: who wins the race?

OpenAI or Anthropic? Google or Microsoft? Nvidia or whoever comes next? Every analyst report, every CNBC segment, every portfolio discussion in every wealth management office in America is structured around some version of that question.

That question is not wrong. It is just aimed at the wrong problem.

Because here's what the race requires, regardless of who wins it.

The four largest cloud providers — Microsoft, Amazon, Google, and Meta — collectively spent more than $350 billion in capital expenditures in 2025 alone. Not projected. Spent. Already on the books — and 2026 guidance implies more.

What does $350 billion in AI infrastructure actually buy?

It buys data centers. Each data center needs a site, a building, power delivery infrastructure, cooling systems, and network connectivity. Each one of those things requires physical materials and physical manufacturing capacity. None of them can be downloaded. None of them have a software substitute.

The bill of materials for a modern hyperscale AI data center runs to hundreds of millions of dollars per facility — and the largest campuses now under construction are routinely reported at $1 billion or more. That annual capex figure implies hundreds of such facilities being designed, permitted, and built every year for the foreseeable future.

Every one of them writes the same set of checks.

Now compare two market capitalizations.

Nvidia — the company most closely associated with the AI buildout — trades at a market cap of $4.3 trillion. Its valuation reflects, at minimum, the current AI boom and significant future pricing power in the GPU market. The market has found the Nvidia story. It is priced.

Eaton Corporation — the power management company whose transformers, switchgear, and electrical distribution equipment every one of those data centers must install before a single GPU can run — trades at a market cap of $140 billion. It is covered by industrial sector analysts. Its AI infrastructure exposure is not a line item in most AI portfolio models. It does not appear in AI-themed ETFs.

Thirty times smaller. For the company that sells the equipment without which the $4.3 trillion company's customers cannot operate.
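That thirty-times figure is simple arithmetic. A quick sketch, using only the market-cap figures quoted above:

```python
# Market-cap figures as quoted in this article (approximate, point-in-time).
nvidia_market_cap = 4.3e12   # $4.3 trillion
eaton_market_cap = 140e9     # $140 billion

# How many times Eaton's size is Nvidia?
ratio = nvidia_market_cap / eaton_market_cap
print(f"Nvidia's market cap is roughly {ratio:.1f}x Eaton's")  # roughly 30.7x
```

The point is not the precision of the ratio but its order of magnitude; the exact numbers will drift with the market, while the gap itself is what matters.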


You can argue about Nvidia's valuation. You can argue about which hyperscaler's model strategy is most defensible. You can argue about whether the AI boom sustains or corrects.

You cannot argue with the fact that the electricity has to get there.

Eaton doesn't need AI to win. It needs AI to keep building. And the building is already committed, funded, and underway.

That is the re-rating gap. The market has priced the race. It has not yet priced the toll.

See this month's top-ranked names →

The Four Toll Booths


The Invisible Toll Booth is not one constraint. It is four.

Each lane operates in a different market segment, is tracked by different analysts, and trades on a different set of metrics. But all four place the same economic demand on any company trying to build AI compute infrastructure: pay the toll, or the data center does not run.

Here is each lane.


Lane 1 — Grid Hardware


A large power transformer can contain hundreds of tons of steel, copper, and insulating oil. That transformer is the reason a data center can receive electricity at all — the equipment that steps high transmission voltages down to the levels a building can actually use.

You cannot order one overnight. You cannot order one this year.

Lead times for large power transformers have extended substantially as data center construction demand has collided with a domestic manufacturing base that was never sized for this level of throughput. A Wood Mackenzie Q2 2025 industry survey found power transformers averaging 128 weeks from order to delivery — roughly two and a half years — with generator step-up units running 144 weeks, nearly three. That is the current wait for equipment every data center must have before it can connect to the grid.

This constraint was not created by AI. It existed before the current buildout cycle began. Grid infrastructure was underinvested for decades, and the transformers serving much of the existing U.S. grid are aging well past design life. Demand to replace aging stock and demand to serve new data center capacity are competing for the same limited manufacturing output, simultaneously.

The companies that manufacture this equipment — and the companies that design and install the broader electrical distribution systems data centers require — are operating in a seller's market with visibility stretching years into their order books. Eaton Corporation — the largest power management supplier to the U.S. data center market — reported 29% year-over-year backlog growth in its Electrical segment in Q4 2025, with "data center momentum" cited explicitly as the primary driver. This is not typical industrial market behavior. This is constrained critical infrastructure serving an unconstrained demand curve.


What this means for the investor

The order backlog is auditable. The margin progression is visible in quarterly filings. Contract pricing is moving in one direction. This is the toll booth at its most literal: you cannot connect a data center to the grid without this equipment, and the companies that make it know it.



Lane 2 — Power Capacity


Data centers do not just need transformer equipment. They need the electricity in the first place.

The scale of demand has transformed utility contracting. Hyperscalers are signing 10- to 20-year power purchase agreements. They are acquiring generation assets directly. Some have funded the restart of shuttered nuclear plants to secure reliable baseload supply at the volumes they need.

The evidence of the constraint is in the grid interconnection queues. As of the end of 2024, more than 1,400 gigawatts of new generation capacity were actively seeking grid connection in the United States — comparable to the entire current installed generating capacity of the country. Most of this will never reach commercial operation. The median time from interconnection request to commercial operation has more than doubled since 2000, now exceeding four years for projects that do get approved.


Electricity cannot be imported. Power generation takes years to permit, finance, and construct. The pipeline of new AI demand is running faster than the pipeline of new supply.


What this means for the investor

The utilities and independent power producers on the right side of long-term data center contracts hold something the AI economy cannot do without — contracted electricity delivery at scale. Their earnings visibility and their pricing power have been fundamentally repriced by the same demand wave driving hyperscaler capex. Many are still priced like the companies they were five years ago.



Lane 3 — Copper


Every electron that powers a data center travels on copper. Not fiber, not silicon — copper. The wire, the busbars, the conductors, the cabling in every transformer, every cooling unit, every rack installation. The entire electrical infrastructure of the AI buildout is built on a metal whose supply chain moves in decades, not quarters.

AI infrastructure demand is landing on a copper supply base that was already strained before the current buildout began. Energy transition investment — solar panels, wind turbines, EV charging networks — had already created a structural demand increase that mine supply was not positioned to absorb quickly. AI demand is layering on top of that.

New copper deposits take a decade or more from discovery to first production. The sequence — exploration, permitting, financing, construction — cannot be compressed by capital alone. The mines that would meaningfully increase supply five years from now either already exist or are in early permitting. Major mining companies and analysts broadly anticipate a structural copper supply gap emerging later this decade — a deficit that no amount of capital can solve quickly once the demand lands.


What this means for the investor

This is the hardest constraint on the list to short-circuit. No software substitute exists for copper. A better algorithm does not reduce the pounds of copper in a data center installation. Demand is growing, and the supply side cannot respond in the near term. The publicly traded companies with the lowest-cost production, the most defensible reserve positions, and the cleanest balance sheets are positioned to benefit most as that deficit arrives.



Lane 4 — Thermal Management


Heat is the enemy of compute density.

Modern AI accelerators generate heat at rack densities that air cooling cannot physically handle. Moving enough air to cool a fully loaded high-density GPU rack is simply not achievable in a dense deployment environment. The physics do not allow it.

The industry's response is liquid cooling — systems that circulate cooling fluid directly to heat-generating components — and the transition is no longer optional for serious AI compute buildouts. Liquid cooling is now a standard engineering requirement for large-scale AI infrastructure, not an upgrade.


The systems are custom-configured, installation-intensive, and manufactured by a small set of specialized suppliers. Demand has arrived faster than production capacity in this segment, exactly as it did in the grid hardware lane. Lead times are extending. Pricing is following.


What this means for the investor

This is the newest and least-followed of the four lanes — which means it is also the one where the repricing cycle is earliest. The mainstream data center industry is still early in its transition from air to liquid cooling at scale. The companies building and delivering that infrastructure are early in a multi-year opportunity that institutional coverage is still discovering.



Four lanes. Four structural constraints. Every data center under construction at this moment is writing checks to all four simultaneously.

The AI economy built the race. The Invisible Toll Booth collects the toll.

See this month's top-ranked names →


Why This Isn’t Already Priced In


The most common objection to this thesis is that it is already known.

"Everyone knows about the transformer shortage." "Infrastructure is a consensus trade now." "You're late."

Here is the truth: the physical constraint is becoming known. The investment thesis is not yet priced.

There is a meaningful difference between the two.

Nvidia — the obvious AI infrastructure winner, the most-analyzed technology company currently trading — has a market capitalization of $4.3 trillion and trades at approximately 20 times annual revenue. That is what "fully priced" looks like. Hundreds of thousands of analysts, portfolio managers, retail investors, and quantitative models track every earnings call. The moment the AI story became clear, capital came. The valuation reflects every incremental piece of good news. The market found the race long ago.

Now look at the infrastructure category.

The transformer manufacturers, copper producers, grid infrastructure specialists, and thermal management companies sitting in the direct path of that same AI spending wave trade at industrial multiples. Utility multiples. Commodity multiples. They trade where they trade because the analysts who cover them are energy analysts. Industrial analysts. Materials analysts. Not AI analysts.

AI analyst reports do not include transformer manufacturers. Not yet.

That is the gap between "becoming known" and "priced."

Institutional capital allocation is driven by analyst coverage. When an AI analyst publishes a buy on a grid hardware company for the first time, capital from AI-focused allocations flows into that name — capital that had never found it before, because it was not in the research universe. When a commodity analyst and a utilities analyst and an AI analyst are all writing about the same company for the same reason simultaneously, coverage converges. The stock reprices.

This is a process, not a moment. We can watch it advance in real time through coverage initiation patterns, relative valuation gaps, and the rate at which the physical infrastructure story crosses from specialist trade publications into mainstream financial research.


"When the same story starts showing up in AI analyst reports and commodity analyst reports and utility analyst reports at the same time, that is when the repricing happens at scale. We are roughly two-thirds of the way there."


The window is closing. It is not yet closed.


How We Find the Best Toll Booth — The Scarcity Trade Scorecard

Understanding the toll booth thesis is the first move. The harder, more valuable move is identifying which specific company, in which specific lane, offers the best combination of fundamental position, current setup, and room for re-rating.

That question is what the Scarcity Trade Scorecard was built to answer.

The four-lane thesis makes every name in these categories look interesting at first glance. Transformer manufacturer? Toll booth. Copper producer? Toll booth. Liquid cooling supplier? Toll booth. But within each lane, execution quality varies significantly. Valuation setups vary. The rate at which the market is discovering each specific name varies. Treating all toll booth names as equivalent is how you end up owning the right thesis in the wrong stock.

We evaluate every name on four factors:

Current-event sensitivity (0–3)
Is there a direct connection between what is happening now — contract wins, backlog expansions, guidance revisions — and this company's near-term earnings power? The thesis should be active, not just eventually correct.

Pricing power under constraint (0–3)
Can this company protect or expand its margins as the supply constraint tightens? A transformer manufacturer with a multi-year backlog and rising contract prices scores differently than one with undifferentiated commodity exposure. This is the dimension that determines whether a good story compounds into a strong return.

Execution visibility (0–2)
Can we track delivery through the company's own filings? Infrastructure companies routinely publish backlog data, contracted revenue, and order book metrics that give forward visibility you cannot find in most technology stocks. We want to see the evidence in writing, not take a bet on management intentions.

Valuation setup (0–2)
Is the market opportunity still in front of the stock? A company where the re-rating has already happened — where the stock has doubled on the thesis everyone now knows — scores differently than a name where the fundamental position is strong and the valuation still reflects the old category.

The company with the highest total score represents the strongest current setup across all four factors. Every issue explains the score, not just the number. A score without reasoning is not research.

The model is intentionally transparent. The goal is for you to understand it well enough to apply it yourself when you encounter a name we haven't covered yet. That is the test of a useful framework.
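The four factors above can be expressed as a simple scoring function. This is an illustrative reconstruction, not Open Market Wire's actual model: the factor names and point caps come from the descriptions above, and the example scores are invented, not a rating of any real company.

```python
# Point caps per factor, as defined in the Scarcity Trade Scorecard.
FACTOR_CAPS = {
    "current_event_sensitivity": 3,       # 0-3
    "pricing_power_under_constraint": 3,  # 0-3
    "execution_visibility": 2,            # 0-2
    "valuation_setup": 2,                 # 0-2
}

def total_score(scores: dict) -> int:
    """Validate each factor against its cap and return the total (max 10)."""
    for factor, points in scores.items():
        cap = FACTOR_CAPS[factor]
        if not 0 <= points <= cap:
            raise ValueError(f"{factor} must score between 0 and {cap}, got {points}")
    return sum(scores.values())

# Hypothetical example: a well-positioned grid-hardware name where part of
# the re-rating has already happened, so the valuation setup scores lower.
example = {
    "current_event_sensitivity": 3,
    "pricing_power_under_constraint": 2,
    "execution_visibility": 2,
    "valuation_setup": 1,
}
print(total_score(example))  # prints 8, out of a possible 10
```

In the framework as described, the reasoning behind each score matters as much as the number, so any real use of a sketch like this would attach a written justification to every factor.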

This is the research model at the core of Open Market Wire — and you can start following it with an accessible subscription.


About Grant Calloway


I spent two decades inside institutional finance — portfolio management, research, capital markets. I know how the machine works because I ran parts of it.

What I also know: the machine is not designed to find the best ideas. It's designed to manage the firm's interests. Those are different objectives. They produce different research.

Open Market Wire is independently published — no investment banking relationships, no paid endorsements of stocks, no house view. The analysis that ends up here is the analysis that institutional research has no incentive to produce.