If you're catching up: Part 1 — Why Nadella just rewrote SaaS in one sentence and Part 2 — Tokens are the new ARR
Hey {{first_name|friend}} -
In Part 1, I made the case that the seat is no longer where software revenue lives. The seat is the floor; consumption is the upside. Part 2 named the unit replacing ARR — the token — and the three questions to ask when a software CEO talks about AI revenue.
Today is the portfolio map. I cover the structural beneficiaries, the headwinds, and the picks and shovels that profit no matter which platform wins.
Applying the CapEx Payback Test
With combined Mag 7 CapEx on track to exceed $1T in 2026, the question that matters is what the spend is actually buying. CapEx — short for capital expenditures, the long-term infrastructure spending pouring into AI data centers and chips — is the largest line item in tech right now, and the Mag 7 (the seven largest US tech stocks: Apple, Microsoft, Alphabet, Amazon, Meta, Tesla, and Nvidia) account for the bulk of it.
To score whether the spend is actually turning into revenue, I use what I call the CapEx Payback Test. Three signals.
One — AI-attributed revenue growth. Is AI showing up in the top line, or just in the press release? Cloud growth re-accelerating, AI-product attach rates, named enterprise contracts — those are signal. Vague references to "AI tailwinds" on an earnings call are not.
Two — operating margin direction. Is the spending compressing margins, or are they holding? Massive CapEx with steady or expanding margins means the spend is producing high-value output. Massive CapEx with collapsing margins means the company is paying for a future it hasn't earned yet.
Three — contracted future revenue. Booked backlog — what companies report as their remaining performance obligation, or RPO (the dollar value of contracts signed but not yet delivered). It's the truest signal of real demand, because it's revenue customers have already legally committed to. Press releases are noise. Backlog is signal.
My personal bar: a name has to pass at least two of three before I take its AI revenue story at face value.
Parts 1 and 2 add a fourth.
Four — the consumption-to-seat ratio. For software businesses, is the consumption layer growing materially faster than the seat base? For hyperscalers, are tokens, agent deployments, and consumption-style revenue lines growing faster than seat-style cloud lines? For specialty software, is there a consumption layer at all?
The fourth signal does work the original three couldn't. RPO tells you about contracted demand — but a contract that's all seats has a different forward profile than a contract that's all consumption. Operating margin tells you about today's profitability — but a vendor whose margin is held up by seat licenses while their consumption layer runs at break-even is in a different position than a vendor with healthy margin on both. The fourth signal sharpens what the first three measure.
My personal bar moves: a name has to pass three of four for me to take its growth story at face value. Most of the market doesn't.
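If it helps to see the rubric as a checklist, here is a minimal sketch of the four-signal test. The signal names and the example scores are illustrative placeholders I made up for this sketch, not actual scores for any company.

```python
# A minimal sketch of the four-signal CapEx Payback Test as a checklist.
# Signal names and example values are illustrative, not real scores.

SIGNALS = [
    "ai_attributed_revenue_growth",   # AI showing up in the top line
    "operating_margin_holding",       # margins steady or expanding despite CapEx
    "contracted_future_revenue",      # RPO / backlog growth as booked demand
    "consumption_outpacing_seats",    # consumption growing faster than seats
]

def passes_test(scores: dict, bar: int = 3) -> bool:
    """A name passes when at least `bar` of the four signals are true."""
    return sum(bool(scores[s]) for s in SIGNALS) >= bar

# Hypothetical name: strong backlog, margins, and AI revenue,
# but no consumption layer yet -- still clears the three-of-four bar.
example = {
    "ai_attributed_revenue_growth": True,
    "operating_margin_holding": True,
    "contracted_future_revenue": True,
    "consumption_outpacing_seats": False,
}
print(passes_test(example))  # → True (3 of 4)
```

The point of the sketch is the bar, not the booleans: a name that only clears two signals gets its growth story discounted, however good the press release sounds.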
Bucket One: Platform Owners
The cleanest beneficiaries of the agentic shift are the companies that own the infrastructure on which it runs. Hyperscalers — the major cloud and AI infrastructure operators. Below: the Mag 7 names, scored now on all four signals.
Microsoft passes all four. RPO of $627 billion, up 99%. Operating margin holding. AI revenue showing up directly in seats (Copilot at 20 million paid) and in consumption (Foundry tokens, credit-based Copilot consumption offers, Agent 365). The consumption-to-seat ratio is exactly what you want to see — seats growing fast, but consumption growing faster off a smaller base. One additional tailwind: the recent OpenAI restructuring locked in royalty-free use of OpenAI IP through 2032 and a continued OpenAI-to-Microsoft revenue share through 2030, while eliminating the revenue share Microsoft had been paying out. That's a margin upgrade on every Copilot consumption dollar — and a structural reason Microsoft can run usage-based GitHub Copilot pricing without compressing margin the way a non-hyperscaler would.
Alphabet passes all four. Cloud backlog of $462 billion, nearly doubled quarter over quarter. Cloud margin expanded. Tokens-per-minute up 60% sequentially. The vertical integration — chip, model, data, distribution — gives them control over the unit economics on the per-token spread (the margin between what they charge per token and what it costs them to produce). The bear case on Search weakens as more agentic queries get monetized through AI Mode — Google's new AI-driven search experience. New this quarter: Alphabet is now physically selling TPUs into customer-owned data centers, with most revenue landing in 2027. That's competing with NVIDIA at the silicon level, not just at the cloud-rental level — every TPU shipped is a GPU not bought.
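The per-token spread is simple arithmetic worth making explicit. The numbers below are purely hypothetical — no company discloses its token-level cost structure — but the mechanics are what vertical integration is buying:

```python
# Hypothetical per-token unit economics. These prices and costs are
# made-up placeholders, not any company's actual figures.
price_per_million_tokens = 10.00   # what the cloud charges (hypothetical)
cost_per_million_tokens = 4.00     # chip + power + model cost (hypothetical)

spread = price_per_million_tokens - cost_per_million_tokens
margin = spread / price_per_million_tokens
print(f"spread ${spread:.2f} per million tokens, {margin:.0%} gross margin")
# → spread $6.00 per million tokens, 60% gross margin
```

A vendor that owns the chip, the model, and the data center controls every term on the cost side of that subtraction; a vendor renting hyperscaler capacity only controls the price side.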
Amazon passes three. AWS backlog (contracts signed but not yet recognized as revenue) of $364 billion plus the $100 billion Anthropic deal sitting outside the headline number. Bedrock spend up 170% quarter over quarter. Tokens processed in Q1 exceeded all prior years combined. The asterisk is on operating margin — the e-commerce and retail drag dilutes what would otherwise be a clean pass on AWS alone.
Meta is a different test. CapEx is being raised again ($125-145 billion 2026 guidance) but the consumption layer isn't formalized — WhatsApp Business AI is growing 10x weekly conversations, but the monetization roadmap is "still developing" in their own words. The reason to own it is the consumer-agent thesis: Muse Spark (Meta Superintelligence Labs' new flagship model, succeeding Llama as Meta's primary foundation model family) as the foundation model, WhatsApp and Instagram as the distribution. The structural gap is the agent layer in the middle. Meta's planned Manus acquisition — the consumer-grade agent product they were going to layer on top — remains unresolved. Susan Li's only update on the call: "we're still working through the details, no update right now." Whether that gap closes through the original deal, an alternative acquisition, or an in-house build is the key open question. Meta passes the spirit of the test, not the letter, and the Manus question is the specific risk to size around. A different bet than the enterprise-agent thesis driving Microsoft, Alphabet, and Amazon.
On the AWS thesis above: Meta committed to "tens of millions of Graviton CPU cores" from AWS this quarter as part of a $107 billion contractual commitments step-up. Even the most CapEx-aggressive consumer hyperscaler is renting AWS capacity to fill the gap — which is a ratifying data point for AWS that doesn't show up in AWS's own backlog number.
Apple and Tesla I'm setting aside from this scoring — Apple is a deliberately low-CapEx ballast; Tesla's AI bet is tied to Robotaxi and Optimus, neither producing revenue at scale yet. Both are different categories than what this series is about.
Bucket Two: Non-Hyperscaler Companies That May Benefit
Names outside the hyperscaler club whose business model is positioned for the consumption shift, not against it. The "Beyond SaaS" lens from Sunday's issue, applied with positions.
Adobe. Already moved Creative Cloud onto a seat-plus-consumption model with Firefly credits — Firefly is Adobe's generative AI tool, priced as metered consumption on top of the seat license. The lock-in is the workflow ecosystem (asset libraries, brand guidelines, agency review). The data moat is Firefly's training on commercially-safe Adobe Stock data — which is real for enterprise customers needing legal indemnification on AI-generated assets. Score: passes three of four.
Shopify. The cleanest non-hyperscaler beneficiary I see in the category. GMV-based pricing — GMV is gross merchandise value, the total dollar volume of transactions flowing through the platform — means revenue scales with merchant transaction volume. A consumption model in different vocabulary. Layer in proprietary purchase and customer data across millions of merchants and the operational reliability advantage versus the rest of the category, and Shopify is positioned to be a primary platform that agents transact through in agentic commerce. Score: passes three of four.
Cybersecurity with proprietary data. Of the names from Part 1 — CrowdStrike, Palo Alto Networks, Fortinet, SentinelOne — the ones with years of accumulated endpoint telemetry have a data moat that compounds. Threat models trained on proprietary attack data outperform models trained on anything else. And every agent is a new endpoint that needs identity, governance, monitoring, threat detection — meaning the per-endpoint TAM (total addressable market) expands rather than shrinks. Names without that data foundation get displaced by hyperscaler-native security stacks. Score the names individually on telemetry depth, not category membership.
Bucket Three: Companies That Are Exposed
The exposed names. Sunday's issue covered the categories; here are the position implications.
Pure-seat enterprise SaaS without a credible consumption layer. The legacy SaaS names from earlier in the series — Workday, ServiceNow, Salesforce. Each has launched an AI add-on, but the AI revenue is small relative to the seat base. The position call is not "sell" — it's "downgrade the multiple in your own model." If you're holding these with growth-stock valuation expectations, the consumption ceiling is the risk to be aware of.
On Salesforce specifically: they launched Flex Credits — a per-action consumption pricing model — in May 2025, with hybrid per-user-plus-credit bundles and pure consumption tiers now available through 2026. That's the seat-plus-consumption pivot in motion. The open question is whether they execute fast enough to reset the multiple before the seat layer plateaus. Watch the AI revenue mix in subsequent quarters — Salesforce may belong in Bucket Two (or at least on the watchlist between buckets) rather than fully in this one.
Consulting and systems integrators on a billable-hour model. When a Copilot does in 30 seconds what a junior consultant did in 30 minutes, the hourly economics compress. The major names — Accenture, Cognizant, Infosys, Wipro — are pivoting to selling agentic AI implementation services, but that doesn't replace dollar-for-dollar what they used to bill. Position call: structural caution on the entire category.
The pattern is the same in both cases: not necessarily sell — but if you're holding any of these on a growth-stock thesis, that thesis needs to be re-examined against the consumption-vs-seat question.
Regulatory Risks to Note
The series so far hasn't engaged with regulatory risk, but three specific risks are worth flagging. US-China tech decoupling is structural — the Manus situation is one specific instance and probably not the last; agent-layer acquisitions involving Chinese AI assets are likely to face scrutiny in both directions. The EU AI Act and US executive orders on data sovereignty are real friction for agent deployment in regulated industries (healthcare, finance, government). Susan Li explicitly flagged "headwinds in the E.U. and the U.S. that could significantly impact our business and financial results" on Meta's call. None of these change the bull case directionally, but they widen the range of outcomes — particularly for any name whose growth narrative depends on global agent rollout in 2026 and 2027.
Second Order: The Picks & Shovels
The names that benefit regardless of which platform wins. These get less attention than the hyperscalers but earn their place in a portfolio specifically because the hyperscaler trade is crowded and they're not.
Memory. Buried in Meta's CapEx commentary this quarter: capital expenditures are increasing partly because of "higher component costs, particularly memory." Microsoft quantified the same dynamic — roughly $25 billion of the 2026 CapEx step-up is attributable to component pricing alone, with $5 billion of that hitting in a single quarter-over-quarter step. Google flagged supply as the gating factor on growth. Microsoft's commercial cloud is capacity-constrained through 2026. HBM (high-bandwidth memory, the specialized memory AI chips need to run efficiently) and DRAM (the standard computer memory used in everything from laptops to data centers) are structurally short. Names exposed: Micron (MU), SK Hynix, and Samsung's HBM business. (Note: exposure to the Korean names is available on US exchanges through EWY.)
Memory equipment. The shovels for the shovels. Wafer fab equipment — the manufacturing tools chip companies need to actually make memory and processors — whose customer base expands with memory CapEx. Lam Research, Applied Materials, KLA. And ASML, the Dutch company that builds the EUV (extreme ultraviolet) lithography machines that are the structural bottleneck under everything else in chip manufacturing. Their exposure is more cyclical than the memory makers', but the secular AI tailwind still applies.
Custom silicon enablement. Every hyperscaler is designing its own chips. They're not fabbing them. TSMC remains the central fab, with advanced packaging — known as CoWoS, or chip-on-wafer-on-substrate, the technique that bonds AI processors to their high-bandwidth memory — as the structural bottleneck. Networking and interconnect names like Marvell, Astera Labs, and Broadcom benefit from the custom silicon flow.
Arm-based compute. Arm Holdings (ARM) is the architecture under most non-NVIDIA AI compute in the cloud. Amazon's Graviton — the CPU family Meta committed to at "tens of millions of cores" this quarter — is Arm-based. As agentic workloads pull compute back toward CPUs (multi-step reasoning, code generation, real-time orchestration are all CPU-heavy in addition to GPU-heavy), Arm benefits at the architecture layer regardless of which hyperscaler captures the workload.
Optical interconnect. Coherent (COHR), Lumentum (LITE), Fabrinet (FN). The 800G and 1.6T transceivers — the optical components that convert electrical signals into pulses of light, letting AI chips inside a data center exchange massive amounts of data at speeds copper wires can't handle — are ramping into volume. A real long-term structural trend (what investors call "secular," meaning driven by long-term forces rather than the business cycle) that doesn't get the attention of the chip names but compounds as data centers scale. Direct beneficiaries of the same CapEx cycle, in a category that hasn't yet been front-page news.
Power and cooling. Data centers are bumping into power constraints first, real-estate constraints second. Vertiv, Eaton, and Schneider Electric are the picks-and-shovels for the build-out. Less crowded than the hyperscaler trade and tied to the same secular spending wave.
Inference silicon outside the hyperscalers. AMD's data-center revenue trajectory is tied to inference workloads — the recurring cost of running AI models in production, versus the one-time cost of training them — outside the custom-silicon hyperscaler stacks. The bull case isn't that AMD displaces Nvidia at the high end of training; it's that AMD captures share in the inference layer where the workload is more commoditized and price-sensitive.
The second-order layer is where work-optional portfolios often add the most value over time, because the names compound through the cycle without requiring you to be right about which agent OS wins.
Closing Words
This series opened with the claim that software is being repriced from seats to consumption. The companies that report in tokens, growing them faster than their seats, are pricing themselves into a different multiple. The companies still locked into the seat are slowly being repriced the other way. The companies whose business model never used seats in the first place — the consumption-native names, the picks-and-shovels — get to compound through the cycle without having to make the transition.
The work isn't to buy the right name today. It's to position the portfolio for the next several years of this shift. That's where work-optional gets built.
Two quick housekeeping items.
First, this Sunday’s issue will be a tips topic aimed at beginner investors instead of the usual analysis deep-dive. (I need a break 😆.) You can opt out of the tips email series by answering the poll below.
Second, I’m almost done with my first e-book. If you’re interested in getting on the launch list, please fill out this quick survey.
As always: I'm not telling you what to buy. I'm sharpening the lens you use to look.
Stay disciplined,
Koh
Disclaimer: Nothing in this newsletter constitutes investment advice or a recommendation to buy or sell any security. Numbers and observations are as of publication. I may hold positions in companies discussed above. Always do your own research and consult a licensed financial advisor before making investment decisions.
