The AI 5

Jensen projects $1T and the stock closes flat; NVIDIA's $20B Groq bet; the free software play that locks in hardware; Pentagon sole-sources its AI stack; cooling startup mints unicorn on Jensen's advice

GTC 2026 edition.

By Mallika Iyer

1

Jensen Huang projected $1 trillion in chip revenue. The stock closed flat.

[Chart: NVDA intraday price on GTC keynote day — rally to $188.88, round trip to a $183.22 close]

What happened

Jensen Huang doubled NVIDIA's demand forecast to $1 trillion through 2027, up from $500 billion at last year's GTC. NVDA hit $188.88 intraday then round-tripped to close at $183.22 — up just 1.65% on what was supposed to be the biggest number in semiconductor history.

Why the stock didn't move

The market didn't reject the forecast. It had already absorbed a version of it. Rubin was announced in January, hyperscaler support was already positioned, and annual cadence was already priced. At $4.45 trillion market cap, NVIDIA trades on changes in probability, not changes in TAM slides. "Visibility through 2027" isn't contracted revenue — it's a management assertion with execution risk still attached.

What would actually break the bull case

  • Inference economics flip the hardware mix. NVIDIA owns ~80% of training, but inference is becoming the majority of compute spend — and it rewards different architectures. Groq's LPU delivered sub-300ms latency on LLaMA at a fraction of GPU cost before NVIDIA licensed the technology. If LPU-style architectures prove fundamentally better for inference at scale, NVIDIA bought a band-aid, not a cure. The licensed tech still has to be integrated into a GPU-centric stack that wasn't designed for it.
  • Google breaks custom silicon wide open. Google is the only hyperscaler that builds its own training and inference chips, uses them at massive internal scale, and sells access externally through GCP. TPU v6 is real competition. If Google's Cloud TPU economics reach a point where mid-market companies get hyperscaler-grade custom silicon via API, NVIDIA's TAM shrinks from "all AI compute" to "the companies that can't or won't use Google."
  • Models get efficient faster than demand grows. Distillation, quantization, and MoE architectures are compressing the compute needed per inference call. If cost-per-token drops 10x every 18 months — and it's on that trajectory — NVIDIA needs raw demand to outrun efficiency gains. That's a race, not a foregone conclusion.
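The arithmetic behind that race is worth making explicit. A minimal sketch — the 10x-per-18-months figure is the trajectory the story cites; the flat-revenue framing and every variable name here are our own illustrative assumptions:

```python
# Back-of-the-envelope: if cost-per-token falls 10x every 18 months,
# how fast must token demand grow just to keep compute revenue flat?
# (Assumes revenue ~ tokens served x cost per token.)

EFFICIENCY_DROP = 10   # cost-per-token shrinks 10x ...
PERIOD_MONTHS = 18     # ... every 18 months

# Annualized decline: 10^(12/18) ~= 4.64x cheaper every year.
annual_cost_factor = EFFICIENCY_DROP ** (12 / PERIOD_MONTHS)

# Flat revenue requires demand to grow by the same annual factor;
# revenue growth requires demand to outrun it.
print(f"Cost per token falls {annual_cost_factor:.2f}x per year")
print(f"Token demand must grow >{annual_cost_factor:.2f}x per year "
      f"just to hold revenue flat")
```

In other words, on the cited trajectory demand has to more than quadruple every year before NVIDIA's revenue grows at all — which is why efficiency gains are a real race, not a rounding error.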

The one signal worth trusting

Micron confirmed HBM4 has entered high-volume production for Vera Rubin systems. That's an external supply-chain datapoint, not an NVIDIA slide. It separates roadmap theater from actual manufacturing readiness — and it's the closest thing to a hard confirmation that H2 2026 Rubin availability is real.

Bottom line

The bull case left GTC intact. But intact is not the same as expanded, and at this valuation, intact doesn't move the stock. The real question isn't whether NVIDIA hits $1 trillion in revenue. It's whether NVIDIA is still the dominant architecture when inference, not training, is the primary workload — and the market gave you its answer by closing flat.

2

NVIDIA paid $20 billion for a company most investors have never heard of.

[Chart: NVIDIA training vs inference market position — ~80% training share, contested inference market]

NVIDIA licensed Groq's inference technology for roughly $20 billion in late 2025 — not an acqui-hire, but a premium paid to close a capability gap it couldn't afford to leave open. NVIDIA commands 80% of the AI training chip market, but training is a one-time capex event. Inference — running models in production 24/7 — is recurring revenue and the bigger market long-term, contested by Google's TPUs, Amazon's Trainium, and a wave of custom ASICs. At GTC, NVIDIA announced a dedicated inference-optimized chip alongside the Groq integration. The $20B price tag is the tell: NVIDIA doesn't overpay for things it thinks it can build internally. Watch Q1 FY27 earnings (May 20) for the first integration details.
