Winvesta Crisps

Nvidia (NVDA): The pick-and-shovel play powering AI’s industrial revolution

Denila Lobo
Apr 30, 2026

Jensen Huang’s recent remarks underscore a broader Nvidia thesis: the AI era is moving beyond text and images into systems that can perceive and act in the physical world. Nvidia is positioning itself not just as a chip supplier, but as an infrastructure company spanning compute, networking, software, and simulation. This article looks at how Nvidia’s business has transformed into a diversified AI platform, why the robotics wave represents a structural tailwind, what risks could derail the thesis, and whether owning the stock still makes sense at current valuations.


🧩 Business model 3.0: Chips, systems, and software stacks

Nvidia’s revenues now come from four main engines rather than just selling GPUs to gamers and data centres.

Data centre: This is the beast. In fiscal Q1 2026, Data Centre revenue hit $39.1 billion, up 73% year-on-year, representing roughly 88% of total company revenue of $44.1 billion. Nvidia doesn’t just sell H100 or Blackwell chips—it sells entire systems (DGX pods), networking (InfiniBand via Mellanox), and increasingly, cloud instances through partnerships. Hyperscalers like Microsoft, Google, and Amazon are buying at scale, but so are sovereign AI projects, enterprises, and AI-native startups. This segment is the money-printing machine.
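
For readers who want to sanity-check the mix, here is a quick back-of-envelope in Python, using only the figures quoted above:

```python
# Segment mix and implied prior-year revenue, from the figures in this article
data_centre = 39.1   # Q1 FY2026 Data Centre revenue, $bn
total = 44.1         # Q1 FY2026 total company revenue, $bn

print(f"Data Centre share of revenue: {data_centre / total:.1%}")  # ~88.7%

# A 73% year-on-year rise implies roughly this much a year earlier
print(f"Implied Q1 FY2025 Data Centre revenue: ${data_centre / 1.73:.1f}bn")  # ~$22.6bn
```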

Gaming: Once Nvidia’s core, Gaming now accounts for a much smaller share of revenue, generating $3.8 billion in Q1 FY2026. Growth here is modest, in the low single digits, but stable. The RTX 40-series enjoys strong ASPs, and the RTX 50-series refresh could give the segment a lift. Gaming acts as a reliable base load and keeps Nvidia’s brand visible to millions of consumers.

Professional visualisation and automotive: These smaller segments are strategically vital. ProViz serves designers, architects, and content creators with Quadro and RTX workstations. Automotive is where robotics starts to appear. Nvidia’s Drive platform powers autonomous vehicle compute, and its Omniverse software simulates factories and robot behaviour.

Networking: Mellanox’s InfiniBand and Ethernet switches are now critical connective tissue in AI clusters. As models scale, communication bandwidth between GPUs becomes the bottleneck—making networking an increasingly important part of Nvidia’s overall platform.
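
To see why, consider a rough, purely illustrative sketch: in data-parallel training, every GPU must exchange its gradients each step, and a standard ring all-reduce pushes roughly 2(p-1)/p times the gradient payload through each GPU’s links. The model size, GPU count, and link speed below are assumptions chosen for illustration, not Nvidia figures:

```python
# Illustrative all-reduce traffic estimate; every parameter here is an assumption
params = 70e9        # hypothetical 70B-parameter model
bytes_per_grad = 2   # fp16 gradients
gpus = 8             # GPUs in one data-parallel group
link_gbps = 400      # assumed per-GPU link speed, e.g. 400 Gb/s InfiniBand

payload = params * bytes_per_grad                 # ~140 GB of gradients per step
traffic = 2 * (gpus - 1) / gpus * payload         # ring all-reduce volume per GPU
seconds = traffic / (link_gbps / 8 * 1e9)         # time to move it over the link

print(f"~{traffic / 1e9:.0f} GB per GPU per sync, ~{seconds:.1f} s at {link_gbps} Gb/s")
```

At those assumed numbers, each synchronisation would tie up the link for several seconds, which is why faster interconnects translate directly into faster training.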

In other words, “Nvidia” is now an end-to-end AI infrastructure company, not just a chip designer.


🌪️ The robotics wave as a tailwind: From labs to loading docks

Huang’s robotics thesis isn’t speculative. It’s grounded in a simple reality: AI models have conquered language and vision, and the next frontier is action in the physical world. Nvidia is positioned to supply the compute, simulation, and inference engines that make humanoid robots, autonomous vehicles, and smart factories possible.

Embodied AI demand: Companies like Tesla, Figure AI, Boston Dynamics (now under Hyundai), and dozens of Chinese startups are building humanoid robots. Each one requires real-time inference, sensor fusion, and motor control: compute-intensive tasks suited to Nvidia’s Jetson-class chips, such as Orin. Nvidia appears to have a strong lead in this stack, with few rivals offering comparable breadth across hardware, software, and simulation.

Omniverse as the training ground: Nvidia’s Omniverse platform lets companies simulate factories, train robots in virtual environments, and test edge cases without breaking real hardware. It is a strategically important part of Nvidia’s stack for enterprise and industrial use cases, with the potential to generate recurring subscription and cloud workload revenue as adoption scales.

Inference at the edge: Training AI happens in data centres. Inference, running the trained model, happens everywhere: in cars, robots, drones, and smart cameras. Nvidia’s Jetson Orin and newer Thor chips dominate edge AI, especially where power efficiency and real-time performance matter. As robots move from R&D to deployment, inference chip volumes will explode.
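
The constraint that makes the edge different is the control loop: a robot cannot wait on a round trip to a data centre. A toy latency budget, with every number assumed for illustration, shows how tight the window is:

```python
# Toy real-time budget for one robot control cycle; every number is an assumption
control_hz = 30                    # assumed control-loop rate for a mobile robot
budget_ms = 1000 / control_hz      # ~33 ms available per cycle

sensor_ms = 8                      # assumed sensor-fusion time
planning_ms = 5                    # assumed planning / motor-control time
inference_ms = budget_ms - sensor_ms - planning_ms

print(f"Model inference must fit in ~{inference_ms:.0f} ms per frame, on-device")
```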

For a firm that monetises compute at every layer, robotics means recurring hardware sales, software subscriptions, and cloud inference revenue—all at once.
