Artificial intelligence (AI) is, by one estimate, a $15.7 trillion global addressable market by 2030 -- and Nvidia is leading the charge.
True to form, Nvidia blew past Wall Street's consensus third-quarter sales and profit expectations.
However, the longevity of prior-generation graphics processing units (GPUs) poses a serious challenge to Nvidia's future pricing power.
Roughly 30 years ago, the advent and mainstream proliferation of the internet began changing the corporate landscape. Although it took years for this technology to mature and for businesses to fully harness it to maximize their sales and profits, the internet has had a profoundly positive impact on the growth trajectory of corporate America.
Investors have been waiting decades for Wall Street's next internet moment -- and artificial intelligence (AI) has answered the call.
The prospect of enabling software and systems with the tools to make split-second decisions without the need for human oversight is a potential game changer in most industries around the globe. This is a significant reason why PwC analysts have estimated that AI will contribute a staggering $15.7 trillion to global gross domestic product (GDP) by 2030.
Although investors can expect a long list of winners with this multitrillion-dollar opportunity, there's little doubt that graphics processing unit (GPU) maker Nvidia (NASDAQ: NVDA) has been the leading beneficiary of the rise of AI. It's grown from a $360 billion tech company at the start of 2023 to Wall Street's largest publicly traded company -- and the first to (briefly) reach the $5 trillion plateau.
Image source: Nvidia.
Last week, Nvidia's fiscal third-quarter operating results (its fiscal year ends in late January) highlighted the benefits of its first-mover advantage. However, one of the company's biggest flexes may have also exposed a serious future growth weakness.
Nvidia blowing past Wall Street's consensus sales and profit expectations ranks right up there with death and taxes among life's certainties. The company delivered $57 billion in sales, representing 62% growth from the prior-year period, along with generally accepted accounting principles (GAAP) net income of $31.9 billion. The latter is up 21% from the sequential quarter and 65% from the comparable quarter of the previous year.
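For readers who want to sanity-check those growth figures, here's a minimal sketch in Python that backs out the prior-period comparisons implied by the numbers above; the rounding to one decimal place is my own choice, not Nvidia's reporting convention.

```python
# Back out the prior-period figures implied by Nvidia's reported growth rates.
# Inputs come straight from the quarter discussed above; rounding is illustrative.

revenue_q3 = 57.0          # fiscal Q3 revenue, in billions of dollars
revenue_yoy_growth = 0.62  # 62% year-over-year sales growth

net_income_q3 = 31.9       # GAAP net income, in billions of dollars
net_income_yoy = 0.65      # 65% year-over-year growth
net_income_qoq = 0.21      # 21% sequential (quarter-over-quarter) growth

prior_year_revenue = revenue_q3 / (1 + revenue_yoy_growth)
prior_year_net_income = net_income_q3 / (1 + net_income_yoy)
prior_quarter_net_income = net_income_q3 / (1 + net_income_qoq)

print(f"Implied prior-year revenue:       ~${prior_year_revenue:.1f}B")
print(f"Implied prior-year net income:    ~${prior_year_net_income:.1f}B")
print(f"Implied prior-quarter net income: ~${prior_quarter_net_income:.1f}B")
```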
These figures shouldn't come as a surprise to those who've been tracking Nvidia's innovations and dealmaking. Its multiple generations of GPUs, including Hopper, Blackwell, and Blackwell Ultra, are the undisputed preferred option in AI-accelerated data centers. Said CEO Jensen Huang:
Blackwell sales are off the charts, and cloud GPUs are sold out. Compute demand keeps accelerating across training and inference -- each growing exponentially.
Furthermore, these chips have proven superior to all external competitors in compute capabilities for enterprise data centers. This first-mover advantage, coupled with ongoing AI-GPU scarcity, has translated into phenomenal pricing power and a GAAP gross margin that's been lifted well above 70%.
Nvidia's CUDA software platform has also been pivotal to the success of its hardware. CUDA is effectively the toolkit developers use to maximize the compute capabilities of their Nvidia GPUs when building and training large language models, running high-frequency trading algorithms, or overseeing scientific simulations, among other tasks. This software has anchored buyers to the Nvidia brand and kept them within its product and service ecosystem.
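To make that lock-in a bit more concrete, here's a minimal sketch of how a developer typically reaches Nvidia hardware through the CUDA stack. The example uses the open-source PyTorch library, which ships CUDA-backed GPU kernels; the tensor sizes are arbitrary placeholders rather than anything from Nvidia's disclosures.

```python
# Minimal sketch: PyTorch dispatches this work to CUDA-compiled kernels when an
# Nvidia GPU is present, which is the kind of dependency that keeps developers
# inside Nvidia's ecosystem. Sizes below are arbitrary.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# A toy matrix multiply -- the core operation behind training and inference.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b  # executed on the GPU via CUDA when available

print(f"Ran on: {device}, result shape: {tuple(c.shape)}")
```

Swapping out that hardware means rewriting or revalidating everything built on top of this stack, which is the switching cost that keeps buyers anchored to the Nvidia brand.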
Artificial intelligence is exhibiting all the hallmarks of a truly game-changing technology. But this doesn't mean the parabolic ascent of Nvidia's stock, or its jaw-dropping sales growth, is sustainable.
Similar to Nvidia's earnings press release, its quarterly conference call with analysts was packed with optimism and data points indicative of continued double-digit sales growth. However, one flex, courtesy of Chief Financial Officer (CFO) Colette Kress, may have unearthed a massive risk to her company's long-term growth prospects.
While delivering remarks prior to the executive team fielding questions from analysts, Kress said the following:
Most accelerators without CUDA and Nvidia's time-tested and versatile architecture became obsolete within a few years as model technologies evolve. Thanks to CUDA, the A100 GPUs we shipped six years ago are still running at full utilization today, powered by [a] vastly improved software stack.
On one hand, this really emphasizes the high-margin value CUDA brings to the table. While the focus seems to be on the compute potential of Nvidia's hardware and Jensen Huang's aggressive innovation timeline that brings a new AI-GPU to market annually, CUDA might be the unsung hero for Nvidia.

Image source: Getty Images.
On the other hand, Kress's statement reveals a potentially significant issue for Nvidia. If the company's Ampere (A100) chips from six years ago can be supported by software improvements via CUDA, what incentive do existing clients have to upgrade their AI-data center hardware after five or six years?
Huang's amped-up innovation timeline, which is expected to bring the Vera Rubin and Vera Rubin Ultra chips to market by the latter half of 2026 and 2027, respectively, counts on AI hardware demand to remain robust. But if Nvidia's prior-generation GPUs offer utility well beyond their initial expectations, it would make sense for most businesses to delay their upgrade cycles. This would prove disastrous for Nvidia's pricing power on advanced AI chips and weaken its GAAP gross margin.
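To illustrate why a longer useful life matters for Nvidia's sell-through, here's a simple sketch of annualized hardware spend under different replacement cycles. The per-GPU price and fleet size are purely hypothetical placeholders, not figures from Nvidia or this article.

```python
# Hypothetical illustration: stretching the replacement cycle of an AI-GPU
# fleet shrinks the annualized hardware budget a customer hands back to
# Nvidia. Price and fleet size are made-up placeholders.

gpu_price = 30_000   # hypothetical price per accelerator, in dollars
fleet_size = 10_000  # hypothetical number of GPUs in a data center

def annualized_spend(replacement_cycle_years: float) -> float:
    """Average yearly outlay to keep the fleet refreshed on a fixed cycle."""
    return gpu_price * fleet_size / replacement_cycle_years

for years in (3, 4, 5, 6):
    print(f"{years}-year upgrade cycle: ~${annualized_spend(years) / 1e6:.0f}M per year")
```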
Additionally, the price of prior-generation GPUs continues to deteriorate, even if utilization remains robust. Kynikos Associates founder and noted short-seller Jim Chanos pointed out in a post on X (formerly Twitter) last week that the Hopper (H100) GPU Rental Index has declined by 30% in the 15 months since its inception. The Hopper was the next-generation chip introduced after Ampere.
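For context on what that decline implies on an annualized basis, here's a quick sketch; the only input is the 30% drop over 15 months cited above, and the constant-rate assumption is mine.

```python
# Convert the cited 30% decline over 15 months into an implied annualized
# rate, assuming the decline compounds at a constant pace.

total_decline = 0.30  # 30% drop in the H100 rental index
months = 15           # over roughly 15 months since the index's inception

annualized_rate = (1 - total_decline) ** (12 / months) - 1
print(f"Implied annualized change: {annualized_rate:.1%}")  # roughly -25%
```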
If the price of Hopper and Ampere continues to decline following the release of next-generation AI-GPUs from Nvidia, it'll offer even more incentive for businesses to hang onto their existing hardware. In other words, the effectiveness of CUDA in keeping Ampere relevant could cost Nvidia significant growth in the years to come if clients shy away from spending potentially billions of dollars to upgrade their data center infrastructure.
This may be a rare instance of an Nvidia flex completely backfiring on the company -- but we won't know for sure until a few years from now, when GPU upgrade cycles should commence.
Sean Williams has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Nvidia. The Motley Fool has a disclosure policy.