Once again, NVIDIA (NVDA) numbers and guidance knocked the cover off the ball in their Q4 report last Wednesday.
NVDA shares were NOT priced for perfection beforehand. And yet the company delivered perfection and then some -- which isn't easy at their scale.
To put the results in context, let's review what I wrote on November 23 after their Q3 report...
"Before NVIDIA's Q3 report last week I pointed out that sales forecasts by analysts for next year were still way too low at $275 billion and I told my followers and journalists that six months from now that number would be over $325 billion.
"While the beat-and-raise quarter was met with selling and uncertainty, the analysts still had to crunch the numbers in their spreadsheet models... and what did they do?
"Yep, they raised next fiscal year from $275B to a consensus of $293B with the high estimate moving up to $327B."
Since then, the consensus topline estimate for FY'27 (began in February) crept up near $315 billion as of early last week before the Q4 report.
Large Investors Sold This Report Too
Thursday and Friday last week brought more sellers than buyers to the NVIDIA bazaar. But that fact spells opportunity for savvy technology-focused investors.
Here were my notes to journalists on Wednesday evening that tell you why you want to be buying NVDA below $180...
Get ready for that revenue estimate of $315 billion to go significantly higher because Jensen delivered all the goods and more by answering every possible doubt an analyst or investor could have.
Based on the following -- not just another huge "beat-and-raise" quarter -- I am raising my revenue estimate for this year to $375 billion:
Q4 gross margins climbed back to 75%, which means pricing power is strong despite higher input costs. And the stellar Q1 guidance from CFO Colette Kress -- a $78 billion topline, nearly $6 billion above the prior consensus, at 75% GM -- does not presume any sales to China.
Vera Rubin systems will be shipping in the second half. There were definitely legitimate doubts given the memory shortage, skyrocketing HBM prices, and other supply constraints. Doubts laid to rest.
Speaking of memory access, they revealed a big jump in supply-related commitments, from $50 billion to $95 billion. This is a clear move to tie up valuable components, wafers, and memory to ensure they can meet demand for Blackwell and Rubin. "We have strategically secured inventory and capacity to meet demand beyond the next several quarters."
Jensen's repeated theme during the analyst Q&A: Hyperscaler and model-builder capex -- now seen at $700B this year -- will be sustained for years, on the way to a $3-4 trillion opportunity by 2030, because agentic AI will keep their cash flows growing. "I am confident in their cash flow growth... because of agentic AI."
The Jensen Equation: Capex = Compute = Inference = Revenues
Another big theme that came through in Jensen's remarks during the analyst Q&A was WHY hyperscaler capex was sustainable from a fundamental business growth perspective, not simply that they could generate ROI and positive cash-flow from their investments.
And it instantly reminded me of the way I explained it last summer: R-I-O not ROI.
My "RIO" acronym stands for "relevant instead of obsolete." All the hyperscalers, CSPs (cloud service providers), plus Tesla xAI, are looking at a future 3-5 years from now where their current business models could be severely disrupted by agentic AI.
So they know they need to build the infrastructure now to do deep R&D and find out what products and services they might be selling in the future that will move the needle if and when their current cash cows are put out to pasture.
But enough about my theories, let's hear from Jensen himself. Here's what he said on the conference call when answering a question from the Citi analyst about CUDA...
“Without CUDA, we wouldn’t know what to do with inference.
“NVLink 72 has enabled us to deliver generationally 50 times more performance per watt… performance per dollar, 35 times… the leap in inference is incredible.
“It’s really important to realize that inference equals revenues… agents are generating so many tokens… when the agents are coding, it’s generating thousands, tens of thousands, hundreds of thousands… running for… minutes to hours… these agentic systems are spawning off different agents… the number of tokens… [is] exponential.
“Inference performance equals revenues for our customers. Tokens per watt… translates directly to the revenues of the CSPs… everybody is power limited… Tokens per watt translates to dollars per watt… which translates in a gigawatt directly to revenues.
“Every CSP understands… CapEx translates to compute… compute with the right architecture translates to maximizing revenues… compute equals revenues… choosing the right architecture… the one with the best performance per watt is literally everything.”
That's why I summed up Jensen's big theme this way: Capex = Compute = Inference = Revenues.
You could also say, no investment = no growth. And NVIDIA is delivering the highest performance at the lowest cost per token per watt to help the builders control their costs while they invest and build and innovate.
For CSPs and enterprises, it's a high-margin pivot toward AI inference using CUDA-integrated stacks that are backwards compatible across generations. The best example: years-old H100s still running and oversubscribed.
Here were a few more of my notes to reporters last week...
Anthropic and OpenAI are severely compute constrained and can't meet all the demand for inference. That's lost money to them. They will keep building to grow and make more tokens. In the new computing world of AI factories, tokens are the new currency and their production will demand ever-increasing infrastructure build.
Jensen explains the NVIDIA edge: CUDA is already everywhere, in every cloud, to build on, and "everything is already built on CUDA." 1.5 million models on HuggingFace run on CUDA. The CUDA ecosystem is what makes NVIDIA systems "super-fungible."
R&D spend will be $20 billion as NVIDIA invests in "extreme co-design" with partners on networking, chips, algorithms, and software.
Colette handled questions about revenue mix and concentration: while hyperscalers/CSPs are 50%, the other 50% is very diverse and growing, including model makers, enterprises, super-computing science research at universities, and sovereign nation-states who need to invest "proportional to their GDP."
The Networking Protocol
Another standout number from the NVIDIA quarter was +263%. That's the year-over-year growth the networking segment delivered, with an $11 billion quarterly haul that brought the segment to $31 billion for the year!
Putting that in perspective, Arista Networks' (ANET) projected topline for this entire year is only $11.25 billion. And Cisco's comparable segment doesn't generate that much in sales either.
The birth of NVIDIA networking to connect all those GPU rack-scale systems began with Jensen's purchase of a company called Mellanox in 2020 for about $7 billion.
Why is this important and what does it mean?
For starters, you have to continuously listen to NVIDIA to hear about what they are building and why. I attended a webinar on February 5 titled "Powering Gigascale AI Factories With NVIDIA Networking" that was led by Gilad Shainer, SVP of Networking, NVIDIA.
From scale-up to scale-across, it was a masterclass in the future of AI networking. And it gave me deeper insights about where Jensen was headed. Of course, I didn't see the $11 billion quarter for their networking segment, but it quickly made sense to me.
On Thursday afternoon, MarketWatch reporter Britney Nguyen reached out and she asked me the right question, which made me give the right answer:
Britney (rough paraphrase): "Is the networking growth of 263% a big deal... and the fact Jensen says they have the largest networking business now... can it lift the stock?"
Me (short answer paraphrased):"It won't lift the stock above this cloudy sentiment. But it is a huge deal because it's the vital piece of Jensen being able to sell thousands of rack-scale systems that connect, calculate, and create tokens of intelligence with ultra-low latency. He doesn't want to just hand them GPUs -- he wants to show them how to scale-up, scale-out, and scale-across the 'AI factories' he envisions. NVIDIA’s end-to-end networking powers gigascale AI -- from NVLink scale-up, to NVIDIA Photonics for efficient scale-out, to NVIDIA Spectrum-XGS Ethernet for scale-across to connect the separate AI factories (still known as datacenters to those with lesser technology)."
NVIDIA's networking revenue acceleration to +263% is a strong signal for future data center demand. As Jensen said on the call, "our single greatest lever is generational leaps in performance."
But I wouldn't be me if I didn't think of another way to get this point across. So here it is in Truth Social all-caps for the hard of hearing...
"IT DOESN'T MATTER IF NVIDIA NETWORKING IS A GROWTH SEGMENT BIGGER THAN CISCO OR ANET -- WHAT MATTERS IS THAT THIS IS HOW JENSEN DELIVERS THE ULTIMATE AI INFERENCE PACKAGE THAT AI FACTORY BUILDERS CAN'T GET ANYWHERE ELSE. NVIDIA NETWORKING IS AS ESSENTIAL TO GPU RACK-SCALE SYSTEMS AS YOUR BODY'S CIRCULATORY SYSTEM IS TO YOUR HEART."
In other words, the growth of NVIDIA networking is a natural and vital necessity for the growth of Jensen's annual cadence of new platforms that demand the lowest latency. That's why Jensen's big shift to "silicon photonics" at GTC last March was so profound. Although I must admit I was a little hard of hearing about the implications for optical chip makers like Coherent (COHR) and Lumentum (LITE), which I didn't buy until $350.
In more words, when a customer buys a Blackwell GB200 rack, they aren't just buying GPUs -- they are buying the NVLink Switch System and Spectrum-X Ethernet cards that come bundled in the architecture. This vertical integration in rack-scale systems sold as "the complete package" makes it difficult for standalone competitors like Arista to compete for that specific hardware spend.
If you buy the best GPU systems -- with the best algorithm libraries in CUDA -- don't you also want the best networking chips and other connection hardware that goes with it? Otherwise it would be like buying a Tesla (TSLA) without Elon's FSD.
In Conclusion: Growth at the Epicenter of AI is On Sale
As of Saturday morning (when most analyst estimate updates from Thursday and Friday were compiled in the Zacks database), the topline for this year (FY'27 began February) is $337 billion -- 56% growth -- with a high estimate of $377B.
You can see the updated Zacks Detailed Estimates Page which now includes next year's sales and earnings estimates (FY2028 begins next Feb) calling for a topline ramp of 29.6% to $436.5 billion.
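As a quick sanity check on those consensus figures (using the rounded numbers quoted above, so results may differ slightly from the unrounded values in the Zacks database), the implied year-over-year math works out like this:

```python
# Sanity-check the growth arithmetic behind the consensus estimates quoted above.
# All figures are the article's rounded numbers, in $ billions.

def yoy_growth(current: float, prior: float) -> float:
    """Year-over-year growth rate in percent."""
    return (current / prior - 1) * 100

fy27_sales = 337.0   # FY'27 consensus topline
fy27_growth = 56.0   # quoted FY'27 growth rate, %
fy28_sales = 436.5   # FY'28 consensus topline

# A 56% growth rate implies a prior-year (FY'26) base of roughly $216B.
implied_fy26 = fy27_sales / (1 + fy27_growth / 100)
print(f"Implied FY'26 base: ${implied_fy26:.0f}B")

# FY'28 vs FY'27 works out to roughly 29.5%, matching the quoted
# ~29.6% ramp once rounding of the base estimate is accounted for.
print(f"Implied FY'28 growth: {yoy_growth(fy28_sales, fy27_sales):.1f}%")
```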
I expect the topline to keep climbing as Vera Rubin systems are delivered in the second half.
On the bottom line, we have the curious case of the consensus coming down a penny to EPS of $7.39. Not all the estimates are in yet, though, so we will see some adjustments throughout next week.
Regarding innovation, Jensen unveiled six new Blackwell chips at CES. We can probably expect new Rubin chips at GTC on March 16.
Finally, Jensen said he's going to share ideas at GTC on how to incorporate the new Groq technology -- that he just paid $20 billion for -- and "extend our architecture like we did with Mellanox." Based on what I'm reading from several independent semiconductor analysts and engineers, what Jensen has planned with Groq and new architectures for memory and storage will likely blow our minds and decimate the competition.
Talk to you soon with notes from GTC about the future of the computing platform that every organization must have. Make sure to grab some NVDA before then.
Kevin Cook is a Senior Stock Strategist for Zacks Investment Research where he runs the TAZR Trader portfolio and holds NVDA, TSM, and LITE.
This article originally published on Zacks Investment Research (zacks.com).