Edited by Brian Birnbaum. An update of my original AMD deep dive.
A year ago, AMD issued $2B in AI GPU revenue guidance for 2024. Mr. Market was delighted and the stock rose to ~$210. Now guidance is up to $5B for 2024 and the stock sits at $142.
Here’s why the market is wrong and why AMD is setting itself up to become an AI giant:
AMD has positioned itself as Intel's nemesis by investing in and leveraging its chiplet platform to collaborate more closely with customers and iterate its products faster. The fact that Intel still retains the larger share of the CPU and PC markets is actually good news for AMD investors: with better products both shipping today and in the pipeline, AMD stands to capitalize on expanding demand while also taking share.
Importantly, however, AMD had designed highly competitive CPU engines years before gaining traction in the marketplace. And today, AMD is pursuing the exact same strategy to gain market share in AI GPUs. Meanwhile, the market continues to grow impatient, with the stock down 9% the day after the recent earnings print.
In Q3 2024, AMD raised its FY2024 AI GPU sales guidance to $5B, having recently increased it from $2B to $4.5B. At the same time, the Datacenter segment has begun exhibiting exponential growth, as the graph below shows. Datacenter revenue accounted for 52% of total revenue this quarter, making it AMD's predominant source of revenue.
With the stock down 27% from all-time highs, the market is scarcely rewarding AMD for these achievements. The prevailing view seems to be that AMD will fail simply because it hasn't caught up with Nvidia. This fundamentally ignores the main driver of AMD's business: trust. No matter how good the technology, customers still need time to build the trust required to make large CapEx commitments. This was true for CPUs and will be true for every future compute engine as well.
Towards the end of the Q3 Q&A, Lisa was asked why AMD expected less than $10B in AI GPU revenue, while Nvidia expected $50-$60B. Understanding Lisa’s answer requires a certain degree of wisdom and intricate understanding of long-term value creation:
If you remember, Harsh, and I think you do, our EPYC ramp from Zen 1, Zen 2, Zen 3, Zen 4 we had extremely good product even back in the Rome days, but it does take time to ensure that there is trust built, there is familiarity with the product that there are some differences, although we're both GPUs. There are some differences, obviously, in the software environment. And people want to get comfortable with the workload ramp.
So, from a ramp standpoint, I'm actually very positive on the ramp rate. It's the fastest product ramp that I've seen overall. And my view is this is a multi-generational journey. We've always said that. We feel very good about the progress I think next year is going to be about expanding both customer set as well as workload.
And as we get into the MI400 series, we think it's an exceptional product. So -- all in all, the ramp is going well, and we will continue to earn and -- earn the trust and the partnership of these large customers.
Lisa has orchestrated one of history's most successful corporate turnarounds. She has consistently underpromised and overdelivered, so I always take her words seriously. But they aren't just words. The past quarter contained a number of notable milestones. In Q3 alone, Meta deployed more than 1.5M EPYC CPUs across the datacenters powering its social media platforms. It also "broadly" deployed the MI300X to power its inferencing infrastructure "at scale." Additionally, Lisa stated on the call that AMD is now working closely with Meta to "expand their Instinct deployments to other workloads where MI300X offers TCO advantages, including training."
In my last Meta update I explained how, according to Zuckerberg, Meta's upcoming Llama 4 model will require ten times more compute than Llama 3.1, which runs on AMD's MI300 for inferencing. As AMD continues working closely with Meta, there's a good chance AMD will also power Llama 4's inferencing.
Lisa also said that Microsoft is now using the MI300 "broadly" for multiple Copilot services powered by the GPT-4 family of models. As outlined in my latest Microsoft update, these copilots are driving real incremental productivity for users, making it likely that AMD will do more business with Microsoft going forward. That Meta and Microsoft are leaning heavily into the MI300 supports Lisa's statements on the Q3 call. To top things off, Microsoft and Oracle expanded the availability of MI300 instances in their respective public clouds.
As with the CPU and PC revolution, AMD is only getting started.
Going forward, AMD's annual product cadence should continue, deepening its relationships with customers. Serving the top computing giants on the planet validates AMD's technology and, most importantly, the company's ability to collaborate closely with customers and build the products they want.
Meanwhile, the MI325X launched earlier this month with 20% higher inference capacity than the MI300X. The MI350 is on track to launch in H2 2025, and the MI400 in 2026. The upward revision in AI GPU revenue guidance from $4.5B to $5B is due to customer engagement "broadening," according to Lisa: per her remarks on the Q3 earnings call, customers are increasing the range of workloads they run on Instinct accelerators. AMD seems to have an edge in inferencing, as I predicted in my original AMD deep dive. Inferencing has, in effect, gotten AMD's foot in the door, and customers are now beginning to use Instinct accelerators for training workloads.
Here’s what Lisa said about this during the Q3 earnings call:
So certainly, from the $5 billion [AI GPU revenue] that we're talking about, the early traction has been primarily with inference just given the strength of the product portfolio. MI300 is like very, very well optimised for inference given the memory capacity and memory bandwidth capabilities.
But we have had some training adoption, and we expect that, that will continue to grow as we go through the next few quarters. And so, as we -- let's call it fast forward a year, I would say we would have a fairly balanced portfolio between training and inference.
Additionally, the Datacenter segment's operating margin came in at 29% in Q3, up from 19% in the same period a year ago. Over that same period, revenue increased 122%, which makes the 1,000-bps margin expansion all the more meaningful and suggests a step change in operating leverage despite the rapid increase in volume. Again we see the results of excellent management: AMD isn't lowballing Instinct prices to drive early adoption. Customers are finding real value in AMD's accelerators.
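To see why margin expansion on top of 122% revenue growth is so powerful, here is a minimal sketch of the arithmetic, using only the figures quoted above (19% to 29% margin, +122% revenue YoY); last year's revenue is indexed to 1.0 for illustration:

```python
# Operating-leverage arithmetic using the figures from the post.

def bps(delta_fraction):
    """Convert a margin change expressed as a fraction to basis points."""
    return round(delta_fraction * 10_000)

old_margin, new_margin = 0.19, 0.29
revenue_growth = 1.22  # +122% YoY

# 10 percentage points of margin expansion = 1,000 bps.
margin_expansion_bps = bps(new_margin - old_margin)

# Index last year's Datacenter revenue to 1.0.
old_revenue = 1.0
new_revenue = old_revenue * (1 + revenue_growth)

old_op_income = old_revenue * old_margin
new_op_income = new_revenue * new_margin

# Revenue growth and margin expansion compound: operating income
# grows far faster than revenue (~239% vs. 122%).
op_income_growth = new_op_income / old_op_income - 1
```

In other words, holding these reported figures, Datacenter operating income roughly tripled year over year, which is what a step change in operating leverage looks like.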
How one views AMD's progress is a matter of duration. With a sufficiently long-term investment horizon, AMD appears to be executing tremendously well.
Meanwhile, the Client segment is progressing well, with AMD expecting more than 200 Ryzen AI Pro commercial platforms in the market in 2025. The Ryzen AI Pro 300 series represents AMD's third generation of AI-enabled mobile processors for commercial use; its most distinctive feature is the XDNA 2 NPU (neural processing unit), which accelerates AI workloads on-device. Next year Microsoft will cease all tech support for millions of Windows 10 PCs, and AMD's commercial AI platforms position the company to cash in on the resulting PC refresh cycle. Consumers will shift towards AI-capable PCs, with or without a recession.
Client segment revenue increased 29% YoY, driven by “strong demand” for the latest generation Zen 5 notebook and desktop processors. To be clear, Zen 5 is a component of the Ryzen AI Pro 300 series, providing the core CPU architecture. The Ryzen AI Pro 300 series combines this Zen 5 CPU with other technologies like the RDNA 3.5 GPU and the aforementioned XDNA 2 NPU, along with additional security and management features designed for business use.
The progress in the Datacenter and Client segments has been heavily diluted by the cyclical decline of the Gaming and Embedded segments. Gaming revenue declined a whopping 69% YoY and Embedded revenue declined 25% YoY. I’ve seen AMD manage many a cyclical downturn, and there’s nothing to make me think they won’t this time around.
The Embedded side caught my eye: AMD's Versal AI Core adaptive SoCs (systems-on-chip) will power SpaceX's next-generation satellites. In my original AMD deep dive I explained how AI will be everywhere and how AMD's FPGA technology uniquely situates the company to run AI workloads in devices at the edge. AMD announced its second-generation Versal chip in Q1 2024 and is now powering satellites for perhaps the most promising space company on Earth. I believe this is but a small taste of what AMD's adaptive technology can do as AI emerges from the datacenter into the real world.
My long-term AMD thesis remains intact. The company is making great progress; the pieces are coming together for AMD to become an end-to-end AI giant. I believe that in five years' time the company's income statement will hardly be recognizable.
Until next time!
⚡ If you enjoyed the post, please feel free to share with friends, drop a like and leave me a comment.
You can also reach me at:
Twitter: @alc2022
LinkedIn: antoniolinaresc