Edited by Brian Birnbaum. An update of my original AMD thesis.
AMD presents a risk-reward similar to that of Meta and Amazon in 2022 and is likely the next trillion-dollar AI giant.
Despite AMD’s fundamentals being stronger than ever, the shareholder base is more depressed than it has been in a decade, largely because of obsessive face-value comparisons to Nvidia. Because shareholders continue to ignore AMD’s highly differentiated potential, my thesis remains intact: AMD’s intrinsic value will increase by at least an order of magnitude over the coming five years, making it one of the largest AI companies on Earth.
Indeed, I believe AMD is a multi-trillion-dollar company in the making.
AMD’s annual revenue came in at a record $25.8B in 2024, roughly 6.6X 2015’s revenue. That growth has largely been the result of working closely with big customers to deliver the best CPUs possible, while the chiplet platform has enabled rapid product iteration. Over the long term, close collaboration coupled with speedy iteration has translated into much higher revenue and margins.
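For readers who like to sanity-check these multiples, here is a minimal sketch of the arithmetic. The 2015 figure of roughly $3.9B is my assumption, back-solved from the 6.6X multiple rather than taken from a filing.

```python
# Rough sanity check of the headline growth multiple and the implied CAGR.
# Assumption: ~$3.9B of 2015 revenue, back-solved from the 6.6X figure above.

revenue_2015 = 3.9   # $B, assumed
revenue_2024 = 25.8  # $B, reported record for 2024

multiple = revenue_2024 / revenue_2015   # ~6.6X
years = 2024 - 2015                      # 9 years
cagr = multiple ** (1 / years) - 1       # ~23% per year

print(f"Growth multiple: {multiple:.1f}X")
print(f"Implied CAGR over {years} years: {cagr:.1%}")
```

A roughly 23% compounded annual growth rate over nine years is the context for the enthusiasm described below.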
In the graph below you can see how revenue and gross margins have increased in tandem since 2015, driving much of the (former) enthusiasm for AMD stock.
AMD’s more recent, and more extraordinary, fundamental progress is not yet visible in the numbers. In 2024, AMD laid the foundation for its AI GPU business to kick into the next gear of growth, likely far more explosive than the growth investors enjoyed over the past ten years. The company forged critical bonds with Meta, Oracle, and Microsoft, which are getting increasingly comfortable with AMD’s technology.
The odds of AMD’s AI GPU business growing exponentially from here are meaningful: the company is running the same playbook it used to grow its CPU business. Every quarter I see select hyperscalers leaning further into AMD’s GPUs, incrementally validating that strategy. As I have highlighted in the past, Meta serves Llama 405B inference exclusively on AMD’s MI300X, which Lisa Su reiterated during the Q4 2024 earnings call:
Meta exclusively used MI300X to serve their Llama 405B frontier model on meta.ai and added instinct GPUs to its OCP-compliant Grand Teton platform, designed for deep learning recommendation models and large-scale AI inferencing workloads.
AMD shareholders have grown despondent because they held realistic expectations for growth on an unrealistic timeline. They believed AMD could take massive share of the AI GPU market not long after Nvidia established its dominance. While technology in general is prone to rapid shifts, the semiconductor business still relies heavily on interpersonal dynamics: before a customer fully commits to a semiconductor vendor, its management must build trust.
Which, for those paying attention throughout 2024, they did. Datacenter revenue exploded from $0.8B in Q2 2021 to $3.9B in Q4 2024, a 4.9X increase in just three and a half years. The technology roadmap is getting stronger, which promises to accelerate growth further. AMD’s rapid progression, depicted in the graph below, is hard to reconcile with the current pessimism surrounding the stock.
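To put that ramp in rate terms, here is a small sketch that annualizes the growth between the two quarters quoted above; the endpoints are the ones cited in this post, not a full quarterly series.

```python
# Annualize the datacenter ramp from the two endpoints quoted above:
# $0.8B in Q2 2021 to $3.9B in Q4 2024 (14 quarters apart).

dc_q2_2021 = 0.8   # $B
dc_q4_2024 = 3.9   # $B
quarters = 14      # Q2 2021 -> Q4 2024

multiple = dc_q4_2024 / dc_q2_2021                  # ~4.9X
quarterly_growth = multiple ** (1 / quarters) - 1   # ~12% per quarter
annualized = (1 + quarterly_growth) ** 4 - 1        # ~57% per year

print(f"Multiple: {multiple:.1f}X")
print(f"Compounded quarterly growth: {quarterly_growth:.1%}")
print(f"Annualized growth rate: {annualized:.1%}")
```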
What most caught my attention during AMD’s Q4 2024 earnings call was Lisa Su’s mention of AMD successfully using ROCm to carry out several at-scale deployments with hyperscalers:
We have also successfully ramped large-scale production deployments with numerous customers using ROCm, including our lead hyperscale partners.
In my last AMD post I mentioned that the rapid integration of DeepSeek-V3 made me feel AMD was rapidly productizing its software. Running multiple large-scale deployments in parallel requires operational leverage, which suggests AMD’s software is getting easier to deploy and operate. Lisa’s remarks (above) further suggest that AMD is making good progress on the software side.
As AMD continues to iterate on its ROCm software, I believe it will eventually be productized for and adopted by the broader market. The software is already capable enough for Meta to bet big on it. Simply continuing to improve and productize ROCm promises to accelerate the rate at which AMD takes share in the AI space.
Analysts were upset with the datacenter guidance for Q1 2025, which I believe is irrelevant for long-term investors. As Lisa said during the call, the segment is poised to reach an annual revenue run rate of tens of billions of dollars over the coming years:
We have made outstanding progress building the foundational product, technology and customer relationships needed to capture a meaningful portion of this market.
And we believe this places AMD on a steep long-term growth trajectory, led by the rapid scaling of our data center AI franchise from more than $5 billion of revenue in 2024 to tens of billions of dollars of annual revenue over the coming years.
The market’s reaction to Musk’s and Karp’s evangelical earnings calls was exuberant. It’s therefore strange that the market continues to willfully ignore Lisa Su’s similarly bullish commentary.
As with Meta and Amazon in 2022, the market doesn’t grasp the importance of AMD’s latest investment cycle. Starting with the Xilinx acquisition, announced in late 2020, AMD made a series of investments that set the foundation for much greater success over the coming decade. The resulting hit to earnings has been exacerbated by the cyclical downturns of the gaming and embedded segments.
This is well evidenced by the decline in free cash flow per share, seen in the graph below. The pessimism holds a near 1:1 correlation with this metric, which is now rebounding. As datacenter revenue continues to accelerate and gaming and embedded bounce back in 2025, AMD’s free cash flow per share is set to grow rapidly, driving the stock price up with it.
Lisa said during the Q4 2024 earnings call that she expects the demand environment to improve across all of AMD’s business in 2025:
For 2025, we expect the demand environment to strengthen across all of our businesses, driving strong growth in our data center and client businesses and modest increases in our gaming and Embedded businesses.
Against this backdrop, we believe we can deliver strong double-digit percentage revenue and EPS growth year-over-year.
As with Amazon and Meta in 2022, the market is extrapolating temporary financial weakness far into the future. We also saw this recently with Netflix. Generally speaking, given a sufficient period of time, the market always finds compelling reasons to sell extraordinary companies at a discount.
Further, AMD’s upside extends beyond the GPU, gaming, and embedded segments. As I have explained previously, AMD’s chiplet architecture gives it a unique advantage in the emerging inference market, which promises to be orders of magnitude bigger than the training market. Additionally, AMD’s Xilinx acquisition opens the door to a largely uncontested $200B AI-at-the-edge opportunity, as the recent deal with SpaceX demonstrates.
And we’re still not done. AMD’s AI PC business is taking off, with AMD CPUs leading across the board. Client segment revenue came in at a record $2.3B in Q4 2024, a further sign of extraordinary fundamental progress.
These opportunities are attractive on their own, but AMD’s chiplet platform also creates synergies between them that the market fails to recognise. Chiplets enable AMD to mix and match compute engines at will, taking on specific workloads with a total cost of ownership that competitors can’t replicate, which should yield tremendous earning power and margins over the coming decade.
For example, chiplets allow AMD to inject laptops with AI capabilities by adding Xilinx-derived AI engines to their CPUs. They can do this at a marginal cost because they have leading CPUs and FPGAs and have spent a decade honing their chiplet platform. In fact, this is how they’ve been able to take on the AI PC opportunity. Not surprisingly, client segment operating income as a percentage of revenue is up to 19%, from 4% a year ago.
Except for the two temporarily declining businesses (gaming and embedded), AMD is exhibiting notable operating leverage across the board. Datacenter operating income as a percentage of revenue is up to 30%, from 29% a year ago. I believe this trend is due to AMD having to spend fewer incremental dollars to capitalize on new compute workloads as they emerge. The cyclical decline of the gaming and embedded businesses is masking the progress here.
For instance, in the interview below you will hear Lisa Su explain how AMD designed the MI300X to be highly competitive in inference workloads. They were able to do this at a marginal cost because their chiplet platform lets them attach more memory per GPU than competitors can. That, in turn, delivers a lower total cost of ownership for inference workloads, as evidenced by the inference deal with Meta and the rising operating leverage.
AMD’s platform enables them to prioritize workloads. Inference, for instance, is the top of the funnel for the AI industry: customers don’t want to spend all their time training models; they want to run useful AI applications. Inference is what powers those applications, and it is therefore where the value of the entire AI supply chain is delivered. By extension, it’s the most valuable touchpoint with customers, setting AMD up to address additional workloads further down the funnel, like training.
Lisa’s remarks during the Q4 2024 earnings call were highly insightful:
People like the work that we've done in inference. But certainly, our customers want to see us as a strong training solution. And that's consistent with what we've said, right? We've said that we have like a step-wise road map to really show each one of those solutions.
AMD was planning to release the MI350 in H2 2025, but the launch has been pulled forward to midyear. Customers are increasingly comfortable with AMD’s technology and plan to use the hardware for more than inference, since it’s a more powerful chip. Here’s what Lisa said about this during the Q4 2024 earnings call:
I would say from our standpoint, we've gotten incrementally more positive on the 2025 data center GPU ramp.
I think MI350 series was in second half always, but pulling it into midyear is an incremental positive. And from a first half, second half statement, as I mentioned, we have some new important AI design wins that are going to be deployed with MI300 and MI325 in the first half of the year.
But with MI350 series, we end up with more content. I mean it is a more powerful GPU, ASPs go up, and you would expect larger deployments that include training and inference in that time frame. So the shape is similar to what we would have expected before.
This is why AMD has far more upside than the market currently believes. They’re positioned to pursue numerous businesses worth hundreds of billions of dollars, at a marginal cost, in a way that’s increasingly hard for competitors to replicate. The Q4 earnings report offers considerable evidence for the thesis: deals with some of the most powerful AI companies on Earth point to the competitiveness of AMD’s products, and rising operating leverage is testament to the platform’s versatility.
Meanwhile, AMD shareholders are discouraged by a confluence of bearish narratives, exacerbated further by the aforementioned cyclical downturns. At present my original AMD investment is up 25X, but just a year ago it was up over 50X. I believe soon enough it will become my first 100X investment, as the power of AMD’s roadmap translates into profits.
Until next time!
⚡ If you enjoyed the post, please feel free to share with friends, drop a like and leave me a comment.
You can also reach me at:
Twitter: @alc2022
LinkedIn: antoniolinaresc