Ad: my AMD investment is up over 39x to date. There may have been plenty of luck involved in picking the stock at first, but not in holding it through the multiple 60%+ declines over the last decade.
You can learn about all the mental models that I’ve used to hold AMD and turn it into a life changing investment in my 2 Hour Deep Diver course. It sells for just $199 and it has been tried and tested with more than 60 students.
These mental models are also behind my successful investments in Tesla, Spotify, and Palantir. They will also help you avoid the kinds of mistakes I’ve made in the stock market over the past decade, which have cost me over $300K.
The price of the course will go up to $250 soon.
No time to read the update? Watch/listen for free:
Edited by Brian Birnbaum.
1.0 AI at the Edge
The Xilinx acquisition in 2022 was as masterful as it was misunderstood. It has opened the door to an uncontested $200B business for AMD, with Q1 being a pivotal step forward. However, towards the end of this update I discuss something that I found disappointing in the Q1 earnings.
Since the Xilinx acquisition, AMD has been positioning itself to dominate what is expected to be a $200B+ market by the end of this decade: AI at the edge. This is made possible by FPGAs (field programmable gate arrays), the key technology onboarded via the Xilinx acquisition that allows chips to reconfigure themselves on the go.
Going forward, AI will extend its reach beyond data centers and into billions of devices at the edge. Running AI on devices requires unique operational characteristics, namely much higher levels of energy efficiency and overall versatility. FPGAs excel at this like no other kind of chip, and Xilinx is the undisputed leader.
The acquisition has therefore set AMD apart from traditional competitors, making it practically impossible for them to compete in this emerging space over the next five years. Going forward, AMD’s FPGA business promises to evolve into something like datacenter GPUs for Nvidia at present.
In the graph below you can see how, just three years before the acquisition, Xilinx had a global FPGA market share of 52%, far ahead of Intel’s 35%. Intel also acquired its way into this space by picking up Altera in 2015, which at the time was competing head to head with Xilinx. However, with Pat Gelsinger now leading Intel, AMD’s dominance may be challenged.
Experts explained to me repeatedly over the years why FPGAs would not work for AI inference. Yet in Q1 2024, AMD announced its second-generation Versal Adaptive SoC (system-on-chip), claiming it will enable customers to “rapidly add highly performant and efficient AI capabilities to a broad range of products.”
As I anticipated, it seems that AMD discovered the secret sauce.
AMD has gotten FPGAs to work for AI and is able to deploy them at a marginal cost via its chiplet architecture, which makes connecting different compute engines relatively easy. Notably, hypothetical competitors need to not only surpass AMD’s FPGA technology but also its interconnect technology (Infinity Fabric) if they want to seamlessly connect FPGAs with any other compute engine.
This seems highly unlikely for the foreseeable future, which is why I believe the Xilinx acquisition will pay off many times over in the long term. At present, the embedded segment is a drag on AMD’s financials as customers focus on correcting their inventory levels. But as the AI economy gradually becomes inference-driven, the long-term opportunity is vast.
By that I mean that in ten years’ time, the economy will run mostly on AI models making predictions (inferences). As previously explained, a big percentage of all inferences will happen at the edge, where AMD is optimally suited to serve customers defensibly. Over the long term, $200B may even fall short.
Here’s what AMD CEO Lisa Su said during the call about this new business:
Longer term, we see AI at the edge as a large growth opportunity that will drive increased demand for compute across a wide range of devices.
I am watching Versal’s performance at the edge with keen interest. Meanwhile, AMD is simultaneously fighting a battle with its much larger rival Nvidia, with experts showing ambiguous confidence in AMD’s odds of succeeding. Next, I explain why I still believe AMD will give Nvidia a run for its money and transcend the current perception of it as an also-ran.
2.0 AMD vs Nvidia
The market is falling prey to anchoring bias.
The market thinks AMD’s efforts to disrupt Nvidia’s dominance in the AI space have now been neutralized. Yet an in-depth review of these concerns suggests otherwise.
The market was disappointed that in Q1 2024 AMD raised its guidance for FY2024 AI GPU revenue to only $4B, rather than the $6B many had hoped for. Although this is far less than Nvidia’s sales, the market is overlooking how quickly AMD ramped the MI300 GPU, which surpassed $1B in quarterly sales in just two quarters, AMD’s fastest ramp in history.
In my view, the market is falling prey to anchoring bias: it’s rejecting AMD’s potential because the company “failed” to ramp the MI300 within an arbitrarily predetermined time frame. On the contrary, extrapolating the current rate of progress suggests AMD will steal considerable market share over the coming years.
The graph below depicts GPU sales over time. Though progress appears minute relative to Nvidia, AMD’s sales (the black line) are in fact trending up exponentially. AMD’s GPU sales nearly doubled over the course of merely two quarters through Q4 2023. Given our understanding of the power of compounding, it won’t take nearly as long as assumed for AMD to take significant share.
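The compounding argument above can be sketched with a few lines of code. All figures here are illustrative assumptions for the sake of the exercise, not AMD’s reported numbers: I assume a ~$1B quarterly starting point, growth that doubles sales every two quarters, and a hypothetical $8B quarterly run-rate as a proxy for "significant share."

```python
# Illustrative sketch of the compounding argument.
# All figures are assumptions for illustration, not AMD's reported numbers.
initial_sales = 1.0        # $B per quarter, roughly where the MI300 ramp crossed
quarterly_growth = 0.41    # ~41% per quarter, i.e. doubling every two quarters (1.41^2 ≈ 2)
target = 8.0               # hypothetical quarterly run-rate implying meaningful share

sales = initial_sales
quarters = 0
while sales < target:
    sales *= 1 + quarterly_growth
    quarters += 1

print(quarters)  # quarters needed to reach the target at this assumed pace
```

The point of the sketch is not the specific numbers but the shape of the curve: at a doubling cadence, an 8x gap closes in under two years, which is why extrapolating linearly from today’s gap understates AMD’s trajectory.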
Qualitatively, it was also a great milestone to see Microsoft offer AMD’s MI300 GPUs to Azure customers as an alternative to Nvidia’s processors. It takes a while for customers to trust a new chip and lean on it at scale. This is a great first step forward that may compound if Microsoft chooses to continue scaling out the MI300.
Further, AMD investors and analysts were concerned about Nvidia’s new Blackwell chip, which is composed of chiplets. However, as I explained in my latest Nvidia update, Blackwell is actually made of two monolithic chips that act as chiplets at the networking level, which does not free Nvidia from the complexities of the monolithic approach.
Therefore, the chiplet thesis, depicted in the graph below, remains in play. As we move towards smaller process nodes, the complexity of Nvidia’s approach grows exponentially, while AMD’s grows far more slowly. AMD thus remains in a position to iterate its way to similar if not superior levels of compute performance, at lower cost. Further, AMD’s chiplet architecture gives it an advantage on the inference side.
Here’s what Lisa Su said during the call about inference:
Right now, I think MI300X is in a sweet spot for inference, very, very strong inference performance.
I see as we bring in additional products later this year into 2025, that that will continue to be a strong spot for us. And then we're also enhancing our training performance and our software road map to go along with it.
In the battle over inference, AMD is tackling major complexity on the software side, whereas Nvidia is protected by a near insurmountable barrier to entry. However, by pivoting to an open-source format, AMD has a chance of succeeding. Even if AMD fails to disrupt Nvidia, I believe its roadmap is differentiated enough for the company to succeed over the long term regardless, as previously explained.
Hence, the asymmetry of the thesis.
Lastly, the market is also not discounting the versatility of AMD’s chiplet architecture, which allows AMD to iterate its products faster. AMD doesn’t have to rebuild a chip to modify it. The company can simply tweak it, as demonstrated by the difference between the MI300A and the MI300X.
The MI300A carries three CPU tiles where the MI300X carries three GPU tiles, ultimately catering to decidedly disparate end applications.
During the call, Lisa Su explained how the chiplet design allows AMD to work with customers closely and how they plan multiple generations ahead, addressing the market’s fears about AMD’s AI GPUs falling behind:
[…] when we start with the road map, I mean, we always think about it as a multiyear, multigenerational road map. So we have the follow-ons to MI300 as well as the next, next generations well in development.
I think what is true is we're getting much closer to our top AI customers, they're actually giving us significant feedback on the road map and what we need to meet their needs.
Our chiplet architecture is actually very flexible. And so that allows us to actually make changes to the road map as necessary. So we're very confident in our ability to continue to be very competitive.
3.0 AI PCs
Microsoft has put AMD in the back seat for AI PCs, for now.
Although I am confident in AMD’s FPGA/inference roadmap and its ability to take market share from Nvidia, I am disappointed with some events that have unfolded after the Q1 earnings.
During the Q1 call, Lisa Su expressed confidence in AMD’s ability to be a key player in the AI PC (artificial intelligence personal computer) space and placed great emphasis on the partnership with Microsoft. But during its latest AI event, Microsoft put AMD in the back seat and promoted Qualcomm to top partner.
Here’s what Lisa said about this during the call:
We see AI as the biggest inflection point in PC since the Internet with the ability to deliver unprecedented productivity and usability gains.
We're working very closely with Microsoft and a broad ecosystem of partners to enable the next generation of AI experiences powered by Ryzen processors, with more than 150 ISVs on track to be developing for AMD AI PCs by the end of the year.
I found this mismatch fairly disappointing. The development is worth watching closely because Qualcomm develops ARM (Advanced RISC Machine) chips, which are generally more energy efficient than x86 processors (the architecture Intel and AMD use) because they run on a simpler instruction set.
Until now, the PC industry had not adopted ARM processors because most software was written for x86. Transitioning to ARM chips therefore requires rewriting a lot of software, but since AI is so computationally intense, it seems Microsoft now believes the move to ARM is worth it.
I will be analyzing this situation in depth over the coming quarter and will share my thoughts on the next quarterly update, so stay tuned for that.
4.0 Conclusion
AMD’s financials remain in great shape and the company is optimally positioned for AI at the edge. I need more clarity on the battle between the x86 and ARM architectures in the PC space.
At the end of the quarter, AMD had $6B in cash and just $0.75B in short term debt and $1.7B in long term debt. The balance sheet therefore remains in great shape, affording AMD plenty of dry powder to continue investing in AI.
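The net cash position implied by those figures is a quick subtraction. This is just a back-of-the-envelope check on the "dry powder" claim, using only the numbers quoted above:

```python
# Net cash from the balance-sheet figures quoted above (in $B).
cash = 6.0
short_term_debt = 0.75
long_term_debt = 1.7

net_cash = cash - short_term_debt - long_term_debt
print(net_cash)  # cash remaining after hypothetically retiring all debt
```

In other words, AMD could retire every dollar of debt tomorrow and still have roughly $3.5B of cash left to fund its AI roadmap.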
Cash from operations came in at $521M for the quarter. You will observe in the graph below that this metric has been trending down because of AMD’s client, gaming, and embedded segments, which have been correcting over the past two years. As these businesses revert to growth going forward, cash from operations should continue trending back up.
Although I am disappointed about the situation with Microsoft, AMD’s roadmap continues to evolve, positioning the company to dominate AI at the edge. FPGAs are a vital technology in this domain and AMD has a clear lead, if not a cornered resource.
Until next time!
⚡ If you enjoyed the post, please feel free to share with friends, drop a like and leave me a comment.
You can also reach me at:
Twitter: @alc2022
LinkedIn: antoniolinaresc