Edited by Brian Birnbaum. An update of my original AMD thesis.
The market is ignoring the vast potential of AMD’s FPGA technology, which paves the way for an uncontested $200B business. This would be the second time Lisa Su decides to pursue an opportunity that the market doesn’t understand.
Last time she did that, the stock multiplied 25X from the market’s peak point of misunderstanding.
Since acquiring Xilinx, AMD has been positioning itself to lead what is projected to become a $200B+ market by the end of the decade: AI at the edge. This strategy is powered by FPGAs (field-programmable gate arrays), the breakthrough technology brought in through the Xilinx acquisition that enables chips to dynamically reconfigure in real time. This reconfigurability promises to let FPGAs run neural nets more efficiently on smaller devices.
Looking ahead, AI will expand beyond datacenters, integrating into billions of edge devices. Running AI on these devices demands specific capabilities—namely, superior energy efficiency and versatility. FPGAs excel in this regard, far surpassing other chips, and Xilinx stands as the undeniable leader in this domain. The distant runner-up is Altera, which Intel acquired in 2015. With Intel struggling to remain competitive, AMD's Xilinx division sees blue skies ahead.
As a result, the acquisition has positioned AMD far ahead of its traditional rivals, making it nearly impossible for them to challenge AMD in this rapidly growing space over the next five years. AMD’s FPGA division is set to become a crucial part of the AI industry, similar to how Nvidia’s GPUs dominate the datacenter market today. The AI at the Edge market is yet to blossom, but once it does, AMD stands to benefit disproportionately.
To understand why FPGAs are crucial to this new era of AI, we first need to explore how they work. A Field-Programmable Gate Array (FPGA) is a type of integrated circuit that can be reprogrammed after manufacturing to perform a specific task. Unlike CPUs and GPUs, which are designed with a fixed architecture for general-purpose or parallel computing tasks, FPGAs can be customized for specific workloads on the fly. Their malleability is ideal for tasks that require hardware optimization.
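To make the reconfigurability idea concrete, here is a minimal sketch (plain Python, not real FPGA tooling): FPGA logic blocks are essentially lookup tables (LUTs), and "reprogramming" the chip amounts to rewriting the contents of those tables after manufacturing.

```python
# Sketch of the core idea behind FPGA reconfigurability: a logic block
# is a lookup table (LUT) whose contents can be rewritten at any time.

def make_lut2(truth_table):
    """Build a 2-input logic gate from a 4-entry truth table.
    Reprogramming the FPGA = loading a different truth table."""
    def gate(a, b):
        return truth_table[(a << 1) | b]
    return gate

# Program the same "hardware" as an AND gate...
and_gate = make_lut2([0, 0, 0, 1])
# ...then reprogram it as an XOR gate, with no new silicon required.
xor_gate = make_lut2([0, 1, 1, 0])

print(and_gate(1, 1))  # 1
print(xor_gate(1, 1))  # 0
```

A real FPGA wires millions of such tables together with a configurable routing fabric, which is what lets one chip be specialized for radio processing today and neural-net inference tomorrow.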
In the context of AI, FPGAs offer a unique advantage. Neural networks, which power AI models, require a large number of mathematical operations to process inputs and generate outputs. These operations are computationally intensive and require specialized hardware for efficient execution. FPGAs allow for the creation of custom circuits that can directly map AI operations to hardware, such as matrix multiplications and convolutions, dramatically reducing power consumption, latency, and computational cost compared to general-purpose CPUs and even GPUs.
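The efficiency gain comes largely from tailoring arithmetic precision to the workload. A hedged illustration (hypothetical, not AMD/Xilinx code): quantizing a neural-net dot product to 8-bit integers lets a custom circuit replace wide floating-point units with cheap integer multiply-accumulate (MAC) arrays, the kind of datapath an FPGA can hardwire.

```python
# Illustrative sketch of why custom low-precision datapaths are cheap:
# an 8-bit integer dot product approximates the float result closely,
# while an FPGA MAC array for int8 costs far less power and area than
# general-purpose float32 hardware.

def quantize(vec, scale=127):
    # Map floats in roughly [-1, 1] to signed 8-bit integers.
    return [max(-128, min(127, round(x * scale))) for x in vec]

def int8_dot(wq, xq, scale=127):
    # Integer multiply-accumulate, as a hardwired MAC array would do,
    # followed by a single rescale back to floating point.
    acc = sum(w * x for w, x in zip(wq, xq))
    return acc / (scale * scale)

weights = [0.5, -0.25, 0.75]
inputs = [0.8, 0.4, -0.2]

exact = sum(w * x for w, x in zip(weights, inputs))    # float reference
approx = int8_dot(quantize(weights), quantize(inputs)) # int8 version
print(exact, approx)  # nearly identical results
```

Scaled up to the matrix multiplications and convolutions in L8's sense, this is the trade an FPGA exploits: accept a tiny, controlled precision loss in exchange for dramatically lower power and latency per operation.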
In addition, FPGAs excel in low-latency environments. This is crucial for AI inference, especially when running on devices at the edge–think smart cameras, drones, and autonomous vehicles–where decisions need to be made in real time. While GPUs and CPUs are fantastic for training models in large datacenters, FPGAs are a much better fit for the needs of inference in small, power-constrained devices. Their adaptability allows them to execute highly specialized AI functions on demand, attaining levels of efficiency that other compute engines can't match.
While Nvidia has made significant strides in AI, particularly in the datacenter, its GPUs are not optimized for edge computing. Nvidia's chips are power-hungry and require specialized infrastructure to deploy, making them less than ideal for devices like smartphones, IoT devices, and autonomous drones. On the other hand, AMD's FPGA technology is tailor-made for these environments, where power efficiency and real-time processing are critical.
This is not to say that Nvidia won't be able to create similar technology. However, Xilinx's lead in the FPGA market is further compounded by AMD's chiplet platform. Xilinx's FPGAs are already widely used in industries such as telecommunications, automotive, and aerospace, but their true potential lies in the rapidly growing field of AI. The combination of AMD's chiplet architecture and Xilinx's FPGA technology enables AMD to create highly efficient, low-power solutions that can run AI inference at the edge with greater flexibility than Nvidia's monolithic GPU designs or those of any other competitor.
Indeed, on their own, FPGAs aren't particularly useful for AI at the edge. Rather, they need to be attached to other general-purpose compute engines–and that's where AMD excels. They've been perfecting their chiplet platform for over a decade now. Although AMD has been relatively quiet regarding advancements with Embedded (which contains Xilinx operations), in Q1 2024 AMD CEO Lisa Su shared some insightful remarks:
Versal Gen 2 adaptive SoCs are the only solution that combine multiple compute engines to handle AI preprocessing, inferencing, and post processing on a single chip, enabling customers to rapidly add highly performant and efficient AI capabilities to a broad range of products.
Some say that FPGAs (or similar programmable engines) won't fulfill their potential at the edge, citing that they've been around for a while without making any significant impact on AI. But Lisa's remarks demonstrate how AMD is making real progress toward the bull case herein. AI at the edge hasn't yet reached critical mass; AMD's unique ability to integrate FPGA chips with any other compute engine will explode onto the scene once the applications come into widespread use. Even so, customers are already buying the products that emerge from this structural advantage.
This underappreciated aspect of the AMD thesis is what makes me most bullish about the company. Indeed, in 2015 Wall Street pushed Lisa Su to get into tablets. She decided to pursue a different route, one the world didn’t quite understand, and now the stock trades 25X higher. I see the same dynamic unfolding again, as Lisa bets big on programmable compute engines and the market continues to look in the other direction.
AMD’s strategy to blend existing CPU and GPU products with FPGAs reveals the boundless opportunities within AMD’s highly modular platform. AMD’s chiplet-based platform, which allows for unprecedented modularity and adaptability, offers lower total cost of ownership (cheaper compute overall) for customers while providing more tailored solutions for a variety of workloads. As AI matures it will likely require compute workloads that we can’t imagine yet, upon which AMD’s platform will be uniquely suited to capitalize.
This imbues prodigious asymmetry into the thesis. There's no guarantee that AMD's FPGA technology will remain more efficient and effective, but the optionality of AMD's platform is vast and will likely yield novel and lucrative businesses that we can't envision today. As the rising operating leverage seen in Q4 2024 demonstrates, this platform enables AMD to take on new businesses at marginal cost.
As mentioned above, real-world applications for AMD's FPGA-powered solutions are already coming into focus, albeit quietly for now. In Q1 2024 AMD announced that its Versal AI Core adaptive SoCs are now powering SpaceX's next-generation satellites. This marks a significant milestone as the company's cutting-edge technology plays a crucial role in one of the most innovative space ventures today, supporting the notion that AMD's FPGA technology is unique.
Thus, while traditional analysts have focused on the competitive dynamics between AMD and Nvidia in the AI training market, they have largely overlooked the massive potential of AMD’s platform, particularly in inference–which is the major component of and requirement for real-world AI application. AMD’s FPGA technology, combined with its chiplet architecture, provides a unique and scalable solution for powering AI inference at a fraction of the power consumption and cost of traditional GPUs.
The inference market is poised to be one of the largest growth areas in AI over the next decade, and AMD’s strategic investments in FPGA technology give it a clear path to dominate this space. With partnerships already in place with industry giants like Meta and SpaceX, AMD is well on its way to becoming the go-to provider of AI inference solutions.
As the world’s economy transitions into an inference machine, AMD stands to reap the rewards of its vision and technological innovation. It’s only a matter of time before the market realizes the true value of AMD’s AI capabilities. When that moment comes, the company’s stock will likely see exponential growth.
For long-term investors, the best is yet to come.
⚡ If you enjoyed the post, please feel free to share with friends, drop a like and leave me a comment.
You can also reach me at:
Twitter: @alc2022
LinkedIn: antoniolinaresc