AMD: Disrupting Nvidia, Part 2
Q3 2023 ER Update
1.0 AMD's GPUs Are Ready to Compete
Recent evidence suggests AMD's GPUs are already competitive in the marketplace. Management's remarks during the recent Q3 conference call corroborate this view.
Almost ten years into my journey as an AMD shareholder, I continue to be more than pleased with the company's evolution; my return since first investing in 2014 is 2,700%. Still, I believe the company to be severely undervalued at present. In Q3 we began to see AMD's new product roadmap gain traction and position the company for continued non-linear growth over the next decade.
AI is quickly evolving into the world's new computing platform. AMD is primed to take full advantage, repositioning as an AI-first organization. In my AMD deep dive, I explain why the company has a structural advantage over its peers and is indeed set to thrive as AI goes mainstream.
AMD has mastered chiplets over the last decade, which:
Boast much higher yields and therefore cost less than monolithic chips.
Match the computational power and efficiency of monolithic chips.
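To make the yield point concrete, here is a back-of-the-envelope sketch using the classic Poisson yield model. The defect density and die areas are illustrative assumptions on my part, not AMD's actual figures:

```python
import math

# Poisson yield model: P(a die has zero defects) = exp(-defect_density * area).
# The defect density and die areas below are illustrative assumptions.
DEFECT_DENSITY = 0.1  # defects per cm^2 (assumed)

def die_yield(area_cm2: float, defect_density: float = DEFECT_DENSITY) -> float:
    """Fraction of dies of the given area that come out defect-free."""
    return math.exp(-defect_density * area_cm2)

monolithic_yield = die_yield(8.0)  # one large 800 mm^2 (8 cm^2) die
chiplet_yield = die_yield(2.0)     # one small 200 mm^2 (2 cm^2) chiplet

print(f"800 mm^2 monolithic die yield: {monolithic_yield:.1%}")  # ~44.9%
print(f"200 mm^2 chiplet yield:        {chiplet_yield:.1%}")     # ~81.9%
```

Because defective chiplets are discarded individually before packaging, a far larger fraction of wafer area ends up in sellable product, which is where the cost advantage comes from.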
AMD's rise to prominence over the last decade is the result of leveraging chiplets to disrupt Intel in the CPU space. As I explain in the deep dive, it is now employing the same strategy to disrupt Nvidia's dominance of the GPU space.
GPUs train and make inferences (i.e. predictions) with AI models. As AI evolves over the coming decades, the GPU market will grow exponentially–and AMD with it.
If AMD's new GPUs are competitive, the company will not only benefit from increased Datacenter sales, but will also be able to infuse each business segment with AI capabilities, driving growth on both the top and bottom lines along with improved margins.
On the Q3 conference call, management claims to have made “significant progress” in the Datacenter GPU business, with “significant customer traction” for the next generation MI300 chip. Additionally–and in line with previous guidance–Lisa Su said on the call that AMD Datacenter GPU revenue will be:
$400M in Q4 2023, implying roughly 50% QoQ growth for the Datacenter business.
Over $2B in FY2024.
$2B in FY2024 is a fraction of what Nvidia expects to sell during the same period. However, it's a solid first step in AMD's journey toward gaining GPU market share.
Abhi Venigalla of MosaicML offers a very interesting source of alternative data. Some months ago he shared research showing how easy it is to train an LLM (large language model) on AMD Instinct GPUs via PyTorch. He claims that, since the release of his work, community adoption of AMD GPUs has "exploded".
[…] we further expanded our AI software ecosystem and made great progress enhancing the performance and features of our ROCm software in the quarter.
- Lisa Su, AMD CEO during the Q3 2023 conference call.
From Abhi's new research, a few things stand out:
Training the same LLM on the same piece of hardware is 1.13X faster on ROCm 5.7 than on ROCm 5.4. I already knew AMD had a fast optimization pace on the hardware side, but this indicates that the company is beginning to operate similarly on the software side.
Note: ROCm is AMD's equivalent of Nvidia's CUDA.
Comparing AMD's MI250 against the same-generation Nvidia A100, the two computing units perform similarly when training the same LLM. Against the H100-80GB, however, which has much larger memory, the H100 performs much better. You can visualize the performance deltas in the graph below.
In a post from back in May I explain why LLMs require hardware architecture that disaggregates memory from compute. Essentially, LLMs are large, and, in order to make rapid inferences, you need the model in question near the actual compute engine–in fact, it needs to fit in on-chip memory. Incidentally, to train an LLM you also need to make inferences with it.
A chip with little memory will not be able to host an LLM on-chip and will instead require the model to be split across a number of chips. This disproportionately increases latency (the time it takes information to move between memory and compute), which slows down inference and, ultimately, degrades performance.
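The fit-on-chip arithmetic can be sketched in a few lines. This is a weights-only estimate in fp16 (2 bytes per parameter); real deployments also need headroom for activations and KV cache, and the model size used here is illustrative:

```python
import math

def model_size_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Weights-only footprint in GB, assuming fp16/bf16 (2 bytes per parameter)."""
    # params_billion * 1e9 params * bytes / 1e9 bytes-per-GB simplifies to:
    return params_billion * bytes_per_param

def gpus_needed(params_billion: float, gpu_mem_gb: float) -> int:
    """Minimum number of GPUs required just to hold the weights."""
    return math.ceil(model_size_gb(params_billion) / gpu_mem_gb)

# A hypothetical 70B-parameter model in fp16 is ~140 GB of weights:
print(gpus_needed(70, 40))  # 4 GPUs at 40 GB each
print(gpus_needed(70, 80))  # 2 GPUs at 80 GB each
```

Every extra chip the model is split across adds an off-chip hop to the critical path, which is exactly the latency penalty described above.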
The fundamental difference between Nvidia's A100-40GB and its A100-80GB is that the latter has more memory and higher memory bandwidth: roughly 1,555 GB/s versus 2,039 GB/s, respectively. Communication between the A100-80GB's compute engine and its memory is therefore faster, which makes inference faster, and so forth.
In the graph above, the performance delta between the A100-40GB and the A100-80GB reveals that doubling the memory more than doubles training throughput (TFLOP/s per GPU).
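As a rough sanity check on that result: under a simple memory-bound assumption (throughput scales with memory bandwidth), the A100-80GB's higher bandwidth (the public spec sheets list roughly 1,555 and 2,039 GB/s) only explains part of the gain:

```python
# Back-of-the-envelope check, assuming a purely memory-bandwidth-bound workload.
BW_A100_40GB = 1_555  # GB/s, public spec
BW_A100_80GB = 2_039  # GB/s, public spec

bandwidth_speedup = BW_A100_80GB / BW_A100_40GB
print(f"bandwidth-only speedup: {bandwidth_speedup:.2f}x")  # ~1.31x
```

A ~1.31x bandwidth gain cannot account for a >2x throughput gain on its own; the remainder plausibly comes from the extra capacity letting larger batches, or the whole model, stay on-chip, avoiding cross-GPU traffic entirely.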
AMD's new chiplet-based MI300 GPU has 128GB of memory. Given how much better the A100-80GB performs compared to the A100-40GB, I suspect the MI300's increased memory alone will make the chip competitive.
Abhi's research certainly matches Lisa Su's comments during the Q3 conference call:
[…] validation of our MI300A and MI300X accelerators continue progressing to plan with performance now meeting or exceeding our expectations.
Naturally, this sets the two companies up for an arms race. I believe the longer term will reveal the yield advantage that chiplets confer. Q4 will be pivotal for AMD, as its MI300 GPUs begin to ship.