Nvidia stands out as the leading provider of artificial intelligence (AI) infrastructure, and for good reason. Its graphics processing units (GPUs) are the workhorses for training large language models (LLMs), making them indispensable. Moreover, Nvidia's CUDA software platform and NVLink interconnect let large clusters of its GPUs operate as a single unified system, an advantage that amounts to a wide moat.
Nvidia is now the world's largest company, with a market value above $4 trillion. In the second quarter it held a 94% share of the GPU market, and its data center revenue surged 56% to $41.1 billion. Paradoxically, that very scale is why the better investment opportunities in the sector may lie elsewhere.
Two companies warrant a closer look: Advanced Micro Devices (AMD) and Broadcom (AVGO). Both of these smaller players are making strides in AI chips, and as the market's focus shifts from training to inference, their positions become even more strategic.
In short, while cloud computing giants and hyperscalers value Nvidia's GPUs, they want alternatives to control costs and diversify their supply chains.
1. AMD
AMD currently sits in second place in the GPU market behind Nvidia. But as workloads shift toward inference, AMD stands to gain an edge. Nvidia excels in training, where its CUDA ecosystem gives it a significant advantage; the bulk of demand growth, however, lies in inference, and AMD has already begun winning customers there.
AMD has said that one of the world's leading AI model providers runs a significant portion of its daily inference on AMD GPUs, and that seven of the top 10 AI model companies use AMD hardware. That matters because inference isn't a one-time event like training: every time someone asks an AI model a question or receives a recommendation, GPUs power the answer. At that scale, cost efficiency takes precedence over raw peak performance.
I believe AMD has a solid opportunity to capture market share here. Training depends on extensive libraries and tooling; inference doesn't demand the same software stack, and AMD's ROCm platform is more than capable of handling those lighter workloads.
When performance between competing products is similar, price often decides. With competitive pricing, AMD positions itself as an attractive choice for many buyers.
AMD doesn't need to take much of Nvidia's market share to move the needle: Nvidia's latest quarterly data center revenue was $41.1 billion versus AMD's $3.2 billion. Even small wins translate into large percentage gains off such a small base.
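The leverage of a small base can be sketched with simple arithmetic using the revenue figures above; the 5% share shift is a purely hypothetical assumption for illustration, not a forecast:

```python
# Small-base effect, using the article's quarterly figures (in $ billions).
nvidia_dc_revenue = 41.1  # Nvidia data center revenue (from article)
amd_dc_revenue = 3.2      # AMD data center revenue (from article)

# Assume AMD wins business equal to 5% of Nvidia's total (hypothetical).
shift = 0.05
gained = nvidia_dc_revenue * shift      # dollars shifting to AMD

amd_lift = gained / amd_dc_revenue      # growth relative to AMD's base
nvidia_hit = gained / nvidia_dc_revenue  # loss relative to Nvidia's base

print(f"AMD revenue lift: {amd_lift:.0%}")      # a roughly 64% jump for AMD
print(f"Nvidia revenue lost: {nvidia_hit:.0%}")  # only 5% for Nvidia
```

The same dollar amount that barely dents Nvidia's top line would nearly double AMD's data center business, which is the asymmetry the paragraph above describes.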
Beyond this, AMD co-founded the UALink Consortium alongside Broadcom and Intel to develop an open interconnect standard rivaling Nvidia's proprietary NVLink. If the effort succeeds, it would neutralize one of Nvidia's key advantages by letting customers build data center clusters from multiple chipmakers' hardware. It's a long-term play, but one that could level the competitive landscape over time.
As inference (that is, usage) grows more important than training over time, AMD doesn't have to outperform Nvidia to generate substantial profits; it merely needs a modestly larger slice of the market.
2. Broadcom
Broadcom is approaching the AI market from a different angle, and the potential rewards could be even greater. Rather than selling off-the-shelf AI GPUs, Broadcom helps customers design their own custom AI chips.
Broadcom is a pioneer in custom application-specific integrated circuits (ASICs), and it has applied that expertise to AI chips. Its first customer was Alphabet, which it helped design the highly successful tensor processing units (TPUs) that power much of Google Cloud. That success led to further design wins with Meta Platforms and TikTok parent ByteDance. Together, these three customers represent a market opportunity Broadcom pegs at $60 billion to $90 billion by its fiscal 2027 (ending October 2027).
The news got better when Broadcom disclosed that another customer, widely presumed to be OpenAI, had placed a $10 billion order for next year. Custom ASICs are rarely a quick build: Alphabet's TPUs took roughly 18 months from design to deployment, which was considered fast at the time, and this latest deal suggests Broadcom can sustain that pace. It also bodes well for future wins; late last year it emerged that Apple will be a fifth customer.
Custom chips offer real advantages for inference. Because they are tailored to specific workloads, they deliver better energy efficiency and lower cost than generic, off-the-shelf GPUs. As inference demand continues to outgrow training, Broadcom's role as a design partner of choice becomes increasingly valuable.
Custom chips do carry hefty up-front design costs, so they aren't for everyone. Still, they represent a considerable growth opportunity for Broadcom in the years ahead.
The bottom line
Nvidia remains the dominant force in AI infrastructure, and I expect that to persist for some time. But AMD and Broadcom, operating from much smaller bases, have significant opportunities ahead, and that could allow both to outperform expectations.