On June 13, Advanced Micro Devices, or AMD, revealed new details about an artificial intelligence (AI) chip that could challenge market leader Nvidia.
AMD, a company based in California, said that its most advanced graphics processing unit (GPU) for AI, the MI300X, will start shipping in the third quarter of 2023, with mass production following in the fourth quarter.
AMD’s announcement represents the biggest challenge yet to Nvidia, which currently dominates the AI chip market with more than 80% market share. GPUs are the chips that companies like OpenAI use to build cutting-edge AI programs such as ChatGPT. Their parallel processing capabilities let them handle large amounts of data simultaneously, making them ideal for workloads like training and running large AI models.
AMD says its latest MI300X chip and the CDNA architecture behind it were developed specifically to meet the demands of large, advanced AI models. With a maximum memory capacity of 192 GB, the MI300X can accommodate even larger AI models than competing chips such as Nvidia’s H100, which offers 80 GB of memory.
AMD’s Infinity Architecture technology combines eight MI300X accelerators in a single system, mirroring similar systems from Nvidia and Google that integrate eight or more GPUs for AI applications.
During a presentation to investors and analysts in San Francisco, AMD CEO Lisa Su stressed that AI represents the company’s “most significant and strategically important long-term growth opportunity.”
“We think the market for data center AI accelerators will grow from something like $30 billion this year, at over 50% CAGR, to over $150 billion by 2027,” she said.
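The quoted figures are internally consistent: compounding roughly $30 billion at a 50% annual growth rate over the four years from 2023 to 2027 lands just above $150 billion. A minimal sketch of that arithmetic, using the numbers from the quote (not official forecast data):

```python
def project(base: float, cagr: float, years: int) -> float:
    """Compound a base value at the given CAGR for a number of years."""
    return base * (1 + cagr) ** years

# ~$30B in 2023, 50% CAGR, four compounding years to 2027
market_2027 = project(30e9, 0.50, 2027 - 2023)
print(f"${market_2027 / 1e9:.0f}B")  # prints "$152B", consistent with "over $150 billion"
```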
If developers and server makers embrace AMD’s AI “accelerator” chips as an alternative to Nvidia’s products, it could open up a significant untapped market for the chipmaker. AMD, known for its mainstream computer processors, could benefit from the potential shift in demand.
Although AMD hasn’t revealed any pricing details, the move could put downward pressure on the price of Nvidia’s GPUs, including models like the H100, which can cost $30,000 or more. Falling GPU prices could in turn reduce the overhead costs of running resource-intensive generative AI applications.
Clarification: The information and/or opinions expressed in this article do not necessarily represent the views or editorial line of Cointelegraph. The information presented here should not be taken as financial advice or an investment recommendation. All investments and trades involve risk, and it is the responsibility of each person to do their own research before making an investment decision.