
AMD debuts AMD Instinct MI350 Series accelerator chips with 35X better inferencing

At its annual AI event, AMD unveiled its vision for a comprehensive end-to-end integrated AI platform and introduced open, rack-scale AI infrastructure built on industry standards.

The Santa Clara, California-based chip maker announced the new AMD Instinct MI350 series, which delivers four times the AI compute of the previous generation of chips and a 35-fold leap in inferencing.

AMD demonstrated its AMD Instinct-based products and the continued growth of the AMD ROCm ecosystem. It also showed off its powerful new open rack-scale designs and a roadmap that extends leadership rack-scale AI performance beyond 2027.

“We can now say that we are at an inflection point, and inference is going to be the driver,” said Lisa Su, CEO of AMD.

Then, in a jab at Nvidia, she said: “The future of AI will not be built by any one company or within a closed ecosystem. It will be shaped by open collaboration across the industry, with everyone bringing their ideas.”

Lisa Su, CEO of AMD, speaks on the progress of AI.

AMD unveiled the Instinct MI350 Series GPUs, which set a new standard for performance, efficiency, and scalability in AI and high-performance computing. The series, consisting of both the Instinct MI350X and MI355X GPUs, delivers a four-fold generation-on-generation increase in AI compute and a 35-fold leap in inferencing, paving the way for transformative AI solutions across industries.

“We are very excited about the work you are doing at AMD,” said Sam Altman, CEO of OpenAI, on stage with Lisa Su.

He said he could not believe it when he first heard the specifications of the MI350 from AMD, and that he was grateful AMD had taken his company’s feedback on board.

AMD said its latest Instinct GPUs can outperform Nvidia’s chips.

AMD showcased end-to-end, open-standards rack-scale AI infrastructure combining the Instinct MI350 series, fifth-generation AMD EPYC processors, and AMD Pensando Pollara network interface cards (NICs), and previewed its next-generation AI rack, called Helios.

Helios will be built on the next-generation Instinct MI400 series GPUs, Zen 6-based AMD EPYC “Venice” CPUs, and AMD Pensando “Vulcano” NICs.

“I think they are targeting a different kind of customer than Nvidia,” said Ben Bajarin, an analyst at Creative Strategies, in a message to GamesBeat. “Specifically, I think they see the opportunity with neoclouds and a whole host of tier-two and tier-three cloud providers, as well as on-premises enterprise deployments.”

“We are bullish on the shift to full rack-scale deployment systems, and that is where Helios fits in terms of timing. But with the market moving to inference, which is exactly what we have been watching for, AMD is well positioned to compete for those workloads. That goes back to who the right customer is for AMD, and it may be an entirely different customer profile than the customer for Nvidia.”

The latest version of AMD’s open-source AI software stack, ROCm 7, is designed to meet the growing demands of generative AI and high-performance computing workloads, with significant improvements to the developer experience across the board. (Radeon Open Compute is an open-source software platform that enables GPU-accelerated computing on AMD graphics processing units, particularly for high-performance computing and AI workloads.) ROCm 7 features improved support for industry-standard frameworks, expanded hardware compatibility, and new development tools, drivers, APIs, and libraries to accelerate AI development and deployment.
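The article includes no code, but as a minimal, hypothetical sketch of what ROCm’s framework support looks like in practice: PyTorch’s ROCm builds reuse the familiar `torch.cuda` API, and `torch.version.hip` carries a version string on those builds (it is `None` on CUDA or CPU-only builds). The snippet guards against PyTorch being absent entirely.

```python
# Hedged sketch (not from the article): detecting a ROCm-enabled PyTorch build.
# On ROCm wheels, torch.version.hip is a version string and the torch.cuda
# API is backed by HIP; on CUDA builds torch.version.cuda is set instead.
try:
    import torch
    if torch.version.hip:
        backend = "rocm"
    elif torch.version.cuda:
        backend = "cuda"
    else:
        backend = "cpu"
except ImportError:  # PyTorch not installed in this environment
    backend = "unavailable"

print("torch backend:", backend)
```

Because the same `torch.cuda` calls work on both ROCm and CUDA builds, existing framework code typically runs on AMD GPUs without source changes, which is the framework compatibility the ROCm 7 announcement emphasizes.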

In the keynote, Su said, “Openness should be more than just a buzzword.”

The Instinct MI350 exceeded AMD’s five-year goal of a 30-fold improvement in the energy efficiency of AI training and high-performance computing nodes, ultimately delivering a 38-fold improvement. AMD also unveiled a new 2030 goal: a 20-fold increase in rack-scale energy efficiency from a 2024 baseline, which would allow a typical AI model that today requires more than 275 racks to be trained in fewer than one fully utilized rack by 2030, using 95% less electricity.
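The headline efficiency numbers are easy to sanity-check: a 20-fold energy-efficiency gain means the same workload consumes one twentieth of the energy, which is exactly a 95% reduction. (The 275-racks-to-one consolidation additionally reflects compute-density gains the article does not break out.)

```python
# Sanity-check AMD's 2030 rack-scale goal: a 20x energy-efficiency gain
# means the same workload uses 1/20 of today's energy.
efficiency_gain = 20
energy_fraction = 1 / efficiency_gain        # 0.05 of today's energy
reduction_pct = (1 - energy_fraction) * 100  # percentage of energy saved
print(f"electricity reduction: {reduction_pct:.0f}%")  # -> 95%
```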

AMD also announced broad availability of the AMD Developer Cloud for the global developer and open-source communities. Purpose-built for rapid, high-performance AI development, it gives users access to a fully managed cloud environment with the tools and flexibility to get started on AI projects and grow without limits. With ROCm 7 and the AMD Developer Cloud, AMD is lowering barriers and broadening access to next-generation compute. Strategic collaborations with leaders such as Hugging Face, OpenAI and Grok are furthering the cause of powerful open solutions. The announcement drew cheers from some in the audience, as the company said it would give attendees developer credits.

A broad AI ecosystem shows off AMD’s progress

AMD’s ROCm 7

AMD customers discussed how they are using AMD AI solutions today to train leading AI models, power inference at scale, and accelerate AI exploration and development.

Meta detailed how it has leveraged multiple generations of AMD Instinct and EPYC solutions across its data center infrastructure, with the Instinct MI300X broadly deployed for Llama 3 and Llama 4 inference.

Oracle Cloud Infrastructure (OCI) is among the first industry leaders to adopt AMD’s open rack-scale infrastructure with Instinct MI355X GPUs. OCI leverages AMD CPUs and GPUs to deliver balanced, scalable performance for AI clusters, and announced it will offer zettascale AI clusters accelerated by the latest AMD Instinct processors, with up to 131,072 MI355X GPUs, to let customers build, train, and run inference on AI at scale.

AMD says its Instinct GPUs are more power-efficient than Nvidia’s.

Microsoft announced that the Instinct MI300X is now powering both proprietary and open-source models in production on Azure.

Humain discussed its landmark agreement with AMD to build open, scalable, resilient, and cost-efficient AI infrastructure that leverages the full spectrum of computing platforms only AMD can provide, deploying leading models at scale with high performance and high efficiency.

In the keynote, Red Hat described how its expanded collaboration with AMD enables production-ready AI environments, with AMD Instinct GPUs on Red Hat OpenShift AI delivering powerful, efficient AI processing across hybrid cloud environments.

“They can get the maximum benefit from the hardware they are using,” said a Red Hat executive on stage.

Astera Labs highlighted how the open UALink ecosystem accelerates innovation and delivers greater value to customers, and shared plans to offer a comprehensive portfolio of UALink products to support next-generation AI infrastructure.


2025-06-12 18:30:00
