Samsung AI chip is '8x more power efficient than Nvidia GPU'
Samsung Electronics and Naver Corporation have demonstrated the results of their year-long AI semiconductor project, and they are claiming huge gains in performance and efficiency.
The two firms formed a partnership in December 2022 to develop semiconductor solutions tailored for hyperscale artificial intelligence (AI) models. They planned to combine Samsung memory technologies, such as computational storage, processing-in-memory (PIM), processing-near-memory (PNM) and Compute Express Link (CXL), with Naver's HyperCLOVA hyperscale language model.
The resulting AI semiconductor took the form of a Field-Programmable Gate Array (FPGA), and this week the two firms gave a progress report. They revealed that the solution is eight times more power efficient than Nvidia's AI GPU, the dominant product in the market.
"Through our collaboration with Naver, we will develop cutting-edge semiconductor solutions to solve the memory bottleneck in large-scale AI systems," said Jinman Han, Executive Vice President of Memory Global Sales & Marketing at Samsung Electronics. "With tailored solutions that reflect the most pressing needs of AI service providers and users, we are committed to broadening our market-leading memory lineup including computational storage, PIM and more, to fully accommodate the ever-increasing scale of data."