Cerebras Systems launches 'the world’s fastest AI chip'

AI specialist Cerebras has unveiled Wafer Scale Engine 3 – a chip featuring 4 trillion transistors and 900,000 AI cores, purpose-built for training the industry’s largest models.

The WSE-3 is built on a 5 nm process, and powers the Cerebras CS-3 AI supercomputer, which is capable of 125 petaflops of peak AI performance. The new chip can train large AI models of up to 24 trillion parameters without the need for partitioning, thus simplifying the training process.

US-based Cerebras says this is 'the world’s fastest AI chip'. Its key specs are as follows:

  • 4 trillion transistors
  • 900,000 AI cores
  • 125 petaflops of peak AI performance
  • 44GB on-chip SRAM
  • 5nm TSMC process
  • External memory: 1.5TB, 12TB, or 1.2PB
  • Trains AI models up to 24 trillion parameters
  • Cluster size of up to 2048 CS-3 systems

To make the chip user-friendly, it supports PyTorch 2.0, which simplifies programming large language models (LLMs). This means developers can do more with less code, shortening the time it takes to develop new applications.
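As a rough illustration of what that PyTorch 2.0-style workflow looks like, the sketch below defines a toy model and hands it to torch.compile. This is generic PyTorch code, not Cerebras-specific; the model, names, and sizes are invented for the example.

  # Minimal, illustrative PyTorch 2.0 sketch (generic PyTorch, not a Cerebras API).
  # torch.compile wraps an ordinary nn.Module so the same high-level code can be
  # traced and optimized by whatever backend/hardware is available.
  import torch
  import torch.nn as nn

  class TinyLM(nn.Module):
      def __init__(self, vocab=1000, dim=64):
          super().__init__()
          self.embed = nn.Embedding(vocab, dim)  # token embeddings
          self.proj = nn.Linear(dim, vocab)      # project back to vocabulary logits

      def forward(self, tokens):
          return self.proj(self.embed(tokens))

  model = torch.compile(TinyLM())               # PyTorch 2.0 compile entry point
  tokens = torch.randint(0, 1000, (2, 16))      # dummy batch of token ids
  logits = model(tokens)                        # forward pass through the compiled model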

“When we started on this journey eight years ago, everyone said wafer-scale processors were a pipe dream. We could not be more proud to be introducing the third generation of our groundbreaking wafer-scale AI chip,” said Andrew Feldman, CEO and co-founder of Cerebras. “WSE-3 is the fastest AI chip in the world, purpose-built for the latest cutting-edge AI work, from mixture of experts to 24 trillion parameter models. We are thrilled to bring WSE-3 and CS-3 to market to help solve today’s biggest AI challenges.”