Google Reveals AI Supercomputer Is Faster and More Efficient Than Nvidia's


Alphabet Inc's Google recently announced that its AI supercomputer is faster and greener than Nvidia's. The system is built on a custom chip called the Tensor Processing Unit (TPU), which Google uses for more than 90% of its AI model training work. The company strung together more than 4,000 of these fourth-generation TPUs, connected to one another through Google's own custom-built optical switches. The switches make it easier to route around failed components, which makes training much more efficient. The system has already been used to train PaLM, Google's largest publicly disclosed language model, over a span of 50 days.

The company also said its supercomputer is 1.7 times faster and 1.9 times more energy efficient than a comparable system built on Nvidia's A100 chip. Google has hinted that it may be working on a new TPU to compete with Nvidia's H100, but it has shared no details so far.

Stephen Nellis, the Reuters journalist who wrote the original article, is based in San Francisco and covers technology and enterprise markets. He has won several awards for his reporting and was nominated for a Pulitzer Prize in 2019 for a series of stories about patents. His work has been published by a variety of news outlets, including Bloomberg, Bloomberg Law, the San Francisco Chronicle, and USA Today.