At its developer conference yesterday, Google announced its third-generation TPUs (Tensor Processing Units) for AI and machine learning; their pods are eight times more powerful than TPU 2.0 pods, delivering up to 100 petaflops of performance. They're so power-hungry that they require water cooling, something previous TPUs haven't needed. ExtremeTech reports:

So what do we know about TPU 3.0? Not much, but we can make a few educated guesses. According to Google's own documentation, TPU 1.0 was built on a 28nm process node at TSMC, clocked at 700MHz, and consumed 40W of power. Each TPU PCB connected via PCIe 3.0 x16. TPU 2.0 made some significant changes. Unlike TPU v1, which could only handle 8-bit integer operations, TPU v2 added support for single-precision floats, and Google added 8GB of HBM memory to each TPU to improve performance. A TPU v2 cluster delivers 180 TFLOPS of total compute, 64GB of HBM memory, and 2,400GB/s of aggregate memory bandwidth (the last thrown in purely for the purpose of making PC enthusiasts moan with envy).
No word yet on the processors' other capabilities, and they are supposedly still for Google's own use rather than wider adoption. Pichai claims TPU v3 can handle 100 PFLOPS, but that has to be the clustered variant, unless Google is also rolling out a tentative new project we'll call "Google Stellar-Equivalent Thermal Density." We would've expected to hear about it if that were the case. As more companies flock to the AI/ML banner, expect to see more firms throwing their hats into this proverbial ring.
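The "100 PFLOPS must be the clustered variant" inference can be sanity-checked with back-of-envelope arithmetic. The sketch below assumes a 64-board pod, which matches Google's published TPU v2 pod configuration; the 180 TFLOPS per-cluster figure comes from the article itself.

```python
# Back-of-envelope check: is 100 PFLOPS plausibly a pod-level figure?
# BOARDS_PER_POD is an assumption based on Google's published TPU v2
# pod configuration; BOARD_TFLOPS comes from the article above.
BOARD_TFLOPS = 180    # per TPU v2 board/cluster (from the article)
BOARDS_PER_POD = 64   # assumed TPU v2 pod size

v2_pod_pflops = BOARD_TFLOPS * BOARDS_PER_POD / 1000  # TFLOPS -> PFLOPS
print(f"TPU v2 pod: {v2_pod_pflops:.2f} PFLOPS")   # ~11.5 PFLOPS
print(f"8x that:    {8 * v2_pod_pflops:.1f} PFLOPS")  # ~92 PFLOPS
```

Eight times a ~11.5 PFLOPS v2 pod lands near the quoted 100 PFLOPS, so the "eight times more powerful" and "100 petaflops" claims are consistent only at pod scale, not for a single board.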