
Tensor Processing Units: enabling the next generation of fast, affordable AI


Last update February 25, 2019 by Luca Ferretti

Legend says it was written on the back of a napkin. In 2013 Jeff Dean, Google's Head of AI, did some calculations and realized that if every Android user in the world used their smartphone's speech-to-text feature for just one minute a day, they would consume more compute than all of Google's data centres around the world could provide (at that time).

Part of the reason for this situation was, and still is, the pace at which general-purpose processors and chips improve (Moore's law), set against the exponential growth of use cases, devices and connectivity.

From here emerges today's need for more specialised, domain-specific hardware, whether it is for photo recognition via AI or for query processing in the big data world.

Google's TPUs are domain-specific hardware for machine learning: the project started in 2013 and was first deployed in 2015 with TPU v1. Yufeng Guo, Developer Advocate at Google, spoke at Codemotion Milan 2018 about the characteristics and evolution of TPUs and how this product is a real accelerator for machine learning.

Early days of TPU v1

The v1 is still in use, but was never released publicly. It is essentially a PCI Express card, so it fits into existing datacenter hardware, and was used for Search (search ranking, speech recognition), Translate (text, image and speech) and Photos (photo search), i.e. the Google products performing large ML tasks for a lot of users at that time.

It was an early-stage product; for example, it only performs predictions (a.k.a. inference), not training. Nevertheless, it laid the foundation for a different kind of machine learning hardware.

In fact, although the specifications might suggest that TPU v1 was a slow chip compared with other available processors, its performance in terms of computation per watt tells a different story.

The starting point for the chip and board design is the fact that neural networks are basically long series of multiplication and addition operations. Hence two insights: on the one hand, these operations can be carried out as matrix calculations; on the other, neural networks tolerate a certain amount of "fuzzy" maths, i.e. reduced numerical precision.
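To make that concrete, here is a minimal sketch (plain NumPy, with arbitrary sizes) of a dense layer expressed as exactly that: one matrix multiplication plus one addition.

```python
import numpy as np

# A toy neural network "layer": outputs = inputs . weights + bias.
# This multiply-and-add pattern is what the TPU is built to accelerate.
inputs = np.random.rand(8, 256).astype(np.float32)     # batch of 8 examples, 256 features each
weights = np.random.rand(256, 128).astype(np.float32)  # 256 inputs -> 128 units
bias = np.random.rand(128).astype(np.float32)

outputs = inputs @ weights + bias   # one matrix multiplication plus one addition
print(outputs.shape)                # (8, 128)
```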

In designing the v1 chip, the choice was to invest in a processor capable of running matrix operations quickly and to reduce precision through quantization, operating on 8-bit integers instead of 32-bit floating-point numbers. Moreover, streaming data through a grid of multipliers (a systolic array) reduces the cycles spent reading and storing data. Among other things, these design choices allow roughly 25 times the number of multipliers of comparable processors, in a smaller chip, with less heat produced and less silicon needed.
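As an illustration of the idea (a simplified symmetric scheme, not the exact one used in the TPU v1 hardware), 8-bit quantisation can be sketched like this:

```python
import numpy as np

def quantize_int8(x):
    """Map float32 values onto 8-bit integers using a single scale factor."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 values from the int8 representation."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
# The round trip loses some precision: "fuzzy" maths the network can tolerate.
print(np.max(np.abs(w - dequantize(q, scale))))
```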

So, although the v1 runs at only 700 MHz and has no threading, no multiprocessing and no branch prediction, it is able to process a very large amount of data per second.

Evolving into v2 and v3 for learning

Google's teams iterated quickly and produced TPU v2 in 2017. It is a bigger board, with four processors instead of just one and a large heatsink, and it is loaded into dedicated infrastructure and racks in the datacentre.

TPU v2 was designed to perform both inference and learning (training) tasks. The computations needed for training have different requirements from those for prediction: more precision is needed (hence a less aggressive quantisation of the data), while maintaining and possibly improving speed and the space required.

So, how can you get good performance without losing the precision needed to keep the training from falling apart? Simple: invent a new float type, bfloat16. bfloat, or "brain float", fits into 16 bits the same full range of values as a standard 32-bit float (roughly from 1E-38 to 1E+38), by giving up most of the significant digits.
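To see why this works, recall that a float32 has 1 sign bit, 8 exponent bits and 23 mantissa bits; bfloat16 simply keeps the upper 16 bits (sign, the full 8-bit exponent, and 7 mantissa bits). A rough sketch in Python shows that the range survives while the significant digits shrink (real hardware typically rounds rather than just truncating, as done here):

```python
import struct

def to_bfloat16_bits(x):
    """Truncate a float32 to bfloat16 by keeping only its upper 16 bits."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    return bits >> 16

def from_bfloat16_bits(b):
    """Rebuild a float32 from the 16 retained bits (lower bits become zero)."""
    return struct.unpack('>f', struct.pack('>I', b << 16))[0]

for x in (3.141592653589793, 1e-38, 1e38):
    approx = from_bfloat16_bits(to_bfloat16_bits(x))
    print(x, '->', approx)   # same order of magnitude, far fewer significant digits
```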

In 2018 Google released TPU v3, with improvements to the heat-sink system and, of course, to performance.

Use a TPU Pod

TPU boards are used in a distributed system known as a TPU Pod, which hosts 64 TPUs. All those TPUs are connected to their own dedicated hosts: the boards themselves have no CPU, no main memory, no disk.

This configuration allows you to use a full Pod, or a subsection of a Pod, as a single machine. Your code for one TPU is essentially the same as for all 64 TPUs in a Pod; dedicated compilers take care of the distribution. The advantage is that you can choose how many per-hour paid resources to allocate, reducing or increasing the time required for the training phase. A training task that would take about 10 hours on a single TPU drops to roughly 30 minutes on half a Pod.

Programming for TPUs with the TensorFlow framework has also been made simpler by Google. The same codebase and the same API can be used to run the code on different targets (local workstation, TPU, TPU Pod); you just switch to the specialised TPU estimator.
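The exact API has evolved since the talk (TPUEstimator in TensorFlow 1.x, distribution strategies later on), but as a rough sketch of what targeting a Cloud TPU looks like with the tf.distribute API today (the TPU address below is only a placeholder):

```python
import tensorflow as tf

# Placeholder address; on Cloud TPU or Colab the resolver finds the TPU for you.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://10.0.0.2:8470')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# Swapping this strategy is the main change versus running on CPU/GPU.
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():   # variables created here are placed on the TPU cores
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation='relu', input_shape=(256,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(optimizer='adam',
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# model.fit(dataset, ...) would then run on the TPU (or a Pod slice) unchanged.
```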

TPUs can be used effectively when there is a need for tons of matrix operations, or for large models with large batch sizes. For instance, they are really good at processing image data, but you can also take your own non-image data and turn it into formats and structures that are better suited to being pushed through the TPUs.

The heavy optimisation of matrix operations on the TPU also means that, to get the best performance, it is worth considering the actual size of the matrix units in the chip. If you have to process a batch of images that are all the same size in pixels except one that is just one pixel bigger, it is more efficient to crop away that extra pixel before feeding the data to the TPUs.
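For example, with TensorFlow you might crop (or pad) every image to a common size before batching; the 224x224 target here is just an arbitrary example:

```python
import tensorflow as tf

def standardize(image, height=224, width=224):
    # Crops or pads each image to exactly height x width, so every example
    # in the batch produces matrices of the same shape on the TPU.
    return tf.image.resize_with_crop_or_pad(image, height, width)

odd_image = tf.zeros([225, 224, 3])      # one pixel too tall
print(standardize(odd_image).shape)      # (224, 224, 3)
```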

While this is not like programming chips in microcode, it is also true that this kind of domain-specific hardware may require some knowledge of how it works internally to get the best out of it.
