Deep learning is coming to a chip near you


Deep learning has become one of the most influential trends in modern software technology. From a conceptual standpoint, deep learning is a discipline of machine learning that focuses on modeling data using connected graphs with multiple processing layers. In the last few years, deep learning has become a pivotal technology for use cases such as image recognition, natural language processing, and even some of the capabilities of self-driving vehicles. The popularity of deep learning has expanded beyond software, and the industry is now starting to talk about the first generation of hardware with deep learning capabilities: a deep learning chip.
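To make the idea of "multiple processing layers" concrete, the sketch below runs a forward pass through a tiny two-layer feed-forward network. The layer sizes, random weights, and function names here are purely illustrative, not taken from any particular framework or model.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Elementwise nonlinearity applied between layers
    return np.maximum(0, x)

def forward(x, w1, b1, w2, b2):
    hidden = relu(x @ w1 + b1)  # first processing layer
    return hidden @ w2 + b2     # second processing layer

x = rng.normal(size=(1, 4))                    # one input with 4 features
w1 = rng.normal(size=(4, 8)); b1 = np.zeros(8) # illustrative weights
w2 = rng.normal(size=(8, 2)); b2 = np.zeros(2)

out = forward(x, w1, b1, w2, b2)
print(out.shape)  # (1, 2)
```

Each layer is just a matrix multiplication plus a nonlinearity; chips like the TPU specialize in exactly these dense tensor operations.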

A few months ago, at its I/O conference, Google announced the design of an application-specific integrated circuit (ASIC) focused on deep learning capabilities and neural nets. Google called the chip the Tensor Processing Unit (TPU) because it is designed to accelerate TensorFlow, Google's open source deep learning framework. While Google's TPU is not the first industry attempt to create a deep learning chip, it is certainly the most famous one. But is a deep learning chip a good idea?

The answer depends on where we stand in the evolution of deep learning technologies. While transferring deep learning capabilities onto hardware is certainly a compelling concept, there are doubts about whether this is the right moment in the technology's evolution to pursue such an endeavor. Looking beyond the hype, we can identify solid arguments both for and against the creation of a deep learning chip at this stage in the industry.

Read the source article at