Google jumped into the complicated and very expensive business of building its own chips to give itself an edge in artificial intelligence. The chip, called the Tensor Processing Unit, is optimized to run its deep learning algorithms. And to give itself a leg up in its nascent cloud business, the tech giant will start making the hardware available to outside customers of Google Cloud.
Now, Microsoft is announcing Project Brainwave to get deep learning applications running quickly and efficiently across its sprawling data centers.
Microsoft is taking a slightly different approach than Google. Instead of building a chip optimized for a very specific set of algorithms, Microsoft is using chips called field-programmable gate arrays (FPGAs), which can be reprogrammed after manufacturing.
The FPGAs, built by Intel-owned Altera, give the company more flexibility than a dedicated chip (like the one Google has), said Microsoft distinguished engineer Doug Burger. And flexibility is important when the latest advancements in deep learning are still emerging at an almost monthly (or even weekly) pace.
“We wanted to build something bigger, more disruptive and more general than a point solution,” Burger said in an interview at the Hot Chips conference, where Project Brainwave was announced.
Also unlike Google, Microsoft will support multiple deep learning frameworks, including Microsoft’s CNTK, Google’s TensorFlow and Facebook’s Caffe2.
While FPGAs are flexible enough for many kinds of applications, they’re not known for great performance. But Burger said Microsoft has tweaked the FPGAs enough to make them competitive with (and sometimes better than) dedicated chips. Running a deep learning model called gated recurrent units, Microsoft’s hardware demonstrated nearly 40 teraflops (trillions of floating-point operations per second) of performance and handled each request in under one millisecond.
Project Brainwave is focused on what Microsoft calls real-time AI: serving predictions from already-trained deep learning models (often called inference), not training them. Training is the more compute-intensive phase of deep learning, in which a model’s parameters are tuned by running it over massive amounts of data.
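To make the distinction concrete, here is a minimal sketch (hypothetical, not Microsoft’s code) contrasting the two phases for a toy one-parameter model: training loops over a dataset many times updating the model, while inference, the part Brainwave targets, is a single cheap forward pass.

```python
# Toy linear model y = w * x, illustrating training vs. inference.

def train(data, epochs=100, lr=0.01):
    """Training: repeatedly sweep the dataset, nudging the weight
    by gradient descent on a squared-error loss."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = w * x
            grad = 2 * (pred - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

def infer(w, x):
    """Inference ('real-time AI'): one forward pass through the
    already-trained model -- the workload Brainwave accelerates."""
    return w * x

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x
w = train(data)          # compute-heavy, done offline
print(infer(w, 5.0))     # cheap, latency-sensitive
```

The asymmetry is the point: training amortizes its cost once, but inference must run on every user request, which is why sub-millisecond latency matters for serving.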
For now, Project Brainwave will be available only for Microsoft’s internal AI services, though the company may eventually offer the system to outside companies through its cloud services.
Microsoft has been investing in FPGAs for years now — initially for areas like security and Bing search. The company said it has the largest installation of FPGAs in the world. A year ago, the Project Brainwave team was tasked with taking that existing FPGA infrastructure and making it work well with the latest deep learning advances.
“We’ve already deployed FPGAs at a massive scale,” Burger said. “So, if you think about it, the technology to run deep learning is already deployed worldwide with Azure.”
Deep learning is a branch of machine learning that has started taking off across the tech industry. It’s making big advancements in areas like speech recognition, computer vision and translation, among others. The underlying hardware infrastructure is now changing to accommodate this shift. Nvidia’s graphics processors have a huge lead in this still-early market, and the company’s stock is reaping the benefits, with its shares nearly tripling over the past year. But a number of new chips, from the likes of Google as well as a slew of startups, are emerging.