Saturday, May 13, 2017

Microsoft Outlines Hardware Architecture for Deep Learning on Intel FPGAs

At Build, Microsoft’s annual developer conference taking place this week, Microsoft Azure CTO Mark Russinovich disclosed major advances in Microsoft’s hyperscale deployment of Intel® field programmable gate arrays (FPGAs). These advances have resulted in the industry’s fastest public cloud network, and in new technology for accelerating Deep Neural Networks (DNNs), which process information in a manner loosely inspired by the human brain.

The advances offer performance, flexibility, and scale, using ultra-low-latency networking to leverage the world’s largest cloud investment in FPGAs. The resulting increases in networking speed will help businesses, governments, healthcare providers, and universities better process Big Data workloads. Azure’s FPGA-based Accelerated Networking reduces inter-virtual-machine latency by up to 10x while freeing Intel® Xeon® processors for other tasks.
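Accelerated Networking is exposed to Azure customers as a per-NIC setting. As a rough illustration only, the sketch below shows how that flag might be set with the current azure-identity and azure-mgmt-network Python SDKs; this is not Microsoft’s published code, and all resource names, IDs, and the region are placeholders.

    # Minimal sketch: enabling Azure Accelerated Networking on a NIC.
    # Assumes the azure-identity and azure-mgmt-network packages are installed;
    # "my-rg", "my-nic", and the subnet ID are hypothetical placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient

    credential = DefaultAzureCredential()
    client = NetworkManagementClient(credential, subscription_id="<subscription-id>")

    poller = client.network_interfaces.begin_create_or_update(
        resource_group_name="my-rg",
        network_interface_name="my-nic",
        parameters={
            "location": "eastus",
            # Routes the NIC's traffic through the hardware-offloaded path
            "enable_accelerated_networking": True,
            "ip_configurations": [{
                "name": "ipconfig1",
                "subnet": {"id": "<subnet-resource-id>"},
            }],
        },
    )
    nic = poller.result()
    print(nic.enable_accelerated_networking)

With the flag enabled, network processing is offloaded from the host’s software switch to dedicated hardware, which is what allows the latency reduction described above without consuming Xeon processor cycles.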