Workshop on Computer Architecture for Machine Learning - ICRI-CI

COMPUTER ARCHITECTURE FOR MACHINE LEARNING ISCA-42

Portland OR, USA

June 14, 2015

Organizers: Boris Ginsburg, Ronny Ronen (Intel Labs); Olivier Temam (Google)

Program

09:00-09:20  Boris Ginsburg (Intel): Opening Remarks

Session: Hardware Acceleration for Deep Learning
09:20-09:45  Amir Khosrowshahi (Nervana Systems): Computer Architectures for Deep Learning
09:45-10:10  Eric Chung (Microsoft Research): Accelerating Deep Convolutional Neural Networks Using Specialized Hardware in the Datacenter
10:10-10:35  Vinayak Gokhale (Purdue University): A Hardware Accelerator for Convolutional Neural Networks
10:35-11:00  Paul Burchard (Goldman Sachs): Hardware Acceleration for Communication-Intensive Algorithms

11:00-11:30  COFFEE

Session: Machine Learning Workloads Analysis
11:30-12:00  Jonathan Pearce (Intel Labs): You Have No (Predictive) Power Here, SPEC!
12:00-12:30  Scott Beamer (UC Berkeley): Graph Processing Bottlenecks

12:30-13:30  LUNCH

Session: Neuromorphic Engineering
13:30-14:00  James E. Smith (Univ. of Wisconsin–Madison): Biologically Plausible Spiking Neural Networks
14:00-14:30  Giacomo Indiveri (Univ. of Zurich and ETH Zurich): Neuromorphic Circuits for Building Autonomous Cognitive Systems
14:30-15:00  Yiran Chen (Univ. of Pittsburgh): Hardware Acceleration for Neuromorphic Computing: An Evolving View
15:00-15:30  Mikko Lipasti (Univ. of Wisconsin–Madison): Mimicking the Self-Organizing Properties of the Visual Cortex

15:30-16:00  COFFEE

Session: Hardware Acceleration for Machine Learning
16:00-16:30  Shai Fine (Intel): Machine Learning Building Blocks
16:30-17:00  Chunkun Bo (University of Virginia): String Kernel Testing Acceleration using the Micron Automata Processor
17:00-17:30  Ran Ginosar (Technion): Accelerators for Machine Learning of Big Data

Paradigm Shift: From Computers to Learning Machines

We are at the very beginning of a new computing era.

Traditional Computer Architecture

1946 - ENIAC (Electronic Numerical Integrator And Computer), designed by J. P. Eckert and J. Mauchly.

First Programming Model

ENIAC was programmed by setting switches and inserting patch leads to route data and control signals between the various functional units.

Von Neumann Architecture

EDVAC, a stored-program computer:
• CPU with ALU, control unit, and registers
• Memory that stores both data and program
• Synchronous execution

1948-2015: Same Old Programming Model

Programs = Algorithms + Data Structures
• Algorithms are translated into a linear sequence of instructions, which are executed synchronously
• Data structures are mapped onto linear memory
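This mapping of data structures onto linear memory can be made concrete with a small sketch. The record layout below (two ints and a double, names and sizes chosen purely for illustration) mimics what a compiler does: each element lives at a fixed byte offset in a flat buffer, using only Python's standard struct module.

```python
import struct

# A "record" data structure flattened into linear memory:
# two int32 fields followed by one float64, packed back to back.
# Layout and field choice are illustrative, not from the slides.
RECORD_FMT = "<iid"                       # little-endian: int, int, double
RECORD_SIZE = struct.calcsize(RECORD_FMT)  # 16 bytes per record

buf = bytearray(3 * RECORD_SIZE)           # linear memory for 3 records

def store(buf, index, a, b, x):
    # Compiler-style mapping: element i lives at a fixed linear offset.
    struct.pack_into(RECORD_FMT, buf, index * RECORD_SIZE, a, b, x)

def load(buf, index):
    return struct.unpack_from(RECORD_FMT, buf, index * RECORD_SIZE)

store(buf, 1, 7, 8, 3.5)
print(load(buf, 1))  # (7, 8, 3.5)
```

Address arithmetic (`index * RECORD_SIZE`) is exactly the "linear memory" assumption the slide describes: the structure has no identity beyond its offset in a flat byte array.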

Paradigm Shift: From Formulas and Algorithms to Machine Learning

Machine Learning – breakthroughs in computer vision, natural language processing, speech recognition, robotics, self-driving cars, and more.

Machine Learning Building Blocks

(Slide diagram, reconstructed as layers; "?" marks open questions in the original:)
• Machine learning models: Convolutional NN, Recurrent NN, MLP, Auto-Encoders, DBN, GMM, HMM, SVM, kNN
• Computational kernels: Dense/Sparse Matrix, Graph Algorithms, ?
• Software primitives: Dense/Sparse BLAS & FFT, "Think like a Vertex", ?
• Hardware: Matrix Co-processor, Graph Accelerator, ?
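The top-to-bottom mapping in this diagram can be sketched in a few lines: a fully connected (MLP) layer reduces to one dense matrix-vector product, which is why BLAS-style kernels, and ultimately matrix co-processors, are natural building blocks. A minimal NumPy sketch (layer sizes arbitrary, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

# One fully connected layer: y = relu(W @ x + b).
# Nearly all of the arithmetic is the dense matrix-vector
# product W @ x, i.e. a single BLAS GEMV call underneath.
W = rng.standard_normal((256, 784))   # weight matrix
b = rng.standard_normal(256)          # bias vector
x = rng.standard_normal(784)          # input vector

y = np.maximum(W @ x + b, 0.0)        # ReLU activation
print(y.shape)  # (256,)
```

Stacking such layers changes nothing structurally; the workload stays dominated by dense matrix kernels, which is the argument for the "Matrix Co-processor" box at the bottom of the diagram.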

Deep Learning – Just the First Step…

A new way to develop applications: instead of explicit algorithms and formulas, multi-layer neural networks trained on large data.

Paradigm Shift: Neuromorphic Engineering

Traditional Computers:
• Linear execution model
• Flat linear memory model
• Synchronous
• Numerically precise
• Reliable

Neuromorphic:
• Parallel execution
• Compute-in-memory
• Asynchronous
• Probabilistic computing
• Unreliable elements
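The neuromorphic side of this contrast is easiest to see in the spiking-neuron abstraction that the Neuromorphic Engineering session talks build on. A minimal leaky integrate-and-fire sketch (parameter values purely illustrative): state evolves asynchronously with the input, and the output is a sequence of discrete spike events rather than a numerically precise result.

```python
# Leaky integrate-and-fire neuron: a common abstraction in
# neuromorphic hardware. Leak and threshold values are illustrative.
def lif_run(inputs, leak=0.9, threshold=1.0):
    """Simulate one LIF neuron over a sequence of input currents.

    Returns the list of time steps at which the neuron spiked.
    """
    v = 0.0          # membrane potential
    spikes = []
    for t, i_in in enumerate(inputs):
        v = leak * v + i_in      # leak old charge, integrate new input
        if v >= threshold:       # fire when the threshold is crossed...
            spikes.append(t)
            v = 0.0              # ...and reset the potential
    return spikes

print(lif_run([0.4, 0.4, 0.4, 0.0, 0.8, 0.8]))  # [2, 5]
```

Because information is carried by spike timing, such elements tolerate imprecision and unreliability far better than a synchronous ALU, which is the trade the right-hand column describes.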

What does it take to make this paradigm shift? A lot of things, including:
• Machine Learning Workloads Analysis
• Hardware Acceleration for Machine Learning
• Hardware Acceleration for Deep Learning
• Neuromorphic Engineering

In short – our program…
