Workshop on
Benchmarking Machine Learning Workloads on
Emerging Hardware

To be held with the Fourth Conference on Machine Learning and Systems (MLSys)

Virtual conference

April 9, 2021

NEW: Accepted papers and presentations are available here.

Program

8:00 AM - 8:10 AM Introduction - Tom St. John (Cruise)

8:10 AM - 8:50 AM "System-Level Design of Machine Learning Accelerators with the Open Source ESP Platform" - Luca Carloni (Columbia)

8:50 AM - 9:00 AM Morning Break 1

9:00 AM - 9:40 AM "Designing and Optimizing AI Systems for Deep Learning Recommendation and Beyond" - Carole-Jean Wu (Facebook)

9:40 AM - 10:00 AM Morning Break 2

10:00 AM - 10:30 AM "Being-Ahead: Benchmarking and Exploring Accelerators for Hardware-Efficient AI Deployment" - Xiaofan Zhang (UIUC), Hanchen Ye (UIUC), Deming Chen (UIUC)

10:30 AM - 10:50 AM "Benchmarking Machine Learning Inference in FPGA-Based Accelerated Space Applications" - Amir Raoofy (TU Munich), Gabriel Dax (TU Munich), Max Ghiglione (Airbus Defense and Space GmbH), Martin Langer (Orbital Oracle Technologies GmbH), Carsten Trinitis (TU Munich), Martin Werner (TU Munich), Martin Schulz (TU Munich)

10:50 AM - 11:30 AM "Machine Learning Tools: Skyline and RL-Scope" - Gennady Pekhimenko and James Gleeson (University of Toronto)

11:30 AM - 1:30 PM Lunch

1:30 PM - 2:10 PM "Benchmarking Scientific ML on Disaggregated Cognitive Simulation HPC Platforms" - Brian Van Essen (LLNL)

2:10 PM - 2:50 PM "Challenges with Running DNN Workloads with Hardware Simulators" - David Kaeli (Northeastern University); "GNNMark: A Benchmark for GNN Training" - Trinayan Baruah (Northeastern University)

2:50 PM - 3:20 PM Afternoon Break

3:20 PM - 4:50 PM Panel Session - Lizy John (UT Austin), Tushar Krishna (Georgia Tech), Peter Mattson (Google), Brian Van Essen (LLNL), Venkatram Vishwanath (ANL), Carole-Jean Wu (Facebook), David Kaeli (Northeastern)

4:50 PM - 5:00 PM Conclusion - Murali Emani (ANL)

About

With evolving system architectures, hardware and software stacks, diverse machine learning (ML) workloads, and data, it is important to understand how these components interact with one another. Well-defined benchmarking procedures help evaluate and reason about the performance gains of ML workload-to-system mappings. We welcome novel submissions on benchmarking machine learning workloads from all disciplines, such as image and speech recognition, language processing, drug discovery, simulations, and scientific applications. Key problems that we seek to address are: (i) which representative ML benchmarks cater to workloads seen in industry, national labs, and interdisciplinary sciences; (ii) how to characterize ML workloads based on their interaction with hardware; (iii) which novel aspects of hardware, such as heterogeneity in compute, memory, and networking, will drive their adoption; and (iv) how to model performance and project it onto next-generation hardware. Along with selected publications, the workshop program will also feature experts in these research areas presenting their recent work and potential directions to pursue.

The program details from the workshop held last year can be found here.

Call for Papers

We solicit both full papers (8-10 pages) and short/position papers (4 pages). Submissions are not double-blind (author names must be included). The page limit includes figures, tables, and appendices, but excludes references. Please use the standard LaTeX or Word ACM templates. All submissions must be made via EasyChair (submission website: here). Each submission will be reviewed by at least three reviewers from the program committee. Papers will be reviewed for novelty, quality, technical strength, and relevance to the workshop. All accepted papers will be published here.

Important dates


Submission Deadline: March 5, 2021
Acceptance Notification: March 15, 2021
Camera-ready version due: March 31, 2021
Workshop Date: April 9, 2021

All deadlines are at midnight anywhere on Earth (AoE) and are firm.

Organization

Organizing Committee

  • Tom St. John, Cruise (tomstjohn617@gmail.com)

  • Murali Emani, Argonne National Laboratory (memani@anl.gov)

Program Committee

Colby Banbury (Harvard)
Yufei Ding (UC Santa Barbara)
Trevor Gale (Stanford)
Rong Ge (Clemson)
Amir Gholami (UC Berkeley)
Dimitris Gizopoulos (University of Athens)
Yingyan Lin (Rice University)
Connie Yingyu Miao (Google)
Ken Shiring (MediaTek)
Yu Emma Wang (Google)
Steve Farrell (NERSC, LBNL)
Bilge Acun (Facebook)
Prasanna Balaprakash (ANL)
Shuaiwen Song (The University of Sydney)
Armin Runge (Bosch)