[MLBench'23] Fourth Workshop on
Benchmarking Machine Learning Workloads on
Emerging Hardware

To be held with the Sixth Conference on Machine Learning and Systems (MLSys) on June 8, 2023

Miami, FL, USA

Program

8:00 AM - 8:10 AM: Introduction

8:10 AM - 8:55 AM: Invited Talk - "Towards More Efficient Vision Transformers: From Novel Few-Shot Parameter-Efficient Tuning to New Linear-Angular Attention", Yingyan (Celine) Lin, Georgia Tech

8:55 AM - 9:20 AM: "Understanding Time Variations of DNN Inference in Autonomous Driving" - Liangkai Liu (Wayne State University), Yanzhi Wang (Northeastern University), Weisong Shi (University of Delaware)

9:20 AM - 9:45 AM: "Chakra: Advancing Performance Benchmarking and Co-Design Using Standardized Execution Traces" - Srinivas Sridharan (Meta), Taekyung Heo (Georgia Tech), Louis Feng, Zhaodong Wang, Matt Bergeron, Wenyin Fu, Shengbao Zheng, Brian Coutinho (Meta), Saeed Rashidi, Changhai Man, Tushar Krishna (Georgia Tech)

9:45 AM - 10:10 AM: "Performance Analysis of Binary Neural Networks Deployed in NVM Crossbar Architectures" - Ruirong Huang, Zichao Yue, Caroline Huang (Cornell), Janarbek Matai (Qualcomm), Zhiru Zhang (Cornell)

10:10 AM - 10:30 AM: Morning Break

10:30 AM - 11:15 AM: Invited Talk - "ML Workloads in AR/VR and Their Implication to ML System Design", Hyoukjun Kwon (UC Irvine)

11:15 AM - 12:00 PM: Invited Talk - "Benchmarks for Developing Generalist Agents: Closing the Gap in Real-World Autonomous Decision-Making", Vijay Janapa Reddi (Harvard)

12:00 PM - 1:30 PM: Lunch

1:30 PM - 2:15 PM: Invited Talk - "Towards AI Workflow Benchmarking with Consideration of Energy Efficiency on Leadership Computing Platforms", Wes Brewer (Oak Ridge National Laboratory)

2:15 PM - 3:00 PM: Invited Talk - "Accelerating LLMs with Speculative Inference and Token Tree Verification", Zhihao Jia (Carnegie Mellon University)

3:00 PM - 3:30 PM: Afternoon Break

3:30 PM - 4:30 PM: Panel Session - Wes Brewer (ORNL), Zhihao Jia (CMU), Hyoukjun Kwon (UCI), Yingyan (Celine) Lin (GA Tech), Vijay Reddi (Harvard)

4:30 PM - 4:40 PM: Conclusion

About

With evolving system architectures, hardware and software stacks, diverse machine learning (ML) workloads, and data, it is important to understand how these components interact with each other. Well-defined benchmarking procedures help evaluate and reason about the performance gains obtained from mapping ML workloads to systems. We welcome all novel submissions on benchmarking machine learning workloads from all disciplines, such as image and speech recognition, language processing, drug discovery, simulation, and scientific applications.

Key problems that we seek to address are:
(i) which representative ML benchmarks cater to workloads seen in industry, national labs, and interdisciplinary sciences;
(ii) how to characterize ML workloads based on their interaction with hardware;
(iii) which novel aspects of hardware, such as heterogeneity in compute, memory, and networking, will drive their adoption;
(iv) how to model performance and project it to next-generation hardware.

Along with the selected papers, the workshop program features invited experts in these research areas presenting their recent work and potential directions to pursue.

Program details from the previous editions of the workshop are available at MLSys'22, MLSys'21, and MLSys'20.

Call for Papers

We solicit both full papers (8-10 pages) and short/position papers (4-6 pages). Submissions are not double-blind (author names must be included). The page limit includes figures, tables, and appendices but excludes references. Please use the standard LaTeX or Word ACM templates. All submissions must be made via EasyChair (submission website: here). Each submission will be reviewed by at least three reviewers from the program committee. Papers will be evaluated for novelty, quality, technical strength, and relevance to the workshop. All accepted papers will be published here.

Important dates


Submission Deadline: April 3, 2023
Acceptance Notification: April 14, 2023
Camera Ready Submission Deadline: May 19, 2023
Workshop date: June 8, 2023
All deadlines are at midnight Anywhere on Earth (AoE) and are firm.

Organization

Organizing Committee

  • Tom St. John, OctoML (tomstjohn617@gmail.com)

  • Murali Emani, Argonne National Laboratory (memani@anl.gov)

  • Wenqian Dong, Florida International University (wdong@fiu.edu)

Program Committee

Oana Balmau (McGill University)
Steven Farrell (Lawrence Berkeley National Laboratory)
Srivatsan Krishnan (Harvard)
Jae W. Lee (Seoul National University)
Dong Li (UC Merced)
Qian Li (Stanford)
Sid Raskar (Argonne National Laboratory)
Karthik Swaminathan (IBM)
Dingwen Tao (Indiana University)
Yu "Emma" Wang (Google)
Bo Yuan (Rutgers)