To be held along with the Third Conference on Machine Learning and Systems (MLSys)
With evolving system architectures, hardware and software stacks, diverse machine learning (ML) workloads, and data, it is important to understand how these components interact with one another. Well-defined benchmarking procedures help evaluate and reason about the performance gains achieved by mapping ML workloads to systems. We welcome novel submissions on benchmarking machine learning workloads from all disciplines, such as image and speech recognition, language processing, drug discovery, simulations, and scientific applications. Key problems that we seek to address are: (i) which representative ML benchmarks cater to workloads seen in industry, national labs, and interdisciplinary sciences; (ii) how to characterize ML workloads based on their interaction with hardware; (iii) which novel aspects of hardware, such as heterogeneity in compute, memory, and networking, will drive their adoption; (iv) how to model performance and project it to next-generation hardware. In addition to the selected publications, the workshop program will feature experts in these research areas presenting their recent work and potential directions to pursue.
We solicit both full papers (8-10 pages) and short/position papers (4 pages). Submissions are not double-blind (author names must be included). The page limit includes figures, tables, and appendices, but excludes references. Please use the standard LaTeX or Word ACM templates. All submissions must be made via EasyChair (submission website: here). Each submission will be reviewed by at least three reviewers from the program committee. Papers will be reviewed for novelty, quality, technical strength, and relevance to the workshop. All accepted papers will be made available online, and authors of selected papers will be invited to submit extended versions to a journal after the workshop.
Paper submission deadline: January 15, 2020
Author Notification: January 27, 2020
Camera-ready papers due: February 28, 2020
Workshop Date: March 4, 2020
All deadlines are at midnight Anywhere on Earth (AoE) and are firm.
Organizing Committee
Murali Emani, Argonne National Laboratory/ALCF (memani@anl.gov)
Tom St John, Tesla Inc. (tomstjohn617@gmail.com)
Program Committee
Gregory Diamos, Landing AI
Cody Coleman, Stanford University
Farzad Khorasani, Tesla
Trevor Gale, Stanford University
Ilya Sharapov, Cerebras Systems
Lizy John, UT Austin
Vijay Janapa Reddi, Harvard University
Shuaiwen Leon Song, University of Sydney
Xiaoming Li, University of Delaware
Zheng Wang, University of Leeds
Prasanna Balaprakash, Argonne National Laboratory
Ramesh Radhakrishnan, Dell
Steve Farrell, Lawrence Berkeley National Laboratory/NERSC
Nikoli Dryden, ETH Zurich
Jesmin Jahan Tithi, Intel
Rong Ge, Clemson University
Lisa Wu Wills, Duke University