First International Workshop on
Benchmarking Machine Learning Workloads on Emerging Hardware

To be held along with Third Conference on Machine Learning and Systems (MLSys)

Room 6, Austin Convention Center

Austin, TX

March 4, 2020

NEW: Accepted papers are available here.
NEW: Everyone who plans to attend this workshop must register for the main conference.

Program

    09:00 AM - 09:10 AM Introduction: Tom St. John (Tesla Inc.)

    09:10 AM - 09:50 AM Keynote Address: “MLPerf Inference Deep Dive”, Vijay Janapa Reddi (Harvard University)

    10:00 AM - 10:30 AM Morning break

    10:30 AM - 10:50 AM “Precious: Resource-Demand Estimation for Embedded Neural Network Accelerators”
    Stefan Reif (FAU Erlangen-Nürnberg), Benedict Herzog (FAU Erlangen-Nürnberg), Judith Hemp (FAU Erlangen-Nürnberg), Timo Hönig (FAU Erlangen-Nürnberg), Wolfgang Schröder-Preikschat (FAU Erlangen-Nürnberg)

    10:50 AM - 11:05 AM “Benchmarking Machine Learning Workloads in Structural Bioinformatics Applications”
    Heng Ma (Argonne National Laboratory), Austin Clyde (Argonne National Laboratory), Venkatram Vishwanath (Argonne National Laboratory), Debsindhu Bhowmik (Oak Ridge National Laboratory), Arvind Ramanathan (Argonne National Laboratory), Shantenu Jha (Rutgers University, Brookhaven National Laboratory)

    11:05 AM - 11:20 AM “Benchmarking Alibaba Deep Learning Applications Using AI Matrix”
    Wei Zhang (Alibaba Group), Wei Wei (Alibaba Group), Lingjie Xu (Alibaba Group), Lingling Jin (Alibaba Group)

    11:20 AM - 12:00 PM Invited Talk, “Formula One vs. Family Car, or the Need for Broader, Generalizable Benchmarks”, Natalia Vassilieva (Cerebras Systems)

    12:00 PM - 02:00 PM Lunch (on your own)

    02:00 PM - 02:40 PM Invited Talk, “Benchmarking Science: Datasets and Exascale Infrastructure”, Geoffrey Fox (Indiana University)

    02:45 PM - 03:00 PM “Deep Learning Workload Performance Auto-Optimizer”
    Connie Yingyu Miao (Intel Corporation), Andrew Yang (Intel Corporation), Michael Anderson (Intel Corporation)

    03:00 PM - 03:15 PM “Challenges with Evaluating ML Solutions in Data Centers”
    Shobhit Kanaujia (Facebook), Wenyin Fu (Facebook), Abhishek Dhanotia (Facebook)

    03:15 PM - 03:30 PM “Benchmarking TinyML Systems: Challenges and Directions”
    Colby Banbury (Harvard University), Vijay Janapa Reddi (Harvard University), Will Fu (Harvard University), Max Lam (Harvard University), Amin Fazel (Samsung Semiconductor Inc.), Jeremy Holleman (Syntiant, University of North Carolina Charlotte), Xinyuan Huang (Cisco Systems), Robert Hurtado (Hurtado Technology Inc.), David Kanter (Real World Insights), Anton Lokhmotov (dividiti), David Patterson (University of California Berkeley, Google), Danilo Pau (STMicroelectronics), Jeff Sieracki (Reality AI), Jae-Sun Seo (Arizona State University), Urmish Thakker (Arm), Marian Verhelst (KU Leuven, Imec), Poonam Yadav (University of York)

    03:30 PM - 04:00 PM Afternoon break

    04:00 PM - 05:20 PM Panel
    Peter Mattson (Google), Shuaiwen Song (University of Sydney), Gennady Pekhimenko (University of Toronto), Carole-Jean Wu (Facebook, Arizona State University), Grigori Fursin (CodeReef), Ramesh Radhakrishnan (Dell)

    05:20 PM - 05:30 PM Conclusion: Murali Emani (Argonne National Laboratory)

About

    With evolving system architectures, hardware and software stacks, diverse machine learning (ML) workloads, and data, it is important to understand how these components interact with one another. Well-defined benchmarking procedures help evaluate and reason about the performance gains of specific ML workload-to-system mappings. We welcome all novel submissions on benchmarking machine learning workloads from all disciplines, such as image and speech recognition, language processing, drug discovery, simulations, and scientific applications. Key problems that we seek to address are:

    • which representative ML benchmarks cater to workloads seen in industry, national labs, and interdisciplinary sciences;
    • how to characterize ML workloads based on their interaction with hardware;
    • which novel aspects of hardware, such as heterogeneity in compute, memory, and networking, will drive their adoption;
    • how to model performance and make projections to next-generation hardware.

    Along with selected publications, the workshop program will also feature experts in these research areas presenting their recent work and potential directions to pursue.
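
    As a concrete (purely illustrative) example of what a well-defined benchmarking procedure involves, the sketch below measures the latency of a workload using warm-up iterations, repeated timed runs, and percentile reporting. It is plain Python; `run_inference` is a hypothetical stand-in for any model under test, not an artifact of this workshop.

        import time
        import statistics

        def run_inference():
            # Hypothetical stand-in for the workload under test
            # (e.g., one forward pass of a model on a fixed input).
            sum(i * i for i in range(100_000))

        def benchmark(workload, warmup=10, iters=100):
            # Warm-up runs amortize one-time costs (JIT compilation,
            # cache population, lazy initialization) so they do not
            # pollute the steady-state measurements.
            for _ in range(warmup):
                workload()
            # Time each iteration individually so tail behavior is
            # visible, not just the mean.
            latencies = []
            for _ in range(iters):
                start = time.perf_counter()
                workload()
                latencies.append((time.perf_counter() - start) * 1e3)  # ms
            latencies.sort()
            return {
                "mean_ms": statistics.mean(latencies),
                "p50_ms": latencies[len(latencies) // 2],
                "p99_ms": latencies[int(len(latencies) * 0.99) - 1],
            }

        if __name__ == "__main__":
            print(benchmark(run_inference))

    Reporting percentiles rather than a single average is part of what makes such measurements comparable across the heterogeneous hardware targets discussed above.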

Call for Papers

    We solicit both full papers (8-10 pages) and short/position papers (4 pages). Submissions are not double-blind (author names must be included). The page limit includes figures, tables, and appendices, but excludes references. Please use the standard LaTeX or Word ACM templates. All submissions must be made via EasyChair (submission website: here). Each submission will be reviewed by at least three reviewers from the program committee. Papers will be reviewed for novelty, quality, technical strength, and relevance to the workshop. All accepted papers will be made available online, and selected papers will be invited to submit extended versions to a journal after the workshop.

Important dates

    Paper submission deadline: January 15, 2020
    Author Notification: January 27, 2020
    Camera-ready papers due: February 28, 2020
    Workshop Date: March 4, 2020
    All deadlines are at midnight anywhere on Earth (AoE) and are firm.

Organization

Organizing Committee

    • Murali Emani, Argonne National Laboratory/ALCF (memani@anl.gov)

    • Tom St. John, Tesla Inc. (tomstjohn617@gmail.com)

Program Committee

    • Gregory Diamos, Landing AI

    • Cody Coleman, Stanford University

    • Farzad Khorasani, Tesla

    • Trevor Gale, Stanford University

    • Ilya Sharapov, Cerebras Systems

    • Lizy John, UT Austin

    • Vijay Janapa Reddi, Harvard University

    • Shuaiwen Leon Song, University of Sydney

    • Xiaoming Li, University of Delaware

    • Zheng Wang, University of Leeds

    • Prasanna Balaprakash, Argonne National Laboratory

    • Ramesh Radhakrishnan, Dell

    • Steve Farrell, Lawrence Berkeley National Laboratory/NERSC

    • Nikoli Dryden, ETH Zurich

    • Jesmin Jahan Tithi, Intel

    • Rong Ge, Clemson University

    • Lisa Wu Wills, Duke University