HPCaML 2019

The First International Workshop on
the Intersection of High Performance Computing and Machine Learning

February 16, 2019, 1:00-5:00pm @ Washington DC, USA.
Held in conjunction with the International Symposium on Code Generation and Optimization (CGO’19),
co-located with PPoPP, HPCA, and CC.


Call for Papers


Over the last decade, machine learning has shown great power in solving complex problems such as image classification, speech recognition, autonomous driving, machine translation, natural language processing, game playing, and healthcare analytics. More recently, it has also attracted attention in scientific computing areas, including quantum chemistry, quantum physics, and mechanics, where domain-aware machine learning algorithms are being developed. To meet these broad needs, machine learning algorithms demand massive computing power, fast response times, and low energy consumption. Innovations in both hardware design and software support are therefore imperative.

On the other side, scientific, data-intensive, and machine learning applications and algorithms require meticulous parameter tuning to achieve high performance. The tuning space is typically huge, spanning input features, algorithm variants, accuracy requirements, hardware platform characteristics, and more. Machine learning is a natural tool for automating this tuning process: it can traverse the tuning space with little human intervention, finding near-optimal configurations while preserving portability and productivity.

The International Workshop on the Intersection of High Performance Computing and Machine Learning (HPCaML) is a new workshop targeting research where the two fields interact: HPC-powered ML and ML-motivated HPC. Its major objective is to bring together researchers from both domains to exchange ideas and share knowledge of advanced technologies and new developments on topics including, but not limited to:

  • Performance optimization of machine learning algorithms
  • Programming models and tools for machine learning
  • Machine learning model compression algorithms
  • Hardware-aware machine learning model synthesis
  • Power-efficient algorithms for machine learning
  • Specialized hardware architecture for machine learning
  • Machine learning-based performance tuning
  • Machine learning-based compiler techniques
  • Machine learning-based power-efficient algorithms

Important Dates


Paper Submission: December 21, 2018 (extended from December 14, 2018)
Author Notification: January 21, 2019 (extended from January 14, 2019)
Workshop: February 16, 2019
All deadlines are Anywhere on Earth (AoE).




Submission Site: https://easychair.org/conferences/?conf=hpcaml19

As a new workshop, we plan to make HPCaML discussion-oriented. Papers describing either in-progress work or recently published work with innovative ideas are welcome. Submissions should be at most 2 pages, double-column, in 10-point font, excluding references and appendices. Please follow the ACM sigconf proceedings template (https://www.acm.org/publications/proceedings-template). Note that submissions will not appear in any proceedings, so the work can later be further developed and submitted to a conference or journal for publication.





Organizers


Jiajia Li, Pacific Northwest National Laboratory (Jiajia.Li@pnnl.gov)
Guoyang Chen, Alibaba Group US Inc. (gychen1991@gmail.com)
Shuaiwen Leon Song, Pacific Northwest National Laboratory (Shuaiwen.Song@pnnl.gov)
Guangming Tan, Institute of Computing Technology, Chinese Academy of Sciences (tgm@ncic.ac.cn)
Weifeng Zhang, Alibaba Group US Inc. (weifeng.z@alibaba-inc.com)


Program Committee


Prasanna Balaprakash, Argonne National Laboratory
Aparna Chandramowlishwaran, UC Irvine
Shuai Che, Alibaba Group US Inc.
Jee Choi, IBM TJ Watson
Ang Li, Pacific Northwest National Laboratory
Jiajia Li, Pacific Northwest National Laboratory
Yingmin Li, Alibaba Group
Weifeng Liu, Norwegian University of Science and Technology
Xiaoyong Liu, Alibaba Group
Xu Liu, College of William and Mary
P. (Saday) Sadayappan, Ohio State University
Albert Sidelnik, NVIDIA Research
Shaden Smith, Intel Corporation
Jimeng Sun, Georgia Institute of Technology
Daniel Wong, University of California, Riverside
Hongxu Yin, Princeton University
Peng Zhou, Alibaba Group