HIPS 2015

Joint International Workshop on
High-level Parallel Programming Models and Supportive Environments (HIPS) and

Large-Scale Parallel Processing (LSPP)

IPDPS 2015

Hyderabad International Convention Centre
Hyderabad, India

To be held in conjunction with the 29th IEEE International Parallel & Distributed Processing Symposium (IPDPS 2015).

This year, the long-running HIPS and LSPP workshops will be jointly held as a full-day meeting on 25 May 2015 at the IPDPS 2015 conference in Hyderabad. Each workshop committee individually evaluated the papers it received. You can find information on the organization and program committees on the individual workshop pages:

HIPS: International Workshop on High-level Parallel Programming Models and Supportive Environments

LSPP: Workshop on Large-Scale Parallel Processing

Advance Program

8:15 - 8:30 Workshop opening
Sriram Krishnamoorthy (PNNL) and Tobias Hilbrich (TU Dresden)
8:30 - 9:30 Keynote Talk
Prof. Torsten Hoefler, ETH Zürich
"How fast will your application go? Static and dynamic techniques for application performance modeling" [talk]
9:30 - 10:00 Coffee Break
10:00 - 12:00 Session I: Performance Analysis and Optimization
Chair: Guido Juckeland (TU Dresden)
10:00 - 10:30 "Folding Methods for Event Timelines in Performance Analysis" [talk]
Matthias Weber, Ronald Geisler, Holger Brunst, and Wolfgang E. Nagel
10:30 - 11:00 "Performance Analysis for Target Devices with the OpenMP Tools Interface" [talk]
Tim Cramer, Robert Dietrich, Christian Terboven, Matthias S. Müller, and Wolfgang E. Nagel
11:00 - 11:30 "High-Performance Coarray Fortran Support with MVAPICH2-X: Initial Experience and Evaluation" [talk]
Jian Lin, Khaled Hamidouche, Xiaoyi Lu, Mingzhe Li, and Dhabaleswar Panda
11:30 - 12:00 "On-demand Connection Management for OpenSHMEM and OpenSHMEM+MPI" [talk]
Sourav Chakraborty, Hari Subramoni, Jonathan Perkins, Ammar Ahmad Awan, and Dhabaleswar Panda
12:00 - 1:30 Lunch Break
1:30 - 2:30 Keynote Talk
Prof. Laxmikant Kale, UIUC
"Some Do's and Don'ts for Designing Parallel Languages" [talk]
2:30 - 3:00 Session II: Parallelization
2:30 - 3:00 "Speculative Runtime Parallelization of Loop Nests: Towards Greater Scope and Efficiency" [talk]
Aravind Sukumaran-Rajam, Luis Esteban Campostrini, Juan Manuel Martinez Caamaño, and Philippe Clauss
3:00 - 3:30 Coffee Break
3:30 - 4:30 Session III: Application-specific Studies
Chair: Felix Wolf (German Research School for Simulation Sciences)
3:30 - 4:00 "On the Impact of Execution Models: A Case Study in Computational Chemistry" [talk]
Daniel Chavarría-Miranda, Mahantesh Halappanavar, Sriram Krishnamoorthy, Joseph Manzano, Abhinav Vishnu, and Adolfy Hoisie
4:00 - 4:30 "Computing the Pseudo-Inverse of a Graph's Laplacian using GPUs" [talk]
Nishant Saurabh, Ana Lucia Varbanescu, and Gyan Ranjan

Keynote Talks

Prof. Torsten Hoefler, ETH Zürich

Title: "How Fast Will Your Application Go? Static and Dynamic Techniques for Application Performance Modeling"

Abstract: Many parallel applications suffer from latent performance limitations that may prevent them from utilizing resources efficiently while scaling to larger parallelism. Often, such scalability bugs manifest themselves only when an attempt to scale the code is actually being made, a point at which remediation can be difficult. However, creating analytical performance models that would allow such issues to be pinpointed earlier is so laborious that application developers attempt it at most for a few selected kernels, running the risk of missing harmful bottlenecks. We discuss how to generate performance models for program scalability to identify scaling bugs early and automatically. We then briefly discuss key limitations of our first approach and show various techniques to remedy them and enable efficient multi-parameter modeling that can be employed for hardware/software co-design. We conclude the overview by summarizing key research challenges that need to be addressed to enable fully automatic and accurate performance modeling.
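
As a rough illustration of what empirical scalability modeling can look like in practice, the sketch below fits a simple analytic model to small-scale runtime measurements and extrapolates it to larger process counts. The model form, the data values, and the use of SciPy are assumptions made for illustration only; they are not taken from the talk and do not represent the speaker's actual method or tools.

```python
# Illustrative sketch only: fit a simple analytic scaling model of the form
#   t(p) = c1 + c2 * p^a * log2(p)^b
# to measured runtimes and extrapolate, in the spirit of empirical
# performance modeling. Model form, data, and fitting routine are
# assumptions for illustration, not the speaker's method.
import numpy as np
from scipy.optimize import curve_fit

def scaling_model(p, c1, c2, a, b):
    """Predicted runtime as a function of process count p."""
    return c1 + c2 * p**a * np.log2(p)**b

# Hypothetical measurements: process counts and observed runtimes (seconds).
procs    = np.array([2, 4, 8, 16, 32, 64], dtype=float)
runtimes = np.array([10.1, 10.9, 12.2, 14.0, 16.5, 19.6])

# Fit the model coefficients to the small-scale measurements.
params, _ = curve_fit(scaling_model, procs, runtimes,
                      p0=[1.0, 1.0, 0.5, 1.0], maxfev=10000)

# Extrapolate to larger scales to flag potential scalability bugs early.
for p in (128, 256, 1024):
    print(f"predicted runtime at p={p}: {scaling_model(p, *params):.1f} s")
```

A fitted model like this lets growth terms (the exponents a and b above) be inspected before a full-scale run is attempted, which is the kind of early bug detection the abstract describes.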

Bio: Torsten is an Assistant Professor of Computer Science at ETH Zürich, Switzerland. Before joining ETH, he led the performance modeling and simulation efforts for parallel petascale applications in the NSF-funded Blue Waters project at NCSA/UIUC. He is also a key member of the Message Passing Interface (MPI) Forum, where he chairs the "Collective Operations and Topologies" working group. Torsten won best paper awards at the ACM/IEEE Supercomputing Conference SC10, SC13, SC14, EuroMPI 2013, IPDPS 2015, and other conferences. He has published numerous peer-reviewed conference and journal articles and authored chapters of the MPI-2.2 and MPI-3.0 standards. His research interests revolve around the central topic of "Performance-centric Software Development" and include scalable networks, parallel programming techniques, and performance modeling. Additional information about Torsten can be found on his homepage at htor.inf.ethz.ch.



Prof. Laxmikant Kale, UIUC

Title:"Some Do's and Don'ts for Designing Parallel Languages"

Abstract: Due to a combination of circumstances, the field and its application developers are open to new "languages". The circumstances include impending architectural changes, with a lot more dynamic variability, as well as application changes, with a lot more emphasis on multi-physics simulations, sub-scale simulations, and dynamic adaptivity. I will consider a set of prescriptive ideas for the new era of parallel language design, which arise from answering questions like: What should be the division of labor between the programmer and the programming system? Should the increasingly hierarchical nature of the machine be matched by hierarchical languages? What is the role of syntax, static analysis, and compilers in engendering the programming productivity of a language? I will also examine some slippery concepts like "tasks" and "moving computation to data". I will end with a utopian view of an ecosystem of future parallel programming languages, frameworks, and tools.

Bio: Professor Laxmikant Kale is the director of the Parallel Programming Laboratory and a Professor of Computer Science at the University of Illinois at Urbana-Champaign. Prof. Kale has been working on various aspects of parallel computing, with a focus on enhancing performance and productivity via adaptive runtime systems, and with the belief that only interdisciplinary research involving multiple CSE and other applications can bring well-honed abstractions back into Computer Science that will have a long-term impact on the state of the art. His collaborations include the widely used, Gordon Bell Award-winning (SC 2002) biomolecular simulation program NAMD, as well as collaborations on computational cosmology, quantum chemistry, rocket simulation, space-time meshes, and other unstructured mesh applications. He takes pride in his group's success in distributing and supporting software embodying his research ideas, including Charm++, Adaptive MPI, and the BigSim framework. He and his team won the HPC Challenge award at Supercomputing 2011 for their entry based on Charm++.
L. V. Kale received the B.Tech degree in Electronics Engineering from Benares Hindu University, Varanasi, India, in 1977, and an M.E. degree in Computer Science from the Indian Institute of Science, Bangalore, India, in 1979. He received a Ph.D. in Computer Science from the State University of New York, Stony Brook, in 1985.
He worked as a scientist at the Tata Institute of Fundamental Research from 1979 to 1981. He joined the faculty of the University of Illinois at Urbana-Champaign as an Assistant Professor in 1985, where he is currently a Professor. Prof. Kale is a Fellow of the IEEE and a winner of the 2012 IEEE Sidney Fernbach Award.