Workshop on Large-Scale Parallel Processing
to be held at the
IEEE International Parallel and Distributed Processing Symposium
May 19th - 23rd, 2014
The preliminary schedule is now available
Keynote Presentation at 8:45am
Exascale Network Grand Challenges
While much attention has been focused on the grand challenge of scaling compute capability to Exascale systems, there has been less hoopla around how to scale the interconnection networks for such beasts. While historical supercomputer bytes-per-flop (or per-op) ratios remain desirable, there are a number of formidable challenges to this nirvana. At the top of the list, the cost per byte and energy per byte for node-to-node communication is not on the same steep curve as compute engines. Additionally, network hardware and software will be stressed by the growing number of network endpoints: 10s or 100s of millions of compute engines are required to reach Exascale. Reliability is yet another challenge that is growing with both system scale and semiconductor process scaling. In this talk we will outline a set of network ideas and directions to overcome or cope with these realities. Along the way, we will take a closer look at several areas which will impact Exascale network scaling: optics technologies, topologies, accelerators, and scalable messaging architecture.
Invited Presentation at 1:45pm
Massively Multithreaded Algorithms for Graph Matching and Coloring
Pacific Northwest National Laboratory (PNNL)
A wide range of problems in diverse fields of science and engineering can be efficiently solved by formulating them as graph problems. However, graph algorithms are challenging to implement on traditional high performance computing architectures. In this presentation, we will discuss some of these challenges that limit performance and show how to address them using graph matching and coloring as case studies. We will consider traditional multicore and manycore architectures as well as non-traditional massively-multithreaded (Cray XMT) platforms, and highlight the architectural features that influence the design of parallel algorithms.
The Workshop on Large-Scale Parallel Processing is a forum that focuses on computer systems that utilize thousands of processors and beyond. Large-scale systems, sometimes referred to as extreme-scale or ultra-scale, raise many important research questions that require detailed examination to enable their effective design, deployment, and utilization. These include handling the substantial increase in cores per chip, the ensuing interconnection hierarchy, and communication and synchronization mechanisms. Increasingly, this is becoming a co-design problem involving performance, power, and reliability. The workshop aims to bring together researchers from different communities working on challenging problems in this area for a dynamic exchange of ideas. Work at early stages of development, as well as work that has been demonstrated in practice, is equally welcome.
Of particular interest are papers that identify and analyze novel ideas rather than providing incremental advances in the following areas:
Large-scale systems: exploiting parallelism at large-scale, the coordination of large numbers of processing elements, synchronization and communication at large-scale, programming models and productivity
Novel architectures and experimental systems: the design of novel systems, the use of emerging technologies such as Non-Volatile Memory, Silicon Photonics, and application-specific accelerators, and future trends.
Monitoring, Analysis, and Modeling: tools and techniques for gathering performance, power, thermal, reliability, and other data from existing large scale systems, analyzing such data offline or in real time for system tuning, and modeling of similar factors in projected system installations.
Multi-core: utilization of increased parallelism on a single chip, the possible integration of these into large-scale systems, and dealing with the resulting hierarchical connectivity.
Energy Management: Techniques, strategies, and experiences relating to the energy management and optimization of large-scale systems.
Applications: novel algorithmic and application methods, experiences in the design and use of applications that scale to large-scales, overcoming of limitations, performance analysis and insights gained.
Warehouse Computing: dealing with the issues in advanced datacenters that are increasingly moving from co-locating many servers to having a large number of servers working cohesively, and the impact of both software and hardware designs and optimizations in achieving the best cost-performance efficiency.
Results of both theoretical and practical significance will be considered, as well as work that has demonstrated impact at small-scale that will also affect large-scale systems. Work may involve algorithms, languages, various types of models, or hardware. A list of papers presented at previous LSPP workshops can be found here.
Selected work presented at the workshop will be published in a special issue of Parallel Processing Letters in late 2014. Special issues of Parallel Processing Letters from LSPP workshops previously appeared in 2013, 2011, 2010, 2009, and 2008.
Papers should not exceed ten single-spaced pages (including figures, tables, and references) using a 12-point font on 8½x11-inch pages. Submissions in PostScript or PDF should be made using EDAS. Informal enquiries can be made to Darren Kerbyson. Submissions will be judged on correctness, originality, technical strength, significance, presentation quality, and appropriateness. Submitted papers should not have appeared in, or be under consideration for, another venue.
|Submission opens:||October 21st 2013|
|Papers due:||January 21st 2014 **Extended**|
|Notification of acceptance:||February 20th 2014|
|Camera-Ready Papers due:||March 14th 2014|
|Darren J. Kerbyson||Pacific Northwest National Laboratory|
|Ram Rajamony||IBM Austin Research Lab|
|Charles Weems||University of Massachusetts|
|Johnnie Baker||Kent State University|
|Alex Jones||University of Pittsburgh|
|H.J. Siegel||Colorado State University|
|Lixin Zhang||Institute of Computing Technology, Chinese Academy of Sciences|
|Guangming Tan||Institute of Computing Technology, Chinese Academy of Sciences|
|Pavan Balaji||Argonne National Laboratory, USA|
|Kevin J. Barker||Pacific Northwest National Laboratory, USA|
|Laura Carrington||San Diego Supercomputer Center, USA|
|I-Hsin Chung||IBM T.J. Watson Research Lab, USA|
|Tim Germann||Los Alamos National Laboratory, USA|
|Georg Hager||University of Erlangen, Germany|
|Simon Hammond||Sandia National Laboratories, USA|
|Martin Herbordt||Boston University, USA|
|Daniel Katz||University of Chicago, USA|
|Celso Mendes||University of Illinois Urbana-Champaign, USA|
|Bernd Mohr||Forschungszentrum Juelich, Germany|
|Phil Roth||Oak Ridge National Laboratory, USA|
|Jose Sancho||Barcelona Supercomputing Center, Spain|
|Gerhard Wellein||University of Erlangen, Germany|
|Pat Worley||Oak Ridge National Laboratory, USA|
|Ulrike Yang||Lawrence Livermore National Laboratory, USA|
Workshop General Chair and point of contact: Darren J. Kerbyson