Session descriptions and details - SWPC2013

Introduction to multithreading and OpenMP

This workshop gives an introduction to shared-memory parallel programming and optimization on modern multicore systems, focusing on OpenMP, the dominant shared-memory programming model in computational science. Parallelism, multicore architecture, and the most important shared-memory programming models are discussed, and these topics are then put into practice in the hands-on exercises.
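
To give a flavor of the programming model (an illustrative sketch, not taken from the course material), the following C fragment parallelizes a loop with a single OpenMP directive:

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        const long n = 100000000;
        double sum = 0.0;

        /* the loop iterations are divided among all threads; the
           reduction clause combines the per-thread partial sums */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < n; ++i)
            sum += 0.5 * i;

        printf("sum = %e, using up to %d threads\n", sum, omp_get_max_threads());
        return 0;
    }

Compiled with an OpenMP-capable compiler (e.g., gcc -fopenmp), the same source runs serially or with any number of threads, controlled by the OMP_NUM_THREADS environment variable.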

When

April 16-17, 2013

Where

About the lecturer

Dr. Reinhold Bader studied physics and mathematics at the Ludwig-Maximilians University in Munich, completing his studies with a PhD in theoretical solid-state physics in 1998. Since the beginning of 1999 he has worked at the Leibniz Supercomputing Centre (LRZ) as a member of the scientific staff, involved in HPC user support, procurement of new systems, benchmarking of prototypes in the context of the PRACE project, courses on parallel programming, and configuration management for the HPC systems deployed at LRZ. He currently leads the HPC services group at LRZ, which is responsible for the operation of all HPC-related systems and system software packages at LRZ.

Advanced multithreading/OpenMP

In this workshop, more advanced aspects of multithreading, shared-memory programming and OpenMP are considered. After a survey of modern multicore processor architectures and their capabilities and inherent bottlenecks, the dominant performance issues in shared-memory programming are described: synchronization overhead, ccNUMA locality and bandwidth saturation (in cache and memory). The influence of system topology and thread affinity on the performance of typical parallel programming constructs is shown. Multiple ways of probing system topology and establishing affinity, either by explicit coding or separate tools, are demonstrated, and the basic use of hardware counter measurements for performance analysis is discussed. Finally, a structured approach to performance engineering of serial and parallel code is introduced, which revolves around simple but effective performance modeling techniques. Hands-on exercises allow the students to apply the concepts right away.
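
To illustrate the ccNUMA locality issue concretely (a sketch that assumes threads are pinned to cores, e.g. with a tool such as likwid-pin from the LIKWID suite mentioned below; the function name is made up for illustration), memory pages are mapped into the NUMA domain of the thread that first writes them, so arrays should be initialized in parallel with the same loop schedule that later compute loops use:

    #include <stdlib.h>

    /* Allocate an array with NUMA-aware "first touch" initialization.
       Each memory page ends up in the NUMA domain of the thread that
       writes it first, so later parallel loops with the same static
       schedule access local memory (assuming pinned threads). */
    double *alloc_first_touch(long n) {
        double *a = malloc(n * sizeof(double));
        #pragma omp parallel for schedule(static)
        for (long i = 0; i < n; ++i)
            a[i] = 0.0;   /* page placement happens here, not in malloc() */
        return a;
    }

A serial initialization loop, by contrast, would place the entire array in a single NUMA domain, making that domain's memory bandwidth the bottleneck for all threads.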

When

April 23-24, 2013

Where

About the lecturers

Dr. Georg Hager holds a PhD in computational physics from the University of Greifswald. He has been working with high performance systems since 1995 and is now a senior research scientist in the HPC group at Erlangen Regional Computing Center (RRZE). Recent research includes architecture-specific optimization for current microprocessors, performance modeling on processor and system levels, and the efficient use of hybrid parallel systems. See his blog at http://blogs.fau.de/hager for current activities, publications, and talks.

Dr. Jan Treibig is a chemical engineer with a special focus on computational fluid dynamics and technical thermodynamics. He holds a PhD in computer science from the University of Erlangen-Nuremberg and worked for two years in the embedded automotive software industry as a software developer, test engineer, and quality manager. Since 2008 he has been a postdoctoral researcher in the HPC group at Erlangen Regional Computing Center (RRZE). His research activities revolve around low-level and architecture-specific optimization and performance modeling. He is also the author of the LIKWID tool suite, a set of command-line tools created to support developers of high-performance multithreaded codes.

Message Passing Interface (MPI)

The Message Passing Interface (MPI) is a standardized library specification for message passing between different processes. In layman's terms: MPI provides mechanisms for handling the data communication in a parallel program. It is particularly well suited to compute clusters, where the nodes are connected by an interconnection network (e.g. InfiniBand or Gigabit Ethernet).
In this workshop, the applicability of MPI will be compared to that of other parallel programming paradigms such as OpenMP, CUDA, and MapReduce. Next, the basic principles of MPI will be introduced gradually (point-to-point communication, collective communication, MPI datatypes, etc.). Hands-on exercises allow the participants to immediately put the newly acquired skills into practice. Finally, some more theoretical considerations regarding the scalability of algorithms are presented.
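
For a first impression of the programming model (an illustrative sketch, not taken from the course material), the following minimal C program uses a collective operation to sum the ranks of all processes on process 0:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's ID */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

        /* every process contributes its rank; MPI_Reduce sums the
           contributions and delivers the result to rank 0 */
        int local = rank, total = 0;
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of ranks 0..%d = %d\n", size - 1, total);

        MPI_Finalize();
        return 0;
    }

The program is compiled with an MPI compiler wrapper (e.g., mpicc) and launched with a starter such as mpirun -np 4 ./a.out; unlike OpenMP threads, the processes share no memory and communicate only through library calls.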

When

May 8, 2013

Where

About the lecturer

Dr. Jan Fostier received his MS and PhD degrees in physical engineering from Ghent University in 2005 and 2009, respectively. He is currently an assistant professor in the Department of Information Technology (INTEC) at the same university. His main research interests are (parallel) algorithms for the biological sciences, high performance computing, and computational electromagnetics.
