Session descriptions and details - SWPC2014

Introduction to multithreading and OpenMP

This workshop gives an introduction to shared-memory parallel programming and optimization on modern multicore systems, focusing on OpenMP, the dominant shared-memory programming model in computational science. Parallelism, multicore architecture, and the most important shared-memory programming models are discussed, and these topics are then applied in hands-on exercises.
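For readers who have not seen the model before, the sketch below illustrates the style of programming the workshop covers: a C loop parallelized with an OpenMP work-sharing directive and a reduction. It is an illustrative example only, not part of the course material.

    /* Compile e.g. with: gcc -O2 -fopenmp example.c */
    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    int main(void) {
        static double a[N];      /* shared array, filled in parallel */
        double sum = 0.0;

        /* The work-sharing loop splits the iterations over the threads;
           the reduction clause combines the per-thread partial sums. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++) {
            a[i] = 0.5 * i;
            sum += a[i];
        }

        printf("sum = %f (using up to %d threads)\n", sum, omp_get_max_threads());
        return 0;
    }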

When

April 14-15, 2014

Where

About the lecturer

Dr. Reinhold Bader studied physics and mathematics at the Ludwig-Maximilians University in Munich, completing his studies with a PhD in theoretical solid-state physics in 1998. Since the beginning of 1999, he has worked at the Leibniz Supercomputing Centre (LRZ) as a member of the scientific staff, being involved in HPC user support, procurement of new systems, benchmarking of prototypes in the context of the PRACE project, courses on parallel programming, and configuration management for the HPC systems deployed at LRZ. He is currently group leader of the HPC services group at LRZ, which is responsible for the operation of all HPC-related systems and system software packages at LRZ.

To registration

Performance Tuning of OpenMP programs

This one-day workshop provides guidance on performance tuning of OpenMP programs for current processor architectures. It focuses on characteristic programming patterns, but also discusses which architecture-specific features impact the achievable performance. Hands-on sessions allow the participants to tune their own codes or provided examples.
Participation in the workshop requires prior knowledge of OpenMP semantics as well as a good command of one of the HPC languages Fortran, C, or C++.
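As an illustration only (not taken from the course material), the sketch below shows one typical tuning knob: the loop schedule. With uneven per-iteration cost, the default static schedule can leave some threads idle, and a dynamic or guided schedule may rebalance the work.

    /* Compile e.g. with: gcc -O2 -fopenmp example.c -lm */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const int n = 100000;
        double sum = 0.0;

        /* Iteration cost varies with i, so load balance depends on the
           schedule; dynamic chunks of 64 iterations are one option to try. */
        #pragma omp parallel for schedule(dynamic, 64) reduction(+:sum)
        for (int i = 0; i < n; i++) {
            double x = 0.0;
            for (int j = 0; j < i % 1000; j++)
                x += sin((double)j);   /* deliberately uneven work per iteration */
            sum += x;
        }

        printf("checksum = %f\n", sum);
        return 0;
    }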

When

April 16, 2014

Where

  • KU Leuven, Opleidingscentrum D, Willem De Croylaan 52A, BE-3001 Heverlee, Belgium

About the lecturer

Dr. Reinhold Bader studied physics and mathematics at the Ludwig-Maximilians University in Munich, completing his studies with a PhD in theoretical solid-state physics in 1998. Since the beginning of 1999, he has worked at the Leibniz Supercomputing Centre (LRZ) as a member of the scientific staff, being involved in HPC user support, procurement of new systems, benchmarking of prototypes in the context of the PRACE project, courses on parallel programming, and configuration management for the HPC systems deployed at LRZ. He is currently group leader of the HPC services group at LRZ, which is responsible for the operation of all HPC-related systems and system software packages at LRZ.

To registration

Message Passing Interface (MPI)

The Message Passing Interface (MPI) is a standardized library specification for message passing between different processes. In layman's terms: MPI provides mechanisms for handling the data communication in a parallel program. It is particularly suited for compute clusters, where the nodes are connected by an interconnection network (e.g. InfiniBand, Gigabit Ethernet).
In this workshop, the applicability of MPI will be compared to other parallel programming paradigms such as OpenMP, CUDA and MapReduce. Next, the basic principles of MPI will be gradually introduced (point-to-point communication, collective communication, MPI datatypes, etc.). Hands-on exercises allow the participants to immediately put the newly acquired skills into practice. Finally, some more theoretical considerations regarding the scalability of algorithms are presented.
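As an illustration only (not part of the workshop material), the sketch below shows a minimal MPI program in C combining the two communication styles mentioned above: a point-to-point exchange between two ranks and a collective reduction across all ranks.

    /* Compile with an MPI wrapper, e.g.: mpicc example.c; run with: mpirun -np 4 ./a.out */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Point-to-point communication: rank 0 sends an integer to rank 1. */
        if (rank == 0 && size > 1) {
            int msg = 42;
            MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int msg;
            MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", msg);
        }

        /* Collective communication: every rank contributes its rank number. */
        int global_sum;
        MPI_Reduce(&rank, &global_sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum of ranks = %d\n", global_sum);

        MPI_Finalize();
        return 0;
    }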

When

April 22, 2014

Where

About the lecturer

Dr. Jan Fostier received his MSc and PhD degrees in physical engineering from Ghent University in 2005 and 2009, respectively. Currently, he is an assistant professor in the Department of Information Technology (INTEC) at the same university. His main research interests are (parallel) algorithms for the biological sciences, high-performance computing, and computational electromagnetics.

To registration