NAG Parallel Processing Methods workshop 30 January 2017

On Monday 30 January IT Services hosted a free workshop (lunch provided) on “Introduction to Parallel Programming Methods”. The workshop was open to all members of the University and attracted over 30 delegates. It was presented by Sally Bridgewater and Ning Li from the Numerical Algorithms Group (NAG), who have a long-standing relationship with the University of Birmingham and have previously delivered other HPC-related training courses here.

This workshop gave an introduction to Parallel Programming Methods and was of general interest to current and prospective users of the central BlueBEAR HPC service as well as anyone with a general interest in applying parallel programming to their own research areas. No prior knowledge of parallel programming was required, although familiarity with a high-level programming language such as Fortran or C/C++ was beneficial.

After this workshop delegates had an understanding of parallel programming techniques that can be built upon through self-study using the University HPC service or by attending more advanced training.

The agenda was:

  • 9:30 - 10:00: Arrival and Registration
  • 10:00 - 11:00: Introduction (Presentation, PDF 9020 KB)
    This gave a high-level overview of processor, memory and interconnect configuration, including the importance of the memory hierarchy, leading into a discussion of core parallel programming concepts and techniques. The two main parallel programming paradigms were covered: shared-memory programming within a multi-core server and distributed-memory programming between servers. Amdahl's Law, which bounds the overall speedup of a program achievable by optimising only its parallel segment, was also introduced.
  • 11:00 - 11:15: Break
  • 11:15 - 12:45: OpenMP (Presentation, PDF 5634 KB)
    OpenMP (Open Multi-Processing) is the standard programming model for parallel applications that run on a single server, where each core has access to all of the memory. Examples were given of fundamental considerations for OpenMP programming, including a more in-depth discussion of the memory hierarchy introduced in the previous talk.
  • 12:45 - 14:00: Lunch
  • 14:00 - 15:30: MPI (Presentation, PDF 6211 KB)
    MPI (Message Passing Interface) is the standard programming model for parallel programming across servers, where each server sees only its own memory and communication with other servers must be programmed explicitly. It is more complicated to program than OpenMP but is essential for programming clusters such as the BlueBEAR HPC service.
  • 15:30 - 15:45: Break
  • 15:45 - 16:45: Profiling and tools (Presentation, PDF 3489 KB)
    It is essential to understand the performance of a parallel program: a single inefficiency in one part of the code can greatly increase its overall execution time compared with the same code without it. This talk gave brief examples of where common inefficiencies occur and demonstrated some of the tools that can be used to profile a parallel code and identify such inefficiencies.
  • 16:45 - 17:00: Wrap-up
    This final section reviewed the day, identified and discussed any areas of common concern and suggested further resources for individual training and other HPC courses that are available.
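
The distributed-memory model from the MPI session can be sketched in a few lines. This minimal example (ours, not from the course; it needs an MPI implementation, compiled with mpicc and launched with e.g. mpirun -np 2) shows that data held by one rank must be explicitly sent to, and received by, another:

```c
#include <mpi.h>
#include <stdio.h>

/* Each process ("rank") has its own memory; rank 0 sends a value
 * to rank 1, which must receive it explicitly. */
int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2) {
        if (rank == 0) {
            int payload = 42;            /* data only rank 0 holds */
            MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int payload;
            MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", payload);
        }
    }
    MPI_Finalize();
    return 0;
}
```

The explicit send/receive pairing is what makes MPI more complicated than OpenMP, but it is also what lets a program scale beyond a single server's memory.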

Last Updated: 14 February 2017