MPICH Training Course

Overview

MPICH is an open-source, portable, high-performance implementation of the Message Passing Interface (MPI) standard that runs on a wide range of computation and communication platforms.

This instructor-led, live training (online or onsite) is aimed at developers and programmers who wish to install, configure, and manage MPICH features.

By the end of this training, participants will be able to write, run, manage, and monitor MPI programs using MPICH.

Format of the Course

  • Interactive lecture and discussion.
  • Lots of exercises and practice.
  • Hands-on implementation in a live-lab environment.

Course Customization Options

  • To request a customized training for this course, please contact us to make arrangements.

Requirements

  • Experience in programming languages such as C, C++, and Fortran

Audience

  • Developers
  • Programmers

Course Outline

Introduction

Overview of Message Passing Interface (MPI) Features and Architecture

  • Parallel computing basics
  • The MPI process

Getting Started with MPICH

  • Installation and configuration options
  • Shared libraries
  • Installing process managers

Programming Basics with MPI

  • Writing, compiling, and linking programs
  • Compilation commands
  • Using Makefiles
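
For reference, a minimal MPI program in C might look like the sketch below (file and program names are illustrative). With MPICH it would typically be compiled with the mpicc wrapper and launched with mpiexec, for example: mpicc hello.c -o hello, then mpiexec -n 4 ./hello.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);                 /* start the MPI environment */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                         /* shut down MPI */
        return 0;
    }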

Running Programs with MPI

  • Standard mpiexec
  • Process management extensions
  • Remshell restrictions

Sending and Receiving Messages

  • Message-passing routines
  • Message buffers, datatypes, and tags
  • Using library calls
  • Broadcast and reduction
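
To illustrate how a message is described by a buffer, count, datatype, and tag, here is a small sketch of a rank-0-to-rank-1 exchange (the tag value 99 is arbitrary, and at least two processes are assumed):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, value = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            /* buffer, count, datatype, destination, tag, communicator */
            MPI_Send(&value, 1, MPI_INT, 1, 99, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 99, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("Rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }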

Coordinating Communications in MPI

  • Synchronization
  • Collective patterns, routines, and operations
  • Creating groups
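
As one illustration of group creation, the sketch below builds a communicator containing only the even-numbered ranks of MPI_COMM_WORLD; the choice of even ranks is just an example.

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* list the even ranks */
        int n_even = (size + 1) / 2;
        int *even_ranks = malloc(n_even * sizeof(int));
        for (int i = 0; i < n_even; i++)
            even_ranks[i] = 2 * i;

        MPI_Group world_group, even_group;
        MPI_Comm even_comm;
        MPI_Comm_group(MPI_COMM_WORLD, &world_group);
        MPI_Group_incl(world_group, n_even, even_ranks, &even_group);
        /* collective over MPI_COMM_WORLD; odd ranks get MPI_COMM_NULL */
        MPI_Comm_create(MPI_COMM_WORLD, even_group, &even_comm);

        if (even_comm != MPI_COMM_NULL)
            MPI_Comm_free(&even_comm);
        MPI_Group_free(&even_group);
        MPI_Group_free(&world_group);
        free(even_ranks);

        MPI_Finalize();
        return 0;
    }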

Working with Buffering Issues

  • Blocking and non-blocking communication
  • Fairness in message-passing
  • Communication modes
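
One way to see the difference between blocking and non-blocking communication is the sketch below, which posts MPI_Irecv and MPI_Isend before waiting on both; this avoids the deadlock that two blocking sends could cause for large messages (the sketch assumes exactly two processes):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, sendbuf, recvbuf;
        MPI_Request reqs[2];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        sendbuf = rank;
        int partner = 1 - rank;   /* assumes exactly two processes */

        /* post both operations, then wait for both to complete */
        MPI_Irecv(&recvbuf, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(&sendbuf, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &reqs[1]);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

        printf("Rank %d got %d\n", rank, recvbuf);

        MPI_Finalize();
        return 0;
    }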

Understanding Datatypes and Objects in MPI

  • Basic datatypes
  • Vectors and structures
  • Interleaving data
  • MPI objects and references
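
As a sketch of a derived datatype, the fragment below uses MPI_Type_vector to describe one column of a 4x4 row-major array; the array size is illustrative and at least two processes are assumed.

    #include <mpi.h>
    #include <stdio.h>

    #define N 4

    int main(int argc, char *argv[])
    {
        double a[N][N];
        MPI_Datatype column;
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* N blocks of 1 double, separated by a stride of N elements:
           this picks out one column of a row-major N x N array */
        MPI_Type_vector(N, 1, N, MPI_DOUBLE, &column);
        MPI_Type_commit(&column);

        if (rank == 0) {
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++)
                    a[i][j] = i * N + j;
            MPI_Send(&a[0][1], 1, column, 1, 0, MPI_COMM_WORLD);   /* send column 1 */
        } else if (rank == 1) {
            double col[N];
            MPI_Recv(col, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            for (int i = 0; i < N; i++)
                printf("col[%d] = %g\n", i, col[i]);
        }

        MPI_Type_free(&column);
        MPI_Finalize();
        return 0;
    }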

Writing Message-Passing Libraries

  • Attributes
  • Sequential sections
  • Managing and caching tags
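
A common pattern when layering a library on top of MPI, sketched below, is to duplicate the user's communicator so the library's internal messages cannot collide with the caller's; the function names lib_init and lib_finalize are hypothetical.

    #include <mpi.h>

    /* hypothetical library state */
    static MPI_Comm lib_comm = MPI_COMM_NULL;

    /* give the library a private communication context */
    void lib_init(MPI_Comm user_comm)
    {
        MPI_Comm_dup(user_comm, &lib_comm);
    }

    void lib_finalize(void)
    {
        if (lib_comm != MPI_COMM_NULL)
            MPI_Comm_free(&lib_comm);
    }

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);
        lib_init(MPI_COMM_WORLD);
        /* ... library calls use lib_comm, user code uses MPI_COMM_WORLD ... */
        lib_finalize();
        MPI_Finalize();
        return 0;
    }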

Evaluating the Performance of Parallel Programs

  • The MPI timer
  • Profiling interface
  • Logging
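
The MPI timer measures elapsed wall-clock time; a minimal sketch, with the barrier used only to line the processes up before timing:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);

        MPI_Barrier(MPI_COMM_WORLD);        /* synchronize before timing */
        double t0 = MPI_Wtime();

        /* ... the code being measured goes here ... */

        double elapsed = MPI_Wtime() - t0;
        printf("Elapsed: %f seconds (timer resolution %g)\n", elapsed, MPI_Wtick());

        MPI_Finalize();
        return 0;
    }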

Integrating Multiple Programs

  • Sending and exchanging data between programs
  • Using intercommunicators
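
As one illustration of intercommunicators, a parent program can start a separate executable with MPI_Comm_spawn and talk to it over the resulting intercommunicator; the executable name ./worker is a placeholder, and the spawned program would typically obtain its side of the intercommunicator with MPI_Comm_get_parent and post a matching receive.

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_Comm inter;
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* launch 2 copies of a separate program; "./worker" is a placeholder */
        MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 2, MPI_INFO_NULL, 0,
                       MPI_COMM_WORLD, &inter, MPI_ERRCODES_IGNORE);

        if (rank == 0) {
            int msg = 123;
            /* ranks in an intercommunicator refer to the remote group */
            MPI_Send(&msg, 1, MPI_INT, 0, 0, inter);
        }

        MPI_Comm_free(&inter);
        MPI_Finalize();
        return 0;
    }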

Troubleshooting

Summary and Conclusion
