Presenter(s): William Gropp, Ewing (Rusty) Lusk, Robert Ross, Rajeev Thakur
ABSTRACT: MPI continues to be the dominant programming model on all large-scale parallel machines, such as IBM Blue Gene/P and Cray XT5, as well as on Linux and Windows clusters of all sizes. Another important trend is the widespread availability of multicore chips, which makes the individual nodes of parallel machines increasingly multicore themselves. This tutorial will cover several advanced features of MPI that can help users program such machines and architectures effectively. Topics to be covered include parallel I/O, multithreaded communication, one-sided communication, dynamic processes, and new features being added in MPI-3 for hybrid programming, RMA, fault tolerance, and tools. In all cases, we will introduce concepts through code examples based on scenarios found in real applications. Attendees will leave the tutorial with an understanding of how to use these advanced features of MPI and guidelines on how they might perform on different platforms and architectures.
William Gropp - University of Illinois at Urbana-Champaign