SC is the International Conference for
 High Performance Computing, Networking, Storage and Analysis

SCHEDULE: NOV 13-19, 2010

Algorithm Based Recovery for HPL

SESSION: Student Poster Reception

EVENT TYPE: Poster, ACM Student Poster

TIME: 5:15PM - 7:00PM

AUTHOR(S): Teresa E. B. Davies

ROOM: Main Lobby

In parallel applications, the probability that a failure will occur before the computation finishes grows with the number of processors used. For some applications, fault tolerance is essential to prevent losing all of the work completed up to the point of the failure. One of the most effective techniques is diskless checkpointing, in which a periodic snapshot of the process states is saved in another location. Unfortunately, the overhead of diskless checkpointing is proportional to the amount of data changed between checkpoints, and for matrix operations this overhead is high. A lower-overhead means of making matrix operations fault tolerant is therefore desirable.

In this research, we modify a technique for checking the outcome of matrix operations in order to produce low-overhead fault tolerance for a variety of matrix operations. The technique, called algorithm-based fault tolerance, appends a checksum to the matrix that is maintained by the operation and can be used to verify the result at the end of the calculation. We use this idea to show that, for some algorithms that compute matrix operations, the checksum is also maintained at each step of the algorithm. Although this is not true of all algorithms, it is true for some that are widely used. Using the checksum to verify the result at the end of the calculation is not new, but our use of the checksum to recover from failures in the middle of the calculation has not been done before.

The right-looking LU decomposition used in HPL does maintain a checksum at each step, so we have added fault tolerance to HPL using this method. The project is in the implementation stage and is nearly ready for testing. Our approach is expected to significantly outperform checkpointing: previous work on this project, which added a checksum to matrix multiplication and Cholesky decomposition, showed that the overhead is significantly decreased. Additionally, the overhead as a fraction of the total time decreases as the problem size increases, in contrast to checkpointing, where that fraction remains constant.
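The core idea above can be sketched in a few lines. The following is a minimal illustration, not the HPL implementation: it runs right-looking Gaussian elimination (no pivoting) on a small matrix with an appended checksum column, checks that the checksum relation survives every elimination step, and then recovers a deliberately erased entry from the checksum. All function and variable names are invented for this sketch.

```python
# Hedged sketch of the checksum idea: the row operations of
# right-looking elimination are linear, so a column holding each
# row's sum stays consistent at every step and can rebuild a lost
# entry mid-factorization. Pure Python; not taken from HPL.

def append_checksum_column(A):
    """Append to each row the sum of its entries (checksum column)."""
    return [row + [sum(row)] for row in A]

def checksum_holds(A, tol=1e-9):
    """Last entry of each row should equal the sum of the others."""
    return all(abs(row[-1] - sum(row[:-1])) < tol for row in A)

A = [[4.0, 3.0, 2.0],
     [2.0, 4.0, 1.0],
     [1.0, 2.0, 3.0]]
Ac = append_checksum_column(A)
n = len(A)

# Right-looking elimination, applied to the checksum column too.
for k in range(n - 1):
    for i in range(k + 1, n):
        m = Ac[i][k] / Ac[k][k]
        for j in range(k, n + 1):      # n+1: include checksum column
            Ac[i][j] -= m * Ac[k][j]
    # The invariant holds in the middle of the factorization, which
    # is what makes recovery (rather than only final verification)
    # possible.
    assert checksum_holds(Ac)

# Simulated failure: lose one entry of a row, then rebuild it from
# the surviving entries and the row's checksum.
lost = Ac[2][1]
Ac[2][1] = float("nan")
Ac[2][1] = Ac[2][-1] - sum(Ac[2][j] for j in range(n) if j != 1)
assert abs(Ac[2][1] - lost) < 1e-9
print("checksum preserved at each step; lost entry recovered")
```

In the actual poster's setting the checksum rows/columns would live on separate processors, so a failed process's panel can be reconstructed from the survivors; this sketch only shows why the invariant makes that reconstruction possible.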

Chair/Author Details:

Teresa E. B. Davies - Colorado School of Mines




Sponsors: IEEE, ACM