HPCC
HPCC was developed to study future Petascale computing systems and is intended to provide a realistic measurement of modern computing workloads. HPCC is made up of seven common computational kernels: STREAM, HPL, DGEMM (matrix multiply), PTRANS (parallel matrix transpose), FFT, RandomAccess, and b_eff (bandwidth/latency tests). Together, the benchmarks span the space of high and low spatial and temporal locality. The tests are scalable and can be run on a wide range of platforms, from single processors to the largest parallel supercomputers.
The HPCC benchmarks test three particular regimes: local (single processor), embarrassingly parallel, and global, in which all processors compute and exchange data with each other. STREAM measures a processor's memory bandwidth; HPL is the LINPACK TPP (Toward Peak Performance) benchmark; RandomAccess measures the rate of random updates of memory; PTRANS measures the rate of transfer of very large arrays of data from memory; b_eff measures the latency and bandwidth of increasingly complex communication patterns.
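To make the kernel descriptions concrete, here is a minimal Python sketch (my own illustration, not the official HPCC source) of the core loops behind two of the kernels: the STREAM "triad" and a simplified RandomAccess-style update. Note that the real RandomAccess benchmark uses a specific 64-bit polynomial generator; a plain 64-bit LCG stands in for it here.

```python
def stream_triad(a, b, c, scalar):
    """STREAM 'triad' kernel: a[i] = b[i] + scalar * c[i].
    Times the sustained memory bandwidth of simple array operations."""
    for i in range(len(a)):
        a[i] = b[i] + scalar * c[i]
    return a

def random_access_updates(table_bits, n_updates, seed=1):
    """Simplified RandomAccess-style loop (the 'GUPS' metric): XOR
    pseudo-random 64-bit values into pseudo-random table locations.
    A plain 64-bit LCG replaces the benchmark's polynomial generator."""
    size = 1 << table_bits          # table size must be a power of two
    table = list(range(size))
    ran = seed
    for _ in range(n_updates):
        ran = (ran * 6364136223846793005 + 1442695040888963407) & (2**64 - 1)
        table[ran & (size - 1)] ^= ran   # update a random location
    return table
```

The triad loop is bandwidth-bound and has high spatial locality; the update loop has essentially no locality, which is why the two kernels sit at opposite corners of the locality space HPCC is designed to cover.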
All of the benchmarks are run in two modes: base and optimized. The base run allows no source modifications to any of the benchmarks but permits the use of generally available optimized libraries. The optimized run allows significant changes to the source code, including alternative programming languages and libraries specifically targeted at the platform being tested.
The team results of the HPCC portion of the Cluster Competition will be announced on Tuesday, when the TOP500 committee meets with the public to announce the new TOP500 list. Cluster Competition teams are encouraged to be present for this presentation.
A C compiler and an implementation of MPI are required to run the benchmark suite.
More information on HPCC can be found at:
Introduction to the HPC Challenge Benchmark Suite, by Dongarra and Luszczek http://icl.cs.utk.edu/projectsfiles/hpcc/pubs/sc06_hpcc.pdf
NAMD
NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD scales to hundreds of processors on high-end parallel platforms and to tens of processors on low-cost commodity clusters, and also runs on individual desktop and laptop computers. NAMD works with AMBER and CHARMM potential functions, parameters, and file formats. NAMD uses the classical molecular dynamics force field, equations of motion, and integration methods, along with efficient electrostatics evaluation algorithms and temperature and pressure controls. It also provides features for steering the simulation across barriers and for calculating both alchemical and conformational free-energy differences.
More information can be found at: http://www.ks.uiuc.edu/Research/namd/
Download NAMD at: http://www.ks.uiuc.edu/Development/Download/download.cgi?PackageName=NAMD
WRF
The Weather Research and Forecasting (WRF) Model is a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs. It features multiple dynamical cores, a 3-dimensional variational (3DVAR) data assimilation system, and a software architecture allowing for computational parallelism and system extensibility. WRF is suitable for a broad spectrum of applications across scales ranging from meters to thousands of kilometers.
More information can be found at: http://www.wrf-model.org/index.php
Download WRF at: http://www.mmm.ucar.edu/wrf/users/
FLASH
FLASH is an adaptive mesh hydrodynamics code for modeling astrophysical thermonuclear flashes. The FLASH code was developed to study the problems of nuclear flashes on the surfaces of neutron stars and white dwarfs, as well as in the interior of white dwarfs. The FLASH code solves the fully compressible, reactive hydrodynamic equations and allows for the use of adaptive mesh refinement. It also contains state-of-the-art modules for the equations of state and thermonuclear reaction networks.
More information can be found at: http://flash.uchicago.edu/website/home/
Download FLASH at: https://sites.google.com/site/sc10scc/
Distributed Password Auditing/Cracking
You and your cluster are part of a very large government organization. The organization has implemented a change to your internal password policies and advised users to change their passwords. This having been done, your soulless auditors and your very grumpy internal security people wish to verify the effectiveness of the change: by auditing the passwords already in use on the systems. To that end, you have been provided with an anonymized list of password hashes. You will receive only the password hashes, one per line, in large data set files. Your auditors and internal security people suspect that a large number of passwords in use on the systems may be "guessable", based upon words or permutations of words.
Your job is to use the computational resources available to you to recover as many passwords as possible from the data sets provided, and to report this information back to the very grumpy internal security people and the soulless auditors. You need to make them aware not only of which passwords you could recover, but also of the computational resources used to recover them, so that they may in turn have some degree of confidence in the effectiveness (or lack thereof) of their current policies.
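A dictionary attack against word-based passwords can be sketched in a few lines of Python. This is only an illustration under assumptions of my own: unsalted, hex-encoded MD5 hashes, a plain word list, and a handful of hypothetical permutation rules. The actual hash format, tools, and techniques are, as noted below, entirely up to you.

```python
import hashlib

def crack(hash_list, wordlist):
    """Dictionary attack: hash each candidate word plus a few simple
    permutations, and report any matches against the target hashes.
    Assumes unsalted, hex-encoded MD5 hashes, one per input line."""
    targets = set(h.strip().lower() for h in hash_list)
    recovered = {}  # hash -> recovered plaintext
    for word in wordlist:
        # Hypothetical permutation rules; a real audit uses far richer rule sets.
        for candidate in (word, word.capitalize(), word.upper(),
                          word + "1", word[::-1]):
            digest = hashlib.md5(candidate.encode()).hexdigest()
            if digest in targets:
                recovered[digest] = candidate
    return recovered
```

The report back to the auditors would then include both the recovered plaintexts and the compute spent obtaining them (words tried, rules applied, node-hours), since that cost is exactly what lets them judge how effective the policy is.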
More information can be found at: https://sites.google.com/site/sc10scc/
Download: Unlike many of the other challenges you face here, the software and the techniques you may use are entirely up to you.
More information on the applications and sample data sets are available at https://sites.google.com/site/sc10scc/.