SC is the International Conference for
High Performance Computing, Networking, Storage and Analysis



SC10 will feature fourteen workshops. These workshops will provide attendees with independently planned full-, half-, or multi-day sessions that complement the SC10 Technical Program and extend its impact by providing greater depth of focus. Workshops will be held on Sunday, November 14 and Monday, November 15. This year’s workshops are:

* 3rd Workshop on High Performance Computational Finance: Special Focus on Implementation Support Infrastructure
* 4th International Workshop on High-Performance Reconfigurable Computing Technology & Applications (HPRCTA'10)
* 5th Workshop on Workflows in Support of Large-Scale Science (WORKS10)
* Gateway Computing Environments (GCE10)
* Petascale Data Analytics on Clouds: Trends, Challenges and Opportunities
* Running a Lean and Productive HPC Center
* Verification, Validation and Uncertainty Analysis in High-Performance Computing
* 1st International Workshop on Performance Modeling, Benchmarking and Simulation of HPC Systems
* 2010 Workshop on Ultrascale Visualization
* 5th Petascale Data Storage Workshop (PDSW)
* ATIP 4th Workshop on HPC in China: Specialized Hardware & Software Development
* 3rd Workshop on Many-Task Computing on Grids and Supercomputers (MTAGS)
* Scalable Algorithms for Large-Scale Systems
* Early Adopters PhD Workshop 2010

To register, visit:


Sunday, November 14

3rd Workshop on High Performance Computational Finance:
Special Focus on Implementation Support Infrastructure
Room: 272

Organizers: Matthew Dixon (University of California, Davis), David Daly (IBM T.J. Watson Research Center), Jose Moreira (IBM T.J. Watson Research Center)

The purpose of this workshop is to bring together practitioners, researchers, vendors, and scholars from the complementary fields of computational finance and high performance computing, in order to promote an exchange of ideas, discuss future collaborations and develop new research directions. Financial companies increasingly rely on high performance computers to analyze high volumes of financial data, automatically execute trades, and manage risk. As financial market data continues to grow in volume and complexity, and algorithmic trading grows in popularity, there is increased demand for computational power. A critical component of migrating to energy efficient accelerator platforms is the collective appraisal of implementation support infrastructure needed to develop compute intensive financial applications. Implementation support infrastructure for accelerator platforms will therefore be the theme for this year's workshop.

4th International Workshop on High-Performance Reconfigurable Computing
Technology & Applications (HPRCTA'10)
Room: 271

Organizers: Volodymyr Kindratenko (National Center for Supercomputing Applications), Tarek El-Ghazawi (George Washington University), Eric Stahlberg (Wittenberg University), Prasanna Sundararajan (Xilinx Inc)

High-Performance Reconfigurable Computing (HPRC) is a novel computing paradigm that offers the potential to improve the performance and power efficiency of many computationally intensive scientific codes beyond what is possible on today's mainstream high-performance computers. HPRC systems rely on field-programmable gate arrays (FPGAs) for the direct hardware execution of computationally intensive kernels, which is a radical departure from the Von Neumann architecture. The academic community has been exploring this computing paradigm for over a decade, and the technology has proven itself to be practical for a number of HPC applications. The goal of this workshop is to provide a forum for computational scientists who use reconfigurable computers and the developers of this technology to discuss the latest progress and trends in the field. Topics of interest include architectures, languages, compilation techniques, tools, libraries, run-time environments, performance modeling, benchmarks, algorithms, methodology, applications, trends, and the latest developments in the field of HPRC.

5th Workshop on Workflows in Support of Large-Scale Science (WORKS10)
Room: 278-279

Organizers: Ewa Deelman (Information Sciences Institute), Ian Taylor (Cardiff University)

Scientific workflows are now being used in a number of scientific disciplines such as astronomy, bioinformatics, earth sciences, and many others. Workflows provide a systematic way of describing the analysis and rely on workflow management systems to execute the complex analyses on a variety of distributed resources. This workshop focuses both on application experiences and on the many facets of workflow management, which operate at a number of levels ranging from job execution to service management and the coordination of data, service, and job dependencies. The workshop covers a broad range of issues in the scientific workflow lifecycle that include (among others): designing workflow composition interfaces; workflow mapping techniques that may optimize the execution of the workflow; workflow enactment engines that deal with failures in the application and execution environment; and a number of computer science problems related to scientific workflows such as semantic technologies, compiler methods, and fault detection and tolerance.

Gateway Computing Environments (GCE10)
Room: 274

Organizers: Marlon E. Pierce (Indiana University), Mary P. Thomas (San Diego State University), Nancy Wilkins-Diehr (San Diego Supercomputer Center)

Scientific portals and gateways are important components of many large-scale grid and cloud computing projects. They are characterized by web-based user interfaces and services that securely access grid and cloud resources, data, applications, and collaboration services for communities of scientists. As a result, the scientific gateway provides a user- and (with social networking) a community-centric view of cyberinfrastructure. Web technologies evolve rapidly, and trends such as cloud computing are changing the way many scientific users expect to interact with resources. Academic clouds are being created using open source cloud technologies. Important web standards such as OpenSocial and OAuth are changing the way web portals are built, shared, and secured. It is the goal of this workshop series to provide a venue for researchers to present pioneering, peer-reviewed work on these and other topics to the international science gateway community.

Petascale Data Analytics on Clouds:
Trends, Challenges and Opportunities 9:00-5:30
Room: 273

Organizers: Ranga Raju Vatsavai (Oak Ridge National Laboratory), Vipin Kumar (University of Minnesota), Alok Choudhary (Northwestern University)

The recent decade has witnessed a data explosion, and petabyte-sized data archives are no longer uncommon. Many traditional application domains are now becoming data intensive. It is estimated that organizations with high-performance computing infrastructures and data centers are doubling the amount of data that they archive every year. Processing large datasets using supercomputers alone is not an economical solution. Cloud computing, a form of large-scale distributed computing, has attracted significant attention from both industry and academia in recent years and is fast becoming a cheaper alternative to costly centralized systems. Many recent studies have shown the utility of cloud computing in data mining and knowledge discovery. This workshop intends to bring together researchers, developers, and practitioners from academia, government, and industry to discuss new and emerging trends in cloud computing technologies, programming models, and software services and to outline the data mining and knowledge discovery (DM/KD) approaches that can efficiently exploit this modern computing infrastructure. Please visit: for more information.

Running a Lean and Productive HPC Center
Room: 282

Organizers: William F. Tschudi (Lawrence Berkeley National Laboratory), Michael K. Patterson (Intel Corporation)

Attendees will learn about ways to improve the procurement, operation, and energy efficiency of high performance computing equipment and the facilities that support it. This workshop will draw upon material developed by DOE's Industrial Technology Program and the American Society of Heating, Refrigerating, and Air-conditioning Engineers (ASHRAE) and other materials developed by HPC and energy efficiency experts. This workshop will provide an introduction to assessment tools, environmental recommendations, best practices, and ideas for future procurements and future efficiency improvements in HPC power and cooling systems. The workshop will look holistically at efficiency opportunities by considering computing equipment and the facility as an integrated computing platform. It will provide an introduction to DOE's free DC Pro Tool Suite which can be useful in performing energy assessments on existing HPC facilities, identifying efficiency opportunities, and tracking improvements.

Verification, Validation and Uncertainty Analysis in High-Performance Computing 1:30-5:00
Room: 282

Organizers: Laura L. Pullum (Oak Ridge National Laboratory), Robert M. Patton (Oak Ridge National Laboratory), Thomas E. Potok (Oak Ridge National Laboratory)

High-performance computing (HPC) applications have historically advanced the frontier of software complexity, and next-generation HPC environments will push that frontier substantially further. The nature of HPC introduces verification, validation and uncertainty analysis (VV&U) challenges that are perhaps unique to the field. Many HPC applications are simulation-oriented, which further exacerbates the difficulties by introducing additional validation requirements and possibilities for uncertainty in the results. Unfortunately, HPC application software VV&U does not have a strong tradition, since most of the work related to this area has been heavily focused on tolerance to faults due to hardware failure. This workshop will provide a forum for evaluating, sharing, and creating ideas for validation, verification, and uncertainty analysis of HPC applications. For more information go to

Monday, November 15

1st International Workshop on Performance Modeling, Benchmarking
and Simulation of HPC Systems
Room: 278-279

Organizers: Stephen Jarvis (Chair, University of Warwick), Todd Gamblin (Lawrence Livermore National Laboratory), Simon Hammond (University of Warwick), Curtis Janssen (Sandia National Laboratories), Arun Rodrigues (Sandia National Laboratories), Ash Vadgama (Atomic Weapons Establishment)

This workshop is concerned with the comparison of HPC systems through performance modeling, benchmarking or through the use of tools such as simulators. We are particularly interested in the ability to measure and make tradeoffs in software/hardware co-design to improve sustained application performance. We are also concerned with the assessment of future systems to ensure continued application scalability through peta- and exascale systems. The aim of this workshop is to bring together researchers, from industry and academia, concerned with the qualitative and quantitative evaluation and modeling of HPC systems. Authors are invited to submit novel research in all areas of performance modeling, benchmarking and simulation, and we welcome research that brings together current theory and practice. We recognize that the coverage of the term 'performance' has broadened to include power consumption and reliability, and that performance modeling is practiced through analytical methods and approaches based on software tools and simulators. For more information go to:

2010 Workshop on Ultrascale Visualization
Room: 273

Organizers: Kwan-Liu Ma (University of California, Davis), Michael Papka (Argonne National Laboratory)

The output from leading-edge scientific simulations and experiments is so voluminous and complex that advanced visualization techniques are necessary to interpret the calculated results. Even though visualization technology has progressed significantly in recent years, we are barely capable of exploiting petascale data to its full extent, and exascale datasets are on the horizon. This workshop aims to address this pressing issue by fostering communication between visualization researchers and the users of visualization. Attendees will be introduced to the latest and greatest research innovations in large data visualization, and will also learn how these innovations impact scientific supercomputing and the discovery process.

3rd Workshop on Many-Task Computing on Grids and Supercomputers (MTAGS)
Room: 271

Organizers: Ioan Raicu (Illinois Institute of Technology), Ian Foster (University of Chicago/Argonne National Laboratory), Yong Zhao (University of Electronic Science and Technology of China)

The 3rd Workshop on Many-Task Computing on Grids and Supercomputers (MTAGS10) will provide the scientific community a dedicated forum for presenting new research, development, and deployment efforts of large-scale many-task computing (MTC) applications on large-scale clusters, grids, supercomputers, and cloud computing infrastructure. MTC, the theme of the workshop, encompasses loosely coupled applications, which are generally composed of many tasks (both independent and dependent) working toward some larger application goal. This workshop will cover challenges that can hamper efficiency and utilization in running applications on large-scale systems, such as local resource manager scalability and granularity, efficient utilization of raw hardware, parallel file system contention and scalability, data management, I/O management, reliability at scale, and application scalability. This workshop encourages interaction and cross-pollination between those developing applications, algorithms, software, hardware and networking, emphasizing many-task computing for large-scale distributed systems.

5th Petascale Data Storage Workshop
Room: 280-281

Organizer: Garth A. Gibson (Carnegie Mellon University / Panasas Inc.)

Petascale computing infrastructures make petascale demands on information storage capacity, performance, concurrency, reliability, availability, and manageability. This one-day workshop focuses on the data storage problems and emerging solutions found in petascale scientific computing environments, with special attention to issues in which community collaboration can be crucial: problem identification, workload capture, solution interoperability, standards with community buy-in, and shared tools. This workshop seeks contributions on relevant topics, including but not limited to: performance and benchmarking results and tools, failure tolerance problems and solutions, APIs for high performance features, parallel file systems, high bandwidth storage architectures, wide area file systems, metadata intensive workloads, autonomics for HPC storage, virtualization for storage systems, data-intensive and cloud storage, archival storage advances, resource management innovations, etc. Submitted extended abstracts (up to 5 pages, due September 17, 2010) will be peer reviewed for presentation and publication on and in the ACM or IEEE digital library.

ATIP 4th Workshop on HPC in China: Specialized Hardware & Software Development 9:00-5:30
Room: 272

Organizer: David Kahaner (Asian Technology Information Program)

This workshop will include a significant set of presentations, posters, and panels from a delegation of Chinese academic, research laboratory, and industry experts and graduate students. Topics will include government support for the research, development, and utilization of special-purpose hardware, including GPUs and self-developed processors, with an emphasis on applications. Industry speakers will provide perspectives on the importance of hardware and software solutions for real applications. A special effort will be made to include HPC developments in Hong Kong. A panel discussion will identify topics suitable for collaborative research and mechanisms for developing those collaborations. The workshop will provide a unique opportunity for members of the US research community to interact and have direct discussions with top Chinese scientists. A specific goal of the workshop is to motivate the preparation of joint research proposals by researchers from the US and China.

Scalable Algorithms for Large-Scale Systems
Room: 274

Organizers: Vassil Alexandrov (University of Reading), Christian Engelmann (Oak Ridge National Laboratory), Al Geist (Oak Ridge National Laboratory)

Novel scalable scientific algorithms are needed to enable key science applications to exploit the computational power of large-scale systems. This is especially true for the current tier of leading petascale machines and the road to exascale computing. These extreme-scale systems require novel scientific algorithms that hide network and memory latency, achieve very high computation/communication overlap, minimize communication, and avoid synchronization points. Scientific algorithms for multi-petaflop and exaflop systems also need to be fault-tolerant and fault-resilient, since the probability of faults increases with scale. Resilience at the system software level and at the algorithmic level is needed as a crosscutting effort. Finally, with the advent of heterogeneous compute nodes employing standard processors as well as GPGPUs, scientific algorithms need to match these architectures to achieve maximum performance. Key science applications require novel mathematical models and system software that address the scalability and resilience challenges of current- and future-generation extreme-scale HPC systems.

Early Adopters Ph.D. Workshop 2010
Room: 252-253

Organizers: David Abramson (Monash University), Wojtek James Goscinski (Monash University), Daniel Katz (University of Chicago), Karen Haines (University of Western Australia), David Gavaghan (University of Oxford), Dieter Kranzlmueller (Ludwig-Maximilians-University Munich)

High performance computing (HPC) has become an essential tool for studying real-world problems of significant scale and detail across a wide range of fields. However, successfully applying HPC can be a challenging undertaking for newcomers. This workshop provides graduate students who are adopting HPC an opportunity to present early-stage research and gain valuable feedback. A panel of expert reviewers with significant experience will be invited to critique students' work and provide constructive feedback. The goal of this workshop is to help students identify shortcomings, introduce new approaches, discuss new technology, learn about relevant literature, and define their future research goals.



