SC is the International Conference for
High Performance Computing, Networking, Storage and Analysis





Masterworks

Summary


Masterworks consists of invited presentations that highlight innovative ways of applying high-performance computing, networking, and storage technologies to the world's most challenging problems. At SC10 you can hear the masters describe how innovations in computing are fueling new approaches to addressing the toughest and most complex questions of our time.

Big Science, Big Data


From the earth to the sky to the stars and beyond, our observational and simulated datasets are growing at unprecedented rates. This tremendous growth has stimulated a new look at algorithms for big science and new tools for capturing, managing, and analyzing big data. In this two-session set, Tuesday 10:30 am - noon and 1:30-3:00 pm, four experts focus on data-intensive computing, with ties to climate simulation and heterogeneous computing.

Genomics-Driven Biology


The explosion of genomic data from today's rapid sequencing engines is fueling new ways of asking and answering biological questions, where large-scale computing and storage systems have a pivotal role. In this session, Tuesday 3:30-5:00 pm, visionary leaders look at how data-intensive sequencing meets supercomputing to forge real-time biology.

Heterogeneous Computing: Toward Exascale


Architectural innovation will be a major factor in the transition from petascale to exascale computing, and heterogeneous architectures are strong contenders. In this session, Wednesday 10:30 am - noon, two experts explore new programming paradigms and software models for heterogeneous systems that show promise for exploiting the strengths and accommodating the constraints of exascale-era systems.

Weather to Climate and Back Again I, II


Nowhere is the challenge of temporal and predictive modeling of greater socioeconomic relevance than in climate and weather prediction. From what we’ll wear today, to water supply and population density planning in the next century, the fidelity and credibility of Earth system simulation are crucial. In this two-session set, Wednesday 1:30-3:00 pm and 3:30-5:00 pm, leaders in the field will explore both modeling in this arena and how to enable nonexperts to utilize the results.

Climbing the Computational Wall


While the exponential factor inherent in Moore’s law approximates the observed ~6 orders of magnitude increase in processing performance over the past 40 years, this scaling is dwarfed when compared to the ~30 orders of magnitude scale between chemical simulation and astronomical observation. Yet the ability to realistically simulate atomic interactions and accurately observe extragalactic atomic transition at radio wavelengths is increasingly driven by our capacity to build, power, and program the next generation of computer systems. In this session, Thursday 10:30 am - noon, key visionaries in their respective fields will probe the need for extreme levels of parallelism and computational efficiency as we undertake this critical endeavor.

Beyond Peta – HPC Futures


The next generation of parallel computing systems will require a paradigm shift in every aspect of design, be it processor density, storage, communications, or a facility's power needs. In this session, Thursday 1:30-3:00 pm, leaders from computing and communications will explore the challenges and opportunities facing the HPC community in the next 5-10 years as we re-examine the technologies and assumptions needed to move beyond petascale into the era of exascale computing.

Session: Big Science, Big Data I

Tuesday, Nov. 16
10:30 a.m.-Noon
Room 395-396


“Computing the Universe”
Salman Habib, Los Alamos National Laboratory


Biography: Salman Habib is a technical staff member at Los Alamos National Laboratory, where he has been since 1991, following a postdoc at the University of British Columbia, a Ph.D. in physics from the University of Maryland, and an undergraduate degree from I.I.T. Delhi. Habib’s research interests span a wide variety of topics, mostly concerned with dynamics of complex systems, both classical and quantum. He has worked on extending the reach of parallel supercomputing in new application directions such as beam physics, nonequilibrium quantum field theory, open quantum systems and quantum control, and stochastic partial differential equations. Habib’s interests in computational cosmology focus on precision structure formation probes of the “Dark Universe” – the dark energy and dark matter that dominate the mass-energy budget of the universe, but whose ultimate nature remains to be understood. Recently, Habib led the Roadrunner Universe project at Los Alamos, which resulted in the development of a hybrid petascale cosmology code for tracking the formation of structure in the universe.

Abstract: The search for understanding the ultimate nature of the universe and our place within it is older than recorded history. But the emergence of a compelling, scientifically valid picture of the universe and its evolution dates to less than a century ago. Observations performed within the past two decades have set cosmology on a tantalizing course. They reveal a mysterious universe, remarkable -- paradoxically -- both for the extent to which it can be understood and the extent to which it cannot. The leap in our ability to carry out wide and deep observations of the sky rests on the same solid-state technology that drives the development of supercomputing. The interpretation of cosmological surveys and understanding of much of the underlying astrophysics rely heavily on high-fidelity simulations of the observable universe. I will discuss the current status of computational cosmology and the directions in which it is headed.

“Cyber-Infrastructure for the LSST Data Management System”
Jeffrey P. Kantor, Large Synoptic Survey Telescope

Biography: Mr. Kantor is Project Manager for Large Synoptic Survey Telescope (LSST) Data Management. In this capacity he is responsible for implementing the computing and communications systems that provide calibration, quality assessment, processing, archiving, and end-user and external-system access for the astronomical image and engineering data produced by the LSST. After four years in the U.S. Army as a Russian linguist/signals intelligence specialist, he began his IT career in 1980 as an entry-level programmer and has since held positions at all levels in IT organizations across many industry segments, including defense and aerospace, semiconductor manufacturing, geophysics, software engineering consulting, home and building control, consumer durables manufacturing, retail, and eCommerce. Mr. Kantor has created, tailored, applied, and audited software processes for a wide variety of organizations in industry, government, and academia and has been responsible for some of these organizations achieving ISO 9000 certification and SEI CMM Level 2 assessments. He has also consulted with and trained more than 30 organizations in object-oriented analysis and design, the Unified Modeling Language (UML), use-case-driven testing, and software project management. Mr. Kantor enjoys spending time with his family, soccer (playing, refereeing, and coaching), and mountain biking.

Abstract: Data flow rates from astronomical surveys are growing rapidly, leading to enormous collections of raw pixel data. Simultaneously, this data is becoming freely available to the public. Although large astronomy surveys are typically funded to archive data, they are generally unable to fund collocated computing facilities that can process this data for all users. Therefore, a critical need is support for “computing at a distance.” The Large Synoptic Survey Telescope (LSST) Data Management System (DMS) provides such a cyber infrastructure. The DMS processes incoming images, produces transient alerts, archives over 50 petabytes of exposures, creates and archives an annual data release including catalogs of trillions of detected sources and billions of astronomical objects, makes LSST data available without a proprietary period, and facilitates analysis and production of user-defined data products with supercomputing resources. This paper discusses DMS distributed processing and data, with an emphasis on cyber infrastructure requirements and architecture.

Session: Big Science, Big Data II

Tuesday, Nov. 16
1:30-3 p.m.
Room 395-396

“High-End Computing and Climate Modeling: Future Trends and Prospects”
Phillip Colella, Lawrence Berkeley National Laboratory

Biography: Phillip Colella received his A.B. (1974), M.A. (1976), and Ph.D. (1979) degrees from the University of California at Berkeley, all in applied mathematics. He has been a staff scientist at the Lawrence Berkeley National Laboratory and at the Lawrence Livermore National Laboratory and a professor in the Mechanical Engineering Department at the University of California, Berkeley.

He is currently a senior scientist and group leader for the Applied Numerical Algorithms Group in the Computational Research Division at the Lawrence Berkeley National Laboratory and a professor in residence in the Electrical Engineering and Computer Science Department at UC Berkeley. He has developed high-resolution and adaptive numerical algorithms for partial differential equations and numerical simulation capabilities for a variety of applications in science and engineering. He has also participated in the design of high-performance software infrastructure for scientific computing, including software libraries, frameworks, and programming languages. Honors and awards include the IEEE Computer Society’s Sidney Fernbach Award for high-performance computing in 1998, the SIAM/ACM prize (with John Bell) for computational science and engineering in 2003, election to the U.S. National Academy of Sciences in 2004, and election to the inaugural class of SIAM Fellows in 2009.

Abstract: Over the past few years, there has been considerable discussion of the shift in high-end computing driven by the change in how increased processor performance will be obtained: heterogeneous processors with more cores per chip, deeper and more complex memory and communications hierarchies, and fewer bytes per flop. At the same time, aggregate floating-point performance at the high end will continue to increase, to the point that we can expect exascale machines by the end of the decade. In this talk, we will discuss some of the consequences of these trends for scientific applications from a mathematical algorithm and software standpoint. We will use the specific example of climate modeling as a focus, based on discussions that have been going on in that community for the past two years.

“Prediction of Earthquake Ground Motions Using Large-Scale Numerical Simulations”
Tom Jordan, Southern California Earthquake Center

Biography: Thomas H. Jordan is director of the Southern California Earthquake Center, a distributed organization involving more than 60 universities and research institutions, and is the spokesman for the SCEC Community Modeling Environment Collaboration. Jordan's research is focused on system-level models of earthquake processes, earthquake forecasting and forecast evaluation, and full-3D waveform tomography. His scientific interests include continent formation and evolution, mantle dynamics, and statistical geology. He has authored approximately 200 scientific publications, including two undergraduate textbooks. He is a member of the California Earthquake Prediction Evaluation Council and serves on the Governing Board of the National Research Council and the Board of Directors of the Seismological Society of America. Jordan received his Ph.D. from Caltech in 1972. He taught at Princeton and the Scripps Institution of Oceanography before joining MIT in 1984, where he served as the head of MIT's Department of Earth, Atmospheric and Planetary Sciences from 1988 to 1998. He moved to USC in 2000 and became SCEC director in 2002. He has been awarded the Macelwane and Lehmann Medals of the American Geophysical Union and the Woollard Award of the Geological Society of America. He is a member of the National Academy of Sciences, the American Academy of Arts and Sciences, and the American Philosophical Society.

Abstract: Realistic earthquake simulations can now predict strong ground motions from the largest anticipated fault ruptures. Olsen et al. (this meeting) have simulated an M8 “wall-to-wall” earthquake on the southern San Andreas fault at frequencies up to 2 Hz, sustaining 220 teraflops for 24 hours on 223K cores of the NCCS Jaguar system. Large simulation ensembles (~10^6) have been combined with probabilistic rupture forecasts to create CyberShake, a physics-based hazard model for Southern California. In the highly populated sedimentary basins, CyberShake predicts long-period shaking intensities substantially higher than empirical models, primarily due to the strong coupling between rupture directivity and basin excitation. Simulations are improving operational earthquake forecasting, which provides short-term earthquake probabilities using seismic triggering models, and earthquake early warning, which attempts to predict imminent shaking during an event. These applications offer new and urgent computational challenges, including requirements for robust, on-demand supercomputing and rapid access to very large data sets.

Session: Genomics-Driven Biology


Tuesday, Nov. 16
3:30-5 p.m.
Room 395-396

“Computing and Biology: Toward Predictive Theory in the Life Sciences”
Rick Stevens, Argonne National Laboratory/University of Chicago

Biography: Rick Stevens is associate laboratory director responsible for Computing, Environment, and Life Sciences research at Argonne National Laboratory and is professor of computer science at the University of Chicago. He also holds senior fellow appointments in the university's Computation Institute (CI) and the Institute for Genomics and Systems Biology, where he teaches and supervises graduate students in the areas of computational biology, collaboration and visualization technology, and computer architecture. He co-founded and co-directed the CI, which provides an intellectual home for large-scale interdisciplinary projects involving computation. At Argonne, in addition to directing the Mathematics and Computer Science Division for more than a decade, he developed Argonne's research program in computational biology and created the Argonne Leadership Computing Facility. Recently he has been co-leading the DOE laboratory planning effort for exascale computing research, which aims to develop computer systems one thousand times faster than current supercomputers and to apply them to fundamental problems in science, including genomic analysis, whole-cell modeling, climate models, and problems in fundamental physics and energy technology development. He has authored and co-authored more than 120 papers and is a fellow of the American Association for the Advancement of Science. His research groups have won many national awards over the years, including an R&D 100 award for the Access Grid. He sits on many government, university, and industry advisory boards.

Abstract: Advances in genome sequencing have made it possible to sequence over 1,000 species, and during the next 5-10 years it should become routine to sequence humans as part of medical diagnostics and to sequence thousands more organisms important to energy, industry, and science. Genome analysis methods powered by petascale systems will make it possible to quickly go from DNA sequence to functional knowledge and predictive models. Making this a real-time process will radically change the uses of genomic sequencing data. Uncovering individual gene history and protein families will reveal the factors that influence molecular evolution, refine our strategies for databases of protein structures, and lay the foundation for understanding the role of horizontal gene transfer in evolution. Mathematical techniques and large-scale computing are revealing how to reconstruct cell networks, map them from one organism to another, and ultimately develop predictive models to shed light on evolution, ecosystems, development, and disease.

“High-Performance Genomics: Integrating Supercomputing into the Molecular Biology Laboratory”
Christopher Mueller, Life Technologies Corporation

Biography: Christopher Mueller is a senior staff scientist at Life Technologies, where he manages the bioinformatics group in Austin that supports sequencing and product development for RNA applications. He is a leader in the design and development of HPC solutions and strategy in the organization. His professional experience includes software development and architecture roles on large-scale web applications at MapQuest.com and Critical Path, Inc. and scientific computing and visualization at Research System, Inc. and Array Biopharma, Inc. He has worked as an independent consultant on high-performance computing projects and helped research labs integrate effective software engineering practices into their workflows. Mueller received his B.S. in computer science from the University of Notre Dame in 1996 and his M.S. and Ph.D. in computer science with a minor in bioinformatics in 2007 from Indiana University. His research focused on computational biology, large-scale graph visualization, and programming paradigms for rapid development of high-performance software, the last of which resulted in the CorePy open source project.

Abstract: Recent advances in gene sequencing technologies have enabled the creation of instruments capable of quickly and affordably sequencing entire human genomes. Sequencing at this scale has changed the landscape of genomics by enabling individual scientists to integrate deep sequencing experiments into their research. To keep pace, labs have had to quickly upgrade their computational infrastructures, which often previously consisted of spreadsheets and lab notebooks, to include large storage systems, compute clusters, and informatics staff. At the same time, algorithm and tool developers have had to scale their solutions to process the new sequencing data. In this talk, I will discuss how high-performance computing methods are being applied to the design of systems and software used to support sequencing projects. With a focus on common genomic workflows, I will explore how sequencing data moves from the lab through analysis and what opportunities are available for both hardware and software HPC solutions.

Session: Heterogeneous Computing: Toward Exascale


Wednesday, Nov. 17
10:30 a.m.-Noon
Room 395-396

“Computer Software: The ‘Trojan Horse’ of HPC”

Steve Wallach, Convey Computer

Biography: Steve Wallach is a founder of Convey Computer Corporation and is an adviser to venture capital firms CenterPoint Ventures, Sevin-Rosen, and InterWest Partners. Previously, he served as vice president of technology for Chiaro Networks Ltd. and as co-founder, chief technology officer, and senior vice president of development of Convex Computer Corporation. After Hewlett-Packard Co. bought Convex, Wallach became chief technology officer of HP’s Enterprise Systems Group. Wallach served as a consultant to the U.S. Department of Energy’s Advanced Simulation and Computing Program at Los Alamos National Laboratory from 1998 to 2007. He was also a visiting professor at Rice University in 1998 and 1999 and was manager of advanced development for Data General Corporation. His efforts on the MV/8000 are chronicled in Tracy Kidder’s Pulitzer Prize-winning book, “The Soul of a New Machine.” Wallach, who has 33 patents, is a member of the National Academy of Engineering and an IEEE Fellow and was a founding member of the Presidential Information Technology Advisory Committee. He is the 2008 recipient of IEEE Computer Society’s Seymour Cray Award.

Abstract: High-performance computing has gone through numerous cycles in the never-ending quest for higher performance. Today, several of these cycles—in technical, social, and commercial realms—are converging to present a real challenge to reaching exascale-class computing. Numerous technical realities, including hitting the power/performance wall for commodity processors, limits on programming models and languages, and the often-announced end of the run for Moore's law, are conspiring to make the road to exascale computing a steep climb. Unlike reaching a petaflops, attaining exascale performance requires new programming paradigms, hardware architectures, and interconnects. This talk will discuss a little bit of déjà vu (what got us here), some of today's technologies that will get us to exascale computing (including heterogeneous computing models), and some specific recommendations and examples.

“Higher-Level Programming Models for Heterogeneous Parallel Computing”
Wen-mei W. Hwu, University of Illinois at Urbana-Champaign

Biography: Wen-mei W. Hwu is a professor and holds the Sanders-AMD Endowed Chair in the Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign. His research interests are in the area of architecture, implementation, and software for high-performance computer systems. He is the director of the IMPACT research group. For his contributions in research and teaching, he received the ACM SigArch Maurice Wilkes Award, the ACM Grace Murray Hopper Award, the Tau Beta Pi Daniel C. Drucker Eminent Faculty Award, and ISCA Most Influential Paper Award. He is a fellow of IEEE and ACM. He leads the GSRC Concurrent Systems Theme. He co-directs the new $18M UIUC Intel/Microsoft Universal Parallel Computing Research Center with Marc Snir and is one of the principal investigators of the $208M NSF Blue Waters Petascale computer project. Hwu received his Ph.D. degree in computer science from the University of California, Berkeley.

Abstract: Modern computers are heterogeneous parallel computing systems with many CPU and GPU cores. While application developers for these systems are reporting 10X-100X speedup over sequential code on traditional microprocessors, the current practice of many-core programming based on OpenCL, CUDA, and OpenMP puts a strain on software development, testing, and support teams. According to the semiconductor industry roadmap, these processors could scale up to over 1,000X speedup over single cores by 2016, which will motivate an increasing number of developers to parallelize their applications. Today, application programmers have to understand the desirable parallel programming idioms, manually work around potential hardware performance pitfalls, and restructure their application design in order to achieve their performance objectives on many-core processors. In this presentation, I will discuss why advanced compiler functionalities have not found traction with the developer communities, what the industry is doing today to try to address the challenges, and how the academic community can contribute to this exciting revolution.
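To make the parallel programming idioms the abstract mentions more concrete, here is a minimal CUDA sketch (not drawn from the talk; the kernel, data, and launch parameters are illustrative) of one widely used idiom, the grid-stride loop, which lets a single kernel cover any problem size while keeping memory accesses coalesced:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative grid-stride SAXPY kernel: each thread strides over the array,
// so the same launch configuration works for any problem size and any GPU.
// Consecutive threads touch consecutive elements, keeping accesses coalesced.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += blockDim.x * gridDim.x) {
        y[i] = a * x[i] + y[i];
    }
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    saxpy<<<256, 256>>>(n, 2.0f, dx, dy);   // modest grid; the stride loop covers all n elements
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]);           // expect 4.0
    cudaFree(dx); cudaFree(dy); free(hx); free(hy);
    return 0;
}
```

A launch such as saxpy<<<256, 256>>> deliberately uses far fewer threads than elements; the stride loop inside the kernel picks up the remainder, which is one small example of working around hardware pitfalls without retuning the launch for every device.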

Session: Weather to Climate and Back Again I


Wednesday, Nov. 17
1:30-3 p.m.
Room 395-396

“Climate Computing: Computational, Data, and Scientific Scalability”
V. Balaji, Princeton University

Biography: V. Balaji heads the Modeling Systems Group serving developers of Earth system models at GFDL and Princeton University. With a background in physics and climate science, he has become an expert in the area of parallel computing and scientific infrastructure, providing high-level programming interfaces for expressing parallelism in scientific algorithms. He has pioneered the use of frameworks such as the Flexible Modeling System (FMS), as well as community standards such as ESMF and PRISM, allowing the construction of climate models out of independently developed components sharing a technical architecture. He has also spearheaded the use of curators (e.g., FMS Runtime Environment) for the execution of complex workflows to manage the complete climate modeling process. The Earth System Curator (U.S.) and Metafor (EU) projects, in which he plays a key role, have developed the use of a common information model that allows the execution of complex scientific queries on model data archives. Balaji plays advisory roles on NSF, NOAA, and DOE planning and review panels, including the recent series of exascale workshops. He is committed to providing training in the use of climate models in developing nations and has led workshops for advanced students and researchers in South Africa and India.

Abstract: Climate modeling, in particular making projections of climate risks that have predictive skill on timescales of many years, presents several challenges that probe the limits of current computing. First is the challenge of computational scalability, where the community is adapting to an era in which computational power increases depend on concurrency of computing, not on raw clock speed. Second is the challenge of data scalability, as we increasingly depend on experiments that result in petabyte-scale distributed archives. Third is the challenge of scientific scalability: how to enable the legions of non-climate scientists – for example, those involved in international security, public health, and the environment – to benefit from climate data. This talk surveys some aspects of current computational climate research as it rises to meet these simultaneous challenges of computational, data, and scientific scalability.

“Community Climate Models: Is a New Paradigm of Model Development Possible?”
Richard B. Rood, University of Michigan

Biography: Richard Rood is a professor in the Department of Atmospheric, Oceanic and Space Sciences at the University of Michigan, where he teaches atmospheric science and climate dynamics. He initiated a cross-discipline graduate course, Climate Change: The Move to Action, which explores problem solving in climate change. His scientific background is in modeling ozone in the atmosphere and, more recently, climate modeling and data analysis. He has also served as chief of NASA's Center for Computational Science. He has participated in international assessments of ozone depletion and the evaluation of the atmospheric impacts of aircraft. He was the lead of the delegation from the University of Michigan to the December 2009 Conference of Parties in Copenhagen. Rood is a fellow of the American Meteorological Society and a winner of the World Meteorological Organization's Norbert Gerbier Award. He served on the National Research Council's Board on Competitiveness of U.S. Climate Modeling (2000) and was the lead author of High-End Climate Science: Development of Modeling and Related Computing Capabilities, written while he was detailed to the White House Office of Science and Technology Policy. Currently he serves on the Advisory Panel for the National Center for Atmospheric Research Community Climate System Model and for the Climate Research and Modeling Program at the National Oceanic and Atmospheric Administration. Rood also writes expert blogs on climate change science and problem solving for the Weather Underground and the American Meteorological Society.

Abstract: Based on projections from models that represent the Earth's climate, we anticipate warming of the surface temperature, sea level rise, and systemic changes in weather. The cause of the warming is greenhouse gases from the burning of fossil fuels. The implications of these projections motivate changes in how we obtain energy, which stands at the foundation of societal success. As we move to manage warming and adapt to climate change, climate models move out of the venue of climate scientists and become planning tools for resource managers and corporate strategists. The demands on the models and modelers far outstrip the resources of the climate community. Despite the presence of successful community models, there is a need for models that are configurable and usable by communities of nonexperts. This talk discusses the challenges and possibilities of developing climate modeling systems that are open to broad community innovation, development, and application.

Session: Weather to Climate and Back Again II


Wednesday, Nov. 17
3:30-5 p.m.
Room 395-396

“Dedicated High-End Computing to Revolutionize Climate Modeling: An International Collaboration”
James Kinter, Institute of Global Environment and Society

Biography: James L. Kinter III is director of the Center for Ocean-Land-Atmosphere Studies (COLA) of the Institute of Global Environment and Society, where he manages all aspects of basic and applied climate research conducted by the Center. Kinter’s research includes studies of climate variability and change and climate predictability on seasonal and longer time scales. Of particular interest in his research are prospects for predicting El Niño and the extratropical response to tropical sea surface temperature anomalies using high-resolution coupled general circulation models of the Earth's atmosphere, oceans and land surface. Kinter is also an associate professor in the Climate Dynamics Ph.D. Program of the College of Science at George Mason University, where he has responsibilities for curriculum development and teaching undergraduate and graduate courses on atmospheric dynamics and climate change, as well as advising Ph.D. students. After earning his doctorate in geophysical fluid dynamics at Princeton University in 1984, Kinter served as a National Research Council Associate at NASA Goddard Space Flight Center and as a faculty member of the University of Maryland prior to joining COLA. Kinter, a fellow of the American Meteorological Society, has served on many national review panels for both scientific research programs and supercomputing programs for computational climate modeling.

Abstract: A collaboration of six institutions on three continents is investigating the use of dedicated HPC resources for global climate modeling. Two types of experiments were run using the entire 18,048‐core Cray XT‐4 at NICS from October 2009 to March 2010: (1) an experimental version of the ECMWF Integrated Forecast System, run at several resolutions down to 10 km grid spacing to evaluate high‐impact and extreme events; and (2) the NICAM global atmospheric model from JAMSTEC, run at 7 km grid resolution to simulate the boreal summer climate, over many years. The numerical experiments sought to determine whether increasing weather and climate model resolution to accurately resolve mesoscale phenomena in the atmosphere can improve the model fidelity in simulating the mean climate and the distribution of variances and covariances.

“Using GPUs for Weather and Climate Models”
Mark Govett, National Oceanic and Atmospheric Administration

Biography: Mark Govett manages the Advanced Computing Section, a NOAA software group that supports weather model development and parallelization and explores advanced computing technologies including graphical processors, cloud computing, grid computing, portals, and software engineering to improve model performance, portability, and interoperability. Govett has worked at NOAA for 20 years in high-performance computing, code parallelization, and compiler development. During this time, he helped develop the Scalable Modeling System (SMS), a compiler and software library used to parallelize and run weather models on distributed-memory computers. He also developed a compiler to convert Fortran into CUDA, the language used by NVIDIA GPUs.

Abstract: With the power, cooling, space, and performance restrictions facing large CPU-based systems, graphics processing units (GPUs) appear poised to become the next generation of supercomputers. Two of the ten fastest supercomputers on the Top500 list are already GPU-based, and such systems have the potential to dominate the list in the future. While the hardware is highly scalable, achieving good parallel performance can be challenging: language translation, code conversion and adaptation, and performance optimization will be required. This presentation will survey existing efforts to use GPUs for weather and climate applications. Two general parallelization approaches will be discussed. The most common approach is to run select routines on the GPU, but it requires data transfers between the CPU and GPU. Another approach is to run everything on the GPU and avoid the data transfers, but this can require significant effort to parallelize and optimize the code.
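The contrast between the two approaches can be sketched in a few lines of CUDA (a hypothetical example, not NOAA code; the field update, names, and sizes are invented for illustration): the first function copies data to and from the device around every offloaded routine, while the second keeps the model state resident on the GPU and copies results back only when needed.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical "select routine": one smoothing step over a 1-D model field.
__global__ void update_field(int n, const float *in, float *out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i < n - 1)
        out[i] = 0.25f * in[i - 1] + 0.5f * in[i] + 0.25f * in[i + 1];
    else if (i == 0 || i == n - 1)
        out[i] = in[i];                                  // keep boundary values unchanged
}

// Approach 1: offload a single routine each step; host-device transfers
// surround every call and can dominate the runtime if the kernel is cheap.
void step_offload_routine(int n, float *h_field, float *d_in, float *d_out) {
    size_t bytes = n * sizeof(float);
    cudaMemcpy(d_in, h_field, bytes, cudaMemcpyHostToDevice);
    update_field<<<(n + 255) / 256, 256>>>(n, d_in, d_out);
    cudaMemcpy(h_field, d_out, bytes, cudaMemcpyDeviceToHost);
}

// Approach 2: keep the field resident on the GPU for the whole run, swapping
// device buffers between steps and copying back only when output is needed.
void run_resident(int n, int nsteps, float **d_in, float **d_out) {
    for (int t = 0; t < nsteps; ++t) {
        update_field<<<(n + 255) / 256, 256>>>(n, *d_in, *d_out);
        float *tmp = *d_in; *d_in = *d_out; *d_out = tmp;
    }
    cudaDeviceSynchronize();
}

int main() {
    const int n = 1 << 16;
    const size_t bytes = n * sizeof(float);
    float *h_field = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h_field[i] = (float)(i % 7);

    float *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);

    step_offload_routine(n, h_field, d_in, d_out);       // approach 1: one transfer-heavy step

    cudaMemcpy(d_in, h_field, bytes, cudaMemcpyHostToDevice);
    run_resident(n, 100, &d_in, &d_out);                 // approach 2: 100 steps, no per-step copies
    cudaMemcpy(h_field, d_in, bytes, cudaMemcpyDeviceToHost);

    printf("field[1] = %f\n", h_field[1]);
    cudaFree(d_in); cudaFree(d_out); free(h_field);
    return 0;
}
```

In a real model the per-step copies in the first pattern grow with the grid size, which is why the resident approach avoids them, at the cost of porting and optimizing far more of the code, as the abstract notes.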

Session: Climbing the Computational Wall


Thursday, Nov. 18
10:30 a.m.-Noon
Room 395-396

“The Square Kilometer Array: Taming Exascale Data Flows to Explore the Radio Universe”

Tim Cornwell, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia

Biography: Tim Cornwell has a Ph.D. (1980) from the University of Manchester in England, where he worked on image-processing algorithms for radio synthesis telescopes. His first significant contribution was the development of the self-calibration algorithm widely used in radio astronomy. In 1980, he moved to Socorro, New Mexico, to work on the newly completed Very Large Array telescope run by the National Radio Astronomy Observatory. Over his 25 years at the NRAO, he made many contributions to radio astronomical techniques, including the key algorithms needed for wide fields of view. He also contributed in the areas of telescope design (for the Atacama Large Millimeter Array), telescope commissioning (the Very Long Baseline Array), observatory management, and software development. In 2004, he joined the Square Kilometre Array International Engineering Working Group, primarily to contribute to computing and algorithms. In 2005, he moved to Australia to take the lead role in computing for the Australia SKA Pathfinder. Since then he has been heavily involved in all aspects of the development of ASKAP computing, most particularly in the provision of high-performance computing for the telescope.

Abstract: Radio astronomy has been enabled by and is dependent on three technologies: digital signal processing, computing, and networking. Radio astronomers working in an international consortium will build the world's largest, most sensitive, fastest radio telescope, the Square Kilometre Array. The SKA will be one of the foremost scientific instruments in the world, addressing some of the most important questions in astrophysics and cosmology. The SKA will naturally stress the state of the art in the three technology areas mentioned above. Among the most notable technical aspects of the SKA are a very large data rate -- exceeding 10 Pbit/s over long distances -- digital signal processing requiring about 1 exaflop/s to form images, and roughly 1 exabyte of science data per week. The use of supercomputers in the digital signal processing chain and the interaction between the computing architecture and the physics and algorithmics of the measurement process will be discussed.

“Applications of MADNESS”
Robert J. Harrison, Oak Ridge National Laboratory/University of Tennessee, Knoxville

Biography: Robert J. Harrison holds a joint appointment with Oak Ridge National Laboratory (ORNL) and the University of Tennessee, Knoxville. At the university, he is a professor in the chemistry department. At ORNL he is a corporate fellow and leader of the Computational Chemical Sciences Group in the Computer Science and Mathematics Division. He has many publications in peer-reviewed journals in the areas of theoretical and computational chemistry and high-performance computing. His undergraduate (1981) and postgraduate (1984) degrees were obtained at Cambridge University, England. Subsequently, he worked as a postdoctoral research fellow at the Quantum Theory Project, University of Florida, and the Daresbury Laboratory, England, before joining the staff of the theoretical chemistry group at Argonne National Laboratory in 1988. In 1992, he moved to the Environmental Molecular Sciences Laboratory of Pacific Northwest National Laboratory, conducting research in theoretical chemistry and leading the development of NWChem, a computational chemistry code for massively parallel computers. In August 2002, he started the joint faculty appointment with UT/ORNL. In addition to his DOE Scientific Discovery through Advanced Computing (SciDAC) research into efficient and accurate calculations on large systems, he has been pursuing applications in molecular electronics and chemistry at the nanoscale. In 1999, the NWChem team received an R&D Magazine R&D100 award, and, in 2002, he received the IEEE Computer Society Sidney Fernbach Award.

Abstract: MADNESS (multiresolution adaptive numerical environment for scientific simulation) is a general numerical framework for the solution of integral and differential equations in 1-6+ dimensions. Initially developed for applications in chemistry, it is now finding applications in nuclear physics, solid state physics, atomic and molecular physics, and other disciplines. I will discuss some of these applications, the underlying numerical methods, and the MADNESS parallel runtime.

Session: Beyond Peta: HPC Futures


Thursday, Nov. 18
1:30-3 p.m.
Room 395-396

“A System Vendor’s Perspective on the Coming Challenges in HPC”

Steve Scott, Cray Inc.

Biography: Steve Scott is senior vice president and chief technology officer at Cray Inc., where he has been since receiving his Ph.D. in computer architecture from the University of Wisconsin at Madison in 1992. Scott was the chief architect of multiple systems at Cray, architected the routers for the Cray XT line and follow-on systems, and is leading the Cray Cascade project funded by the DARPA High Productivity Computing Systems program. He holds 23 U.S. patents and has served on numerous program committees. He was the 2005 recipient of the ACM Maurice Wilkes Award and the IEEE Seymour Cray Computer Engineering Award.

Abstract: The early part of this decade brought a technology inflection point that has already rippled dramatically through the processor and system landscape. The coming decade promises further changes that will significantly alter the design and operation of HPC systems. Power efficiency, concurrency, and resiliency constraints will drive changes to the underlying processor and system architecture, and these will create major challenges for programming the systems. A careful co-design of the hardware, system software, and applications will be required to achieve our collective goal of sustained exascale computing with feasible power and acceptable productivity. This talk will provide an overview of the key challenges we are facing and Cray's perspective on how to best proceed over the coming five to ten years.

“The Road to Exaflops”
Andy Bechtolsheim, Arista Networks

Biography: As chief development officer, Andy Bechtolsheim is responsible for the overall product development and technical direction of Arista Networks. Previously he was a founder and chief system architect at Sun Microsystems, where most recently he was responsible for industry standard server architecture. He was also a founder and president of Granite Systems, a Gigabit Ethernet startup acquired by Cisco Systems in 1996. From 1996 until 2003 Bechtolsheim served as VP/GM of the Gigabit Systems Business Unit at Cisco, which developed the very successful Catalyst 4500 family of switches. He was also a founder and president of Kealia, a next-generation server company acquired by Sun in 2004. Bechtolsheim received an M.S. in computer engineering from Carnegie Mellon University in 1976 and was a Ph.D. student at Stanford University from 1977 until 1982.

Abstract: I will discuss an updated model for the road to exaflops, including the latest roadmaps for semiconductor technology, computational power efficiency, memory, storage, interconnects, and packaging. Good progress is being made in several areas, such that the exaflops goal may be achieved sooner than previously predicted.

Questions: masterworks@info.supercomputing.org

Submission Site: http://submissions.supercomputing.org

