SC is the International Conference for High Performance Computing, Networking, Storage and Analysis





2010 SCinet Research Sandbox projects



Using 100G Network Technology in Support of Petascale Science


NASA, in collaboration with a set of partners, will conduct a set of individual experiments and demonstrations collectively titled “Using 100G Network Technology in Support of Petascale Science.” The partners include the International Center for Advanced Internet Research (iCAIR), the National Center for Data Mining (NCDM), NOAA, Mid-Atlantic Crossroads (MAX), and National LambdaRail (NLR), as well as the vendors Ciena, Cisco, ColorChip, and Extreme Networks, which are generously allowing some of their leading-edge network technologies to be included. The experiments and demonstrations will feature different approaches to 100G networking across the SCinet Research Sandbox (SRS) infrastructure between the NASA exhibit booth and the NCDM/iCAIR exhibit booth, using sets of NASA-built, relatively inexpensive net-test workstations capable of demonstrating >100 Gbps unidirectional nuttcp-enabled memory-to-memory data flows, 80 Gbps aggregate bidirectional memory-to-memory data transfers, or nearly 20 Gbps unidirectional disk-to-disk data copies.
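The nuttcp runs behind those memory-to-memory figures follow a common pattern: a receiver in server mode and a transmitter pushing synthetic data over one or more TCP streams for a fixed duration. The Python sketch below wraps such a run; the host name, stream count, and window size are illustrative assumptions rather than the demonstration's actual configuration, and nuttcp must already be installed on both ends.

"""Minimal sketch of a nuttcp memory-to-memory throughput run.

Assumptions: the receiving workstation (hypothetical name below) is already
running `nuttcp -S` (server mode), and the flag values are illustrative,
not the NASA demonstration's actual settings."""

import subprocess

RECEIVER = "net-test-rx.example.org"   # hypothetical receiving workstation
DURATION_S = 30                        # length of the test run, in seconds
STREAMS = 8                            # parallel TCP streams to aggregate bandwidth
WINDOW_KB = 4096                       # per-stream window; nuttcp's -w unit is KB

def run_memory_to_memory_test() -> str:
    """Transmit from local memory to the receiver's memory and return
    nuttcp's throughput report."""
    cmd = [
        "nuttcp",
        "-t",                  # transmit from this host toward the receiver
        "-T", str(DURATION_S),
        "-N", str(STREAMS),
        "-w", str(WINDOW_KB),
        "-i", "1",             # report throughput once per second
        RECEIVER,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    print(run_memory_to_memory_test())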


Next Generation Wide Area File Transfer System


On the cusp of discoveries at the Large Hadron Collider (LHC), the high energy physics team at the California Institute of Technology (Caltech) booth will demonstrate global data distribution, analysis, and visualization of LHC proton-proton collisions in a new energy region. The demonstration will use state-of-the-art, high-throughput, open-source long-distance data transfer applications, dynamic network circuits, and the latest optical network and server technologies, together with the Enabling Virtual Organizations (EVO) system, which represents the state of the art in global-scale collaboration for major science projects. Caltech will work in cooperation with StarLight, SCinet, CERN, NLR, Internet2, ESnet, SURFnet, Ciena, Mellanox, Cisco, Force10, and many others.
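Long-distance, high-throughput transfer tools of the kind described here typically keep a high-latency path full by sending over several TCP streams at once. The Python sketch below illustrates that parallel-stream idea on the loopback interface; the host, port, stream count, and payload sizes are illustrative assumptions, and the code is not the demonstration's actual software.

"""Minimal sketch of the parallel-TCP-stream technique used by
high-throughput, long-distance transfer tools of the kind described
above. Host, port, stream count, and payload sizes are illustrative
assumptions; this is not the Caltech demonstration's actual software."""

import socket
import threading
import time

HOST = "127.0.0.1"        # stand-in for a receiver across the WAN
PORT = 9000
STREAMS = 4               # several streams keep a long, fat pipe full
CHUNK = 1024 * 1024       # 1 MiB application buffer per send call
CHUNKS_PER_STREAM = 64    # 64 MiB of synthetic payload per stream

ready = threading.Event() # set once the receiver is listening

def receiver() -> None:
    """Accept one connection per stream and drain each one concurrently."""
    with socket.create_server((HOST, PORT), backlog=STREAMS) as srv:
        ready.set()
        def drain(conn: socket.socket) -> None:
            with conn:
                while conn.recv(CHUNK):
                    pass
        workers = []
        for _ in range(STREAMS):
            conn, _addr = srv.accept()
            t = threading.Thread(target=drain, args=(conn,))
            t.start()
            workers.append(t)
        for t in workers:
            t.join()

def one_stream() -> None:
    """Push synthetic in-memory data over a single TCP connection."""
    payload = bytes(CHUNK)
    with socket.create_connection((HOST, PORT)) as s:
        for _ in range(CHUNKS_PER_STREAM):
            s.sendall(payload)

if __name__ == "__main__":
    rx = threading.Thread(target=receiver)
    rx.start()
    ready.wait()                           # avoid connecting before listen()
    start = time.perf_counter()
    senders = [threading.Thread(target=one_stream) for _ in range(STREAMS)]
    for t in senders:
        t.start()
    for t in senders:
        t.join()
    elapsed = time.perf_counter() - start
    total_bits = STREAMS * CHUNKS_PER_STREAM * CHUNK * 8
    print(f"aggregate: {total_bits / elapsed / 1e9:.2f} Gbit/s (loopback)")
    rx.join()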


Data Intensive Computing Environment/Obsidian Strategics SCinet Sandbox Project (DOS3)


This Data Intensive Computing Environment (DICE)/Obsidian Strategics SCinet Sandbox Project will focus on the performance of applications over a wide-area InfiniBand network running from three separate sites to the conference floor at SC2010 in New Orleans. Technologies involved will include Obsidian ES InfiniBand extenders, QDR/DDR/SDR InfiniBand, 10 and 100 Gigabit Ethernet LAN and WAN technologies, wide-area file system clients, pNFS technologies, InfiniBand-attached storage, solid-state disk technologies, various HPC I/O-intensive applications, and several data transfer applications. Organizations also involved with this project include NASA Goddard, The Ohio State University, Lawrence Livermore National Laboratory, and several technology vendors, including BlueArc, Data Direct Networks, and Brocade.
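One of the simpler measurements in a setup like this is the time a disk-to-disk copy takes when the destination sits on a wide-area file system mount. The Python sketch below times such a copy; the mount point, file size, and buffer size are illustrative assumptions, and any locally mounted directory can stand in for the WAN-attached file system.

"""Minimal sketch of a timed disk-to-disk copy onto a wide-area file
system mount, of the kind measured in this project. The mount point,
file size, and buffer size are illustrative assumptions; any locally
mounted directory can stand in for the WAN-attached file system."""

import os
import shutil
import time

SRC = "/tmp/dice_src.bin"           # local source file (hypothetical)
DST = "/mnt/wan_fs/dice_dst.bin"    # path on the wide-area mount (hypothetical)
SIZE = 256 * 1024 * 1024            # 256 MiB of test data

def make_test_file(path: str, size: int) -> None:
    """Write `size` zero bytes so there is something to copy."""
    with open(path, "wb") as f:
        f.write(bytes(size))

def timed_copy(src: str, dst: str) -> float:
    """Copy src to dst, fsync the result, and return the rate in MB/s."""
    start = time.perf_counter()
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        shutil.copyfileobj(fin, fout, length=8 * 1024 * 1024)  # 8 MiB buffers
        fout.flush()
        os.fsync(fout.fileno())     # count the time to reach stable storage
    elapsed = time.perf_counter() - start
    return (SIZE / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    make_test_file(SRC, SIZE)
    print(f"disk-to-disk copy: {timed_copy(SRC, DST):.1f} MB/s")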


Network Security Analysis


A problem in near-real-time network security analysis is that what you can watch, and hence analyze, is normally limited by the performance that a single CPU can provide. Technologies such as IDS clustering bring enough CPUs to bear for simple counter-based analysis such as scan detection and simple table synchronization, but they tend to have performance issues when large volumes of data need to be shared among the back-end computational nodes. A question remains: what if the data could be brought to a dedicated computational resource? In essence, you would bring the data to the CPU rather than bringing the CPU to the data: take the output from a traditional IDS or a flow analysis tool and move it to a high-performance computing cluster via a remote networked file system such as GPFS. A team of researchers from the National Energy Research Scientific Computing Center (NERSC) will use the current set of SCinet network security systems as the data source. To move the data to the computational resources, the team will use GPFS to share a dedicated file system physically located at NERSC.
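As a concrete illustration of the counter-based analysis mentioned above, the Python sketch below reads connection records from a shared file system path and flags sources that touch an unusually large number of distinct destination ports. The log path, record layout, and threshold are illustrative assumptions, not details of the NERSC team's setup; a GPFS mount would simply appear to this code as an ordinary directory.

"""Minimal sketch of the counter-based scan detection mentioned above,
reading connection records from a shared file system path. The log
path, record layout, and threshold are illustrative assumptions, not
details of the NERSC team's setup; a GPFS mount simply appears here as
an ordinary directory."""

from collections import defaultdict
import csv

LOG_PATH = "/gpfs/scinet_security/conn_log.csv"  # hypothetical shared-FS path
PORT_THRESHOLD = 100    # distinct destination ports before a source is flagged

def detect_scanners(log_path: str) -> dict[str, int]:
    """Count distinct destination ports per source IP and flag likely scanners.

    Expects CSV rows of the form: src_ip,dst_ip,dst_port (an assumed layout).
    """
    ports_seen: dict[str, set[int]] = defaultdict(set)
    with open(log_path, newline="") as f:
        for src_ip, _dst_ip, dst_port in csv.reader(f):
            ports_seen[src_ip].add(int(dst_port))
    return {
        src: len(ports)
        for src, ports in ports_seen.items()
        if len(ports) >= PORT_THRESHOLD
    }

if __name__ == "__main__":
    for src, count in sorted(detect_scanners(LOG_PATH).items()):
        print(f"possible scanner {src}: {count} distinct destination ports")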
