SuperComputing 2009

Moving towards Terabit/sec Scientific Dataset Transfers: the LHC Challenge

This is the last SuperComputing conference before CERN's Large Hadron Collider starts producing collisions, whereupon the experiments there will acquire massive amounts of real data: of order 10 Petabytes per year in 2010 and far more in later years. In our Bandwidth Challenge entry we will demonstrate storage-to-storage physics dataset transfers of up to 100 Gbps sustained in one direction, and well above 100 Gbps in total bidirectionally.

Abstract

Physicists at the LHC will break new ground at the high energy frontier when the accelerator and the experiments CMS and ATLAS resume operation in November 2009, searching for the Higgs particles thought to be responsible for mass in the universe, for supersymmetry, and for other fundamentally new phenomena bearing on the nature of matter and spacetime. In order to fully exploit this potential wealth of scientific discoveries, a global-scale computing and data grid has been deployed to distribute, process, and analyze the experiments' data.

Moving Towards Terabit/Sec Transfers

Caltech Won the SuperComputing '09 Bandwidth Challenge! Press release: BWCPressRelease.pdf

The SuperComputing Conference 2009 was held in Portland, Oregon, USA. An international team of physicists, computer scientists, and network engineers led by the California Institute of Technology (Caltech), with partners from the University of Michigan, Fermilab, Brookhaven National Laboratory, CERN, the University of California at San Diego (UCSD), the University of Florida (UF) and Florida International University (FIU), and Brazil (Rio de Janeiro State University, UERJ, and the State Universities of São Paulo), won the Bandwidth Challenge.

FDT

One of the key advances in this demonstration was Fast Data Transport (FDT; http://monalisa.cern.ch/FDT), a Java application developed by the Caltech team in close collaboration with the Politehnica Bucharest team. FDT runs on all major platforms and uses the NIO libraries to achieve stable disk reads and writes coordinated with smooth data flow across long-range networks. The FDT application streams a large set of files across an open TCP socket, so that a large dataset composed of many files can be sent or received at full speed, without the network transfer restarting between files.
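The core streaming idea can be sketched in a few lines of Java NIO. The following is a minimal illustration, not FDT's actual code: the host name, port, and file names are hypothetical placeholders, and it shows only the single-socket, back-to-back file streaming pattern; FDT itself layers managed buffer pools, multiple parallel TCP streams, and other machinery on top of this.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.List;

/**
 * Minimal sketch (not the actual FDT implementation): stream many files
 * back-to-back over one open TCP socket using NIO's zero-copy
 * FileChannel.transferTo, so the transfer never restarts between files.
 */
public class StreamFilesOverSocket {
    public static void main(String[] args) throws IOException {
        // Hypothetical dataset files and receiver endpoint, for illustration only.
        List<Path> files = List.of(Paths.get("dataset-part-0001.root"),
                                   Paths.get("dataset-part-0002.root"));
        try (SocketChannel socket = SocketChannel.open(
                new InetSocketAddress("receiver.example.org", 54321))) {
            for (Path file : files) {
                try (FileChannel in = FileChannel.open(file, StandardOpenOption.READ)) {
                    long size = in.size();
                    long sent = 0;
                    // transferTo may move fewer bytes than requested; loop until
                    // the whole file has been pushed onto the socket.
                    while (sent < size) {
                        sent += in.transferTo(sent, size - sent, socket);
                    }
                }
            }
        }
    }
}
```

The design point this illustrates is why a single long-lived socket matters on long-range paths: TCP throughput ramps up slowly after each connection start, so keeping one stream open across thousands of files, rather than opening a connection per file, lets the transfer stay at full speed for the whole dataset.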