Abstract

Physicists at the LHC will break new ground at the high energy frontier when the accelerator and the CMS and ATLAS experiments resume operation in November 2009, searching for the Higgs particles thought to be responsible for mass in the universe, for supersymmetry, and for other fundamentally new phenomena bearing on the nature of matter and spacetime. In order to fully exploit the potential wealth of scientific discoveries, a global-scale grid system has been developed that aims to harness the combined computational power and storage capacity of 11 major “Tier1” centers and 120 “Tier2” centers sited at laboratories and universities throughout the world, in order to process, distribute, and analyze unprecedented data volumes, rising from tens of petabytes to 1000 petabytes over the coming years.

This demonstration will preview the efficient use of the long-range, high-capacity networks at the heart of this system, using state-of-the-art applications developed by Caltech and its partners for: high-speed data transport, where a single rack of low-cost servers can match, if not overmatch, all of the 10 Gigabit/sec wide area network links coming into SCinet at SC10; real-time distributed physics applications; grid and network monitoring systems; and Caltech’s recently released EVO system for global-scale collaboration.
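
As a rough illustration of the scale behind the data-transport claim, the following back-of-the-envelope sketch shows how a single rack of commodity servers could fill multiple 10 Gbps wide area links. The server count, per-node rate, and efficiency below are assumptions chosen for illustration, not figures from the demonstration itself.

    # Back-of-the-envelope estimate (hypothetical figures): how many 10 Gbps
    # WAN links could a single rack of commodity servers keep busy?

    servers_per_rack = 32      # assumed rack density
    gbps_per_server = 9.0      # assumed sustained disk-to-network rate per node
    wan_link_gbps = 10.0       # capacity of one wide area link

    aggregate_gbps = servers_per_rack * gbps_per_server
    links_saturated = aggregate_gbps / wan_link_gbps

    print(f"Aggregate rack throughput: {aggregate_gbps:.0f} Gbps")
    print(f"10 Gbps WAN links that could be filled: {links_saturated:.0f}")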

Continental and transatlantic networking for the LHC currently involves more than a dozen 10 Gbps links, a number that will approximately double by 2012. Storage-to-storage saturation of 10 Gbps links has already been demonstrated in a production-ready setting by Fermilab and some of the universities hosting Tier2 centers, notably Nebraska. In a recent demonstration with Internet2, these flows were switched using Fermilab and Caltech’s “LambdaStation” software.
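
To put storage-to-storage saturation of a single 10 Gbps link in perspective, the short sketch below estimates how long a Tier2-scale transfer would take over one such link. The dataset size and protocol-efficiency factor are assumptions for illustration only; they are not taken from the demonstrations cited above.

    # Illustrative transfer-time estimate (dataset size and efficiency are assumptions):
    # moving a Tier2-scale dataset over one fully saturated 10 Gbps link.

    dataset_tb = 100        # assumed dataset size in terabytes
    link_gbps = 10.0        # link capacity
    efficiency = 0.9        # assumed usable fraction after protocol overhead

    dataset_bits = dataset_tb * 1e12 * 8
    seconds = dataset_bits / (link_gbps * 1e9 * efficiency)
    print(f"~{seconds / 3600:.1f} hours to move {dataset_tb} TB")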