SuperComputing 2010
Moving Towards Terabit/Sec Transfers



Collaborative Data Transfers

 

 

1. Caltech

We can transfer storage (Hadoop) -> disk at ~2.4 Gbps. This is both the rate at which the disks at SC can write and roughly the rate at which we can read data out of Hadoop. As was the case last year, the rates taper off slightly over time, down to 1.5-2 Gbps. Running multiple transfers concurrently (4 hosts at Caltech -> 4 hosts at SC) also seems to reduce the per-transfer rates, though I can't quantify by how much until I run more tests. Click for complete report.
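
For reference, the Hadoop read rate quoted above can be checked with a quick probe that streams files out of HDFS and times the reads. The sketch below is illustrative only, assuming a standard "hadoop fs -cat" client on the path; the HDFS paths are placeholders, not the actual Caltech datasets.

#!/usr/bin/env python
# Rough throughput probe: stream files out of HDFS with "hadoop fs -cat"
# and report the aggregate read rate.  The paths below are placeholders.
import subprocess, time

FILES = ["/store/test/file1", "/store/test/file2"]   # hypothetical HDFS paths
CHUNK = 4 * 1024 * 1024                               # read in 4 MB chunks

total_bytes = 0
start = time.time()
for path in FILES:
    proc = subprocess.Popen(["hadoop", "fs", "-cat", path],
                            stdout=subprocess.PIPE)
    while True:
        chunk = proc.stdout.read(CHUNK)
        if not chunk:
            break
        total_bytes += len(chunk)
    proc.wait()

elapsed = time.time() - start
gbps = total_bytes * 8 / elapsed / 1e9
print("read %.1f GB in %.0f s -> %.2f Gbps" % (total_bytes / 1e9, elapsed, gbps))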
 

2. UMichigan and MSU

Using FDT we were able to read from 2 (of 6) shelves on MSUFS12 and write to 2 (of 6) shelves on MSUFS13 at 7 Gbps (see attached msufs12_13_fdt_2.png). While that transfer was still running, we started reading from 2 shelves on MSUFS13 and writing to 2 shelves on MSUFS12, also at 7 Gbps. Click for complete report.
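
A minimal sketch of driving one direction of the MSUFS12 <-> MSUFS13 test with the FDT client is shown below; the reverse direction is started the same way on the other filer. The host names, directories, file list, and stream count are placeholders, and the exact FDT options (-c, -d, -P, -fl) should be verified against the FDT release in use.

#!/usr/bin/env python
# One direction of the MSUFS12 -> MSUFS13 FDT transfer, run on the source
# host.  All names and option values here are assumed, not taken from the
# actual test configuration.
import subprocess

cmd = ["java", "-jar", "fdt.jar",
       "-c", "msufs13",            # destination FDT server (placeholder)
       "-d", "/data/shelf1",       # target directory on the destination
       "-P", "4",                  # parallel TCP streams (assumed value)
       "-fl", "shelf_files.txt"]   # list of local files to send
subprocess.check_call(cmd)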


 

3. UCSD

In the last two hours of SC10 we finally accomplished our mission by adding 12 more hosts at UCSD for data transfer from UCSD to SC10, which provided 7-8 Gb/s of throughput to 3 hosts at SC10 via ESnet. Click for complete report.
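
Fanning the transfer out over the 12 source hosts can be orchestrated along the lines of the sketch below, which pairs senders with the 3 SC10 receivers round-robin and starts one transfer per sender over ssh. The host names and the "start_transfer.sh" script are hypothetical; the report does not say which transfer client each host ran, so that script stands in for whatever tool was actually used.

#!/usr/bin/env python
# Assign 12 source hosts round-robin to 3 receiving hosts and launch one
# transfer per source over ssh.  Names and the per-host script are
# placeholders, not the actual UCSD/SC10 configuration.
import subprocess

SOURCES = ["ucsd-%02d" % i for i in range(1, 13)]   # hypothetical hosts
SINKS   = ["sc10-a", "sc10-b", "sc10-c"]            # hypothetical hosts

procs = []
for i, src in enumerate(SOURCES):
    dst = SINKS[i % len(SINKS)]                     # round-robin pairing
    cmd = ["ssh", src, "./start_transfer.sh", dst]  # placeholder script
    procs.append(subprocess.Popen(cmd))

for p in procs:
    p.wait()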

 

4. BNL

 

5. KNU

 

6. Brazil

 

7. UFL

 

8. FIU