100GE Network Demonstration: Showfloor to Univ of Victoria
The Caltech HEP team started testing Mellanox 40GE ConnectX-2 NICs before SC10 last year; however, due to PCIe Gen2 limitations, the NICs could only reach about 24 Gbps in each direction. A detailed presentation on the 40GE network testing, with results, is available for download.
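The Gen2 ceiling follows directly from the bus arithmetic (assuming the NIC's x8 slot):

    PCIe Gen2 x8: 5 GT/s/lane x 8 lanes x 8b/10b encoding    = 32 Gbps raw
    PCIe Gen3 x8: 8 GT/s/lane x 8 lanes x 128b/130b encoding ~ 63 Gbps raw

After transaction-layer overhead, a Gen2 x8 card tops out in the mid-20 Gbps range, while Gen3 x8 leaves comfortable headroom for a 40GE port.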
This year, Sandy Bridge-based server motherboards with PCIe Gen3 became available from several vendors, including SuperMicro and TYAN. Mellanox also released the ConnectX-3, a dual-port 40GE NIC that is PCIe Gen3 compliant. All of the components received for the demonstrations were engineering samples, and many unknown problems came with them: we encountered system crashes, kernel panics, and similar instabilities.
The network switching portion remained very solid. Force10 Z9000 and Brocade MLXe-4 switch-routers were used to connect the servers at 40GE and to connect at 100GE across the WAN; see the interconnect diagram.
As shown in the 100GE WAN diagram, a single OTU4 link ran between a Ciena OME6500 in SCinet and an OME6500 at BCNET in Canada, over a distance of 212 km.
Memory-to-Memory Network Tests Using TCP and FDT
During these tests, a set of two 40GE Gen3 and two 40GE Gen2 highly tuned servers received network traffic at an aggregate rate of approximately 98 Gbps. Several other servers were used to send traffic to servers at the University of Victoria, where only 10GE-based Dell R710 servers were available.
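FDT drives these transfers over many parallel TCP streams with large, tuned socket buffers. As a rough sketch of the underlying memory-to-memory measurement (not FDT itself), the following Python fragment times a single TCP stream; the host, port, and buffer sizes are placeholder values:

    import socket
    import time

    HOST, PORT = "0.0.0.0", 5001      # placeholder endpoint
    BUF = 4 * 1024 * 1024             # 4 MB application buffer

    def receiver():
        # Count received bytes and report the achieved rate.
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        total, start = 0, time.time()
        while True:
            data = conn.recv(BUF)
            if not data:
                break
            total += len(data)
        print("%.2f Gbps" % (total * 8 / (time.time() - start) / 1e9))

    def sender(host, seconds=30):
        # Stream a memory buffer for a fixed time; no disk is involved.
        payload = bytearray(BUF)
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF)
        s.connect((host, PORT))
        deadline = time.time() + seconds
        while time.time() < deadline:
            s.sendall(payload)
        s.close()

In practice FDT runs many such streams in parallel, and the kernel's TCP window and interrupt-affinity tuning matter far more than the application loop.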
Disk-to-Disk Tests Using TCP and FDT
We were able to write at 60 Gbps on several SuperMicro and Dell servers on the show floor. While running an overnight disk write test, the SSD drives started wearing out under constant overwriting, as shown in the graph below.
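The write rate itself can be approximated with a simple sequential-write benchmark; this minimal Python sketch (file path and sizes are placeholders) shows the idea behind the measurement:

    import os
    import time

    PATH = "/ssd/testfile"        # placeholder path on the SSD volume
    BLOCK = 8 * 1024 * 1024       # 8 MB per write
    TOTAL = 50 * 1024**3          # 50 GB test file

    buf = b"\0" * BLOCK
    written, start = 0, time.time()
    with open(PATH, "wb") as f:
        while written < TOTAL:
            f.write(buf)
            written += BLOCK
        f.flush()
        os.fsync(f.fileno())      # make sure data actually hit the drives
    elapsed = time.time() - start
    print("wrote %.1f GB at %.2f Gbps"
          % (written / 1024.0**3, written * 8 / elapsed / 1e9))

Sustained sequential writes of this kind are exactly the constant-overwrite pattern that wore the SSDs down overnight.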
Single 40GE Server Receive Test
It was important to understand how much network traffic a single one of these 40GE servers could receive. A day before the end of the show we received new firmware for the Mellanox NIC and a beta Ethernet driver. After the upgrade, the NIC showed very few framing errors at a rate of about 36.8 Gbps.
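On Linux, the receive-rate and framing-error counters watched during a test like this are exposed under /sys/class/net; a small monitoring loop along these lines (the interface name is a placeholder) reports both once per second:

    import time

    IFACE = "eth2"    # placeholder name of the 40GE interface

    def read_stat(name):
        # Counters live in /sys/class/net/<iface>/statistics/.
        with open("/sys/class/net/%s/statistics/%s" % (IFACE, name)) as f:
            return int(f.read())

    prev_bytes = read_stat("rx_bytes")
    prev_errs = read_stat("rx_frame_errors")
    while True:
        time.sleep(1)
        cur_bytes = read_stat("rx_bytes")
        cur_errs = read_stat("rx_frame_errors")
        print("rx %.2f Gbps, +%d framing errors"
              % ((cur_bytes - prev_bytes) * 8 / 1e9, cur_errs - prev_errs))
        prev_bytes, prev_errs = cur_bytes, cur_errs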