PathScale InfiniPath Interconnect
As the use of large clusters gains ground in academia and moves from the scientific world to the business world, many administrators are looking for ways to increase performance without significantly increasing the cost per node. Some focus on CPU speed or the amount of RAM per node, both relatively expensive components, to add horsepower. PathScale (recently acquired by QLogic) is taking a different approach, focusing instead on unleashing the computational power already contained in the cluster as a whole by allowing the “thoroughbred” processors built by Intel and AMD to move all the messages they are capable of generating.
By dramatically increasing the message traffic between nodes and reducing the latency of those messages, InfiniPath lets applications running on clusters run faster and scale higher than previously possible. And the increased performance is achieved with a combination of inexpensive x86 servers and standard InfiniBand adapters and switches.
The InfiniPath InfiniBand cluster interconnect is available in two flavors: a PCI Express version for ubiquitous deployment on any motherboard with any processor, and a version that connects directly to the HyperTransport bus for the absolute lowest latency. This article deals with the InfiniPath HyperTransport (or HTX) product line. Servers with motherboards that support InfiniPath HTX are available from more than 25 different system vendors, including Linux Networx, Angstrom, Microway, Verari and Western Scientific. In the near future, servers with HTX slots could be available from the larger tier-one computer system suppliers. Motherboards with HTX slots are currently shipping from Iwill (the DK8-HTX) and Supermicro (H8QC8-HTe), with additional offerings from Arima, ASUS, MSI and others coming soon. InfiniPath adapters, which can be used with just about any flavor of Linux, can be connected to any InfiniBand switch from any vendor. Also, for mixed deployments with InfiniBand adapters from other vendors, InfiniPath supports the OpenFabrics (formerly OpenIB) software stack (downloadable from the PathScale Web site).
What the InfiniPath HTX adapter does better than any other cluster interconnect is accept the millions of messages generated every second by fast, multicore processors and get them to the receiving processor. Part of the secret is removing all the delays associated with bridge chips and the PCI bus, because traffic is routed over the much faster HyperTransport bus. In real-world testing, this produces a two- to three-times improvement in latency, and in real-world clustered applications, an increase in messages per second of ten times or more.
Message transmission rate is the unsung hero of the interconnect world, and by completely re-architecting its adapter, PathScale has produced an interconnect that beats the next-best message rate by more than ten times. Where the rest of the industry builds off-load engines, miniature versions of host servers with an embedded processor and separate memory, InfiniPath is based on a very simple, elegant design that does not duplicate the work of the host processor. Embedded processors on interconnect adapter cards run at only about one-tenth the speed of host processors, so they can't keep up with the number of messages those host processors generate. By keeping things simple, InfiniPath avoids wasting CPU cycles on cache pinning and the other housekeeping chores required by off-load engines, and instead spends those cycles on real work for the end user. The beauty of this approach is that it results not only in lower CPU utilization per MB transferred, but also in a smaller memory footprint on the host system.
The reason a two- or three-times improvement in latency has such a large effect on message rate (messages per second) is that lower latency cuts the time nodes spend waiting for the next communication at both ends, so processors waste far fewer cycles sitting idle behind adapters jammed with message traffic.
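As a back-of-the-envelope illustration of that relationship (a simplified model with assumed round-number latencies, not measured adapter figures), treating each small exchange as gated by one round trip shows why message rate tracks latency so closely:

```c
/* Simplified model: one outstanding request at a time, so the message
 * rate per process pair is bounded by 1 / round-trip time.  The latency
 * values are illustrative round numbers, not measured adapter figures. */
#include <stdio.h>

int main(void)
{
    const double one_way_us[] = { 1.3, 4.0 };   /* assumed one-way latencies */
    int i;

    for (i = 0; i < 2; i++) {
        double rtt_s  = 2.0 * one_way_us[i] * 1e-6;  /* round trip, seconds  */
        double msgs_s = 1.0 / rtt_s;                 /* bounded message rate */
        printf("%.1f usec one-way latency -> ~%.0f round-trip msgs/sec per pair\n",
               one_way_us[i], msgs_s);
    }
    return 0;
}
```

Cutting the one-way latency from 4.0 to 1.3 microseconds in this toy model raises the sustainable per-pair message rate by the same factor, which is the effect described above.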
What does this mean for real-world applications? It will depend on the way the application uses messages, the sizes of those messages and how well optimized it is for parallel processing. In my testing, using a 17-node (16 compute nodes and one master node) cluster, I got a result of 5,149.154 MB/sec using the b_eff benchmark. This compares with results of 1,553–1,660 MB/sec for other InfiniBand clusters tested by the Daresbury Lab in 2005, and with a maximum of 2,413 MB/sec for any other cluster tested. The clusters tested all had 16 CPUs.
See Listing 1 for the results of the b_eff benchmark. The results of the Daresbury Lab study are available at www.cse.clrc.ac.uk/disco/Benchmarks/commodity.2005/mfg_commodity.2005.pdf, page 21.
Listing 1. b_eff output
```
The effective bandwidth is b_eff = 5149.154 MByte/s on 16 processes
( = 321.822 MByte/s * 16 processes)
Ping-pong latency:   1.352 microsec
Ping-pong bandwidth: 923.862 MByte/s at Lmax= 1.000 MByte
(MByte/s=1e6 Byte/s) (MByte=2**20 Byte)
system parameters  : 16 nodes, 128 MB/node
system name        : Linux
hostname           : cbc-01
OS release         : 2.6.12-1.1380_FC3smp
OS version         : #1 SMP Wed Oct 19 21:05:57 EDT 2005
machine            : x86_64
Date of measurement: Thu Jan 12 14:20:52 2006
```
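For readers who want to approximate the ping-pong figures above on their own cluster, here is a minimal MPI ping-pong sketch. It is an illustrative stand-in for the ping-pong portion of b_eff, not the benchmark itself; the 8-byte message size and 10,000 iterations are arbitrary choices.

```c
/* pingpong.c — a minimal MPI ping-pong sketch that reports one-way latency
 * for small messages, loosely mirroring the ping-pong figures b_eff prints.
 *
 * Typical build/run (wrapper and launcher names vary by MPI installation):
 *   mpicc -O2 pingpong.c -o pingpong
 *   mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 10000, len = 8;
    char buf[8] = { 0 };
    int rank, size, i;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0)
            fprintf(stderr, "run this sketch with at least 2 processes\n");
        MPI_Finalize();
        return 1;
    }

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {         /* rank 0 sends, then waits for the echo */
            MPI_Send(buf, len, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, len, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {  /* rank 1 echoes each message back */
            MPI_Recv(buf, len, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, len, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)  /* half the average round trip is the one-way latency */
        printf("one-way latency: %.3f microsec\n",
               (t1 - t0) / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}
```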
Most vendors do not publish their message rate, instead putting out their peak bandwidth and latency. But bandwidth varies with the size of the message, and peak bandwidth is achieved only at message sizes much larger than most applications generate. For most clustered applications, the actual throughput of the interconnect is a fraction of peak, because few clustered applications pass large messages back and forth between nodes. Rather, applications running on clusters pass a large number of very small (8–1,024 byte) messages back and forth as nodes begin and finish processing their small pieces of the overall task.
This means that for most applications, the number of simultaneous messages that can be passed between nodes, or message rate, will tend to limit the performance of the cluster more than the peak bandwidth of the interconnect.
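A quick model makes the point concrete. Treating effective throughput as the smaller of the link's peak bandwidth and the product of message rate and message size (with purely illustrative numbers for both, not vendor specifications) shows how small messages are rate-limited long before they are bandwidth-limited:

```c
/* A back-of-the-envelope model, not vendor data: effective throughput is
 * the smaller of the link's peak bandwidth and (message rate x message
 * size).  The rates and the peak figure below are illustrative assumptions
 * chosen only to show the shape of the curve. */
#include <stdio.h>

int main(void)
{
    const double peak = 950.0;                 /* assumed peak bandwidth, MB/s     */
    const double rates[] = { 5.0e5, 5.0e6 };   /* assumed msgs/sec: modest vs. 10x */
    const int sizes[] = { 8, 64, 1024, 65536, 1048576 };
    int r, s;

    for (r = 0; r < 2; r++) {
        printf("at %.0f msgs/sec:\n", rates[r]);
        for (s = 0; s < 5; s++) {
            double mb_s = rates[r] * sizes[s] / 1e6;   /* rate-limited MB/s   */
            if (mb_s > peak)
                mb_s = peak;                           /* bandwidth-limited   */
            printf("  %8d-byte messages: ~%6.1f MB/s effective\n", sizes[s], mb_s);
        }
    }
    return 0;
}
```

In this toy model, a ten-times-higher message rate translates directly into ten times the effective throughput at the small message sizes typical of clustered applications, while peak bandwidth only matters for the largest messages.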
As end users attempt to solve more granular problems with bigger clusters, the average message size goes down and the overall number of messages goes up. According to PathScale's testing with the WRF modeling application, the average number of messages increases from 46,303 with a 32-node application to 93,472 with a 512-node application, while the mean message size decreases from 67,219 bytes with 32 nodes to 12,037 bytes with 512 nodes. This means that the InfiniPath InfiniBand adapter will become more effective as the number of nodes increases. This is borne out in other tests with large-scale clusters running other applications.
For developers, there is little difference between developing a standard MPI application and one that supports InfiniPath. Required software is limited to some Linux drivers and the InfiniPath software stack. Table 1 shows the versions of Linux that have been tested with the InfiniPath 1.2 release. PathScale also offers the EKOPath Compiler Suite version 2.3, which includes high-performance C, C++ and Fortran 77/90/95 compilers as well as support for OpenMP 2.0 and PathScale-specific optimizations. But the compiler suite is not required to develop InfiniPath applications because the InfiniPath software environment supports gcc, Intel and PGI compilers as well. The base software provides an environment for high-performance MPI and IP applications.
Table 1. The InfiniPath 1.2 release has been tested on the following Linux distributions for AMD Opteron systems (x86_64).
Linux Release | Kernel Version(s) Tested
---|---
Red Hat Enterprise Linux 4 | 2.6.9
CentOS 4.0-4.2 (Rocks 4.0-4.2) | 2.6.9
Red Hat Fedora Core 3 | 2.6.11, 2.6.12
Red Hat Fedora Core 4 | 2.6.12, 2.6.13, 2.6.14
SUSE Professional 9.3 | 2.6.11
SUSE Professional 10.0 | 2.6.13
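To make the developer workflow described before Table 1 concrete, here is a minimal MPI program and the usual build-and-run steps. The mpicc wrapper and mpirun launcher named in the comments are generic MPI conventions rather than InfiniPath-specific commands, and exact names and flags vary by MPI installation and compiler choice.

```c
/* hello_mpi.c — a minimal MPI program; nothing InfiniPath-specific appears
 * in the source, because the interconnect is selected by the MPI stack.
 *
 * Typical build and launch (names vary by MPI installation):
 *   mpicc -O2 hello_mpi.c -o hello_mpi
 *   mpirun -np 16 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process' rank          */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes   */
    MPI_Get_processor_name(name, &len);     /* node the process landed on  */

    printf("rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}
```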
The optimized ipath_ether Ethernet driver provides high-performance networking support for existing TCP- and UDP-based applications (in addition to other protocols using Ethernet), with no modifications required to the application. The OpenFabrics (formerly OpenIB) driver provides complete InfiniBand and OpenIB compatibility. This software stack is freely available for download from the PathScale Web site and currently supports IP over IB, verbs, MVAPICH and SDP (Sockets Direct Protocol).
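To underline the "no modifications" point, here is an ordinary UDP sender written against the standard sockets API; nothing in it refers to InfiniPath or InfiniBand, and the peer address and port are arbitrary example values. Run over an interface provided by ipath_ether, the same unmodified code simply benefits from the faster fabric.

```c
/* A plain UDP sender: standard sockets code with nothing interconnect-
 * specific in it.  The destination address and port are example values. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst;
    const char msg[] = "hello over ipath_ether";

    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(9000);                       /* example port            */
    inet_pton(AF_INET, "192.168.1.2", &dst.sin_addr);   /* example peer on the IB fabric */

    if (sendto(fd, msg, sizeof(msg), 0,
               (struct sockaddr *)&dst, sizeof(dst)) < 0)
        perror("sendto");

    close(fd);
    return 0;
}
```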
PathScale offers a trial program—you can compile and run your application on its 32-node cluster to see what performance gains you can attain. See www.pathscale.com/cbc.php.
In addition, you can test your applications on the Emerald cluster at the AMD Developer Center, which offers 144 dual-socket, dual-core nodes, for a total of 576 2.2GHz Opteron CPUs connected with InfiniPath HTX adapters and a SilverStorm InfiniBand switch.
Tests performed on this cluster have shown excellent scalability at more than 500 processors, including the LS-Dyna three-vehicle collision results posted at www.topcrunch.org. See Table 2 for a listing of the top 40 results of the benchmark. Notice that the only other cluster in the top ten is the Cray XD1, a system with a much higher cost per node.
Table 2. LS-Dyna Three-Vehicle Collision Results, Posted at www.topcrunch.org
Result (lower is better) | Manufacturer | Cluster/Interconnect | Processor | Nodes x CPUs x Cores = Total
---|---|---|---|---
184 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 64 x 2 x 2 = 256
226 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 64 x 2 x 1 = 128
239 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 32 x 2 x 2 = 128 |
239 | Rackable Systems/AMD | Emerald/PathScale InfiniPath/SilverStorm InfiniBand switch | AMD dual-core Opteron 2.2GHz | 64 x 2 x 2 = 256
244 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.4GHz | 64 x 2 x 1 = 128 |
258 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 48 x 2 x 1 = 96 |
258 | Rackable Systems/AMD | Emerald/PathScale InfiniPath/SilverStorm InfiniBand switch | AMD dual-core Opteron 2.2GHz | 64 x 1 x 2 = 128
268 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.4GHz | 48 x 2 x 1 = 96 |
268 | Rackable Systems/AMD | Emerald/PathScale InfiniPath/SilverStorm InfiniBand switch | AMD dual-core Opteron 2.2GHz | 32 x 2 x 2 = 128
280 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 24 x 2 x 2 = 96 |
294 | Rackable Systems/AMD | Emerald/PathScale InfiniPath/SilverStorm InfiniBand switch | AMD dual-core Opteron 2.2GHz | 48 x 1 x 2 = 96
310 | Galactic Computing (Shenzhen) Ltd. | GT4000/InfiniBand | Intel Xeon 3.6GHz | 64 x 2 x 1 = 128 |
315 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 32 x 2 x 1 = 64 |
327 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.4GHz | 32 x 2 x 1 = 64 |
342 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 16 x 2 x 2 = 64 |
373 | Rackable Systems/AMD | Emerald/PathScale InfiniPath/SilverStorm InfiniBand switch | AMD dual-core Opteron 2.2GHz | 32 x 1 x 2 = 64
380 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.2GHz | 32 x 2 x 1 = 64 |
384 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 24 x 2 x 1 = 48 |
394 | Rackable Systems/AMD | Emerald/PathScale InfiniPath/SilverStorm InfiniBand switch | AMD dual-core Opteron 2.2GHz | 16 x 2 x 2 = 64
399 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.4GHz | 24 x 2 x 1 = 48 |
405 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.2GHz | 32 x 2 x 1 = 64 |
417 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 12 x 2 x 2 = 48 |
418 | Galactic Computing (Shenzhen) Ltd. | GT4000/InfiniBand | Intel Xeon 3.6GHz | 32 x 2 x 1 = 64 |
421 | HP | Itanium 2 CP6000/InfiniBand TopSpin | Intel Itanium 2 1.5GHz | 32 x 2 x 1 = 64 |
429 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.2GHz | 32 x 2 x 1 = 64 |
452 | IBM | e326/Myrinet | AMD Opteron 2.4GHz | 32 x 2 x 1 = 64 |
455 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.2GHz | 24 x 2 x 1 = 48
456 | HP | Itanium 2 Cluster/InfiniBand | Intel Itanium 2 1.5GHz | 32 x 2 x 1 = 64 |
480 | PathScale, Inc. | Microway Navion/PathScale InfiniPath/SilverStorm IB switch | AMD Opteron 2.6GHz | 16 x 2 x 1 = 32 |
492 | Appro/Level 5 Networks | 1122Hi-81/Level 5 Networks - 1Gb Ethernet NIC | AMD dual-core Opteron 2.2GHz | 16 x 2 x 2 = 64 |
519 | HP | Itanium 2 CP6000/InfiniBand TopSpin | Intel Itanium 2 1.5GHz | 24 x 2 x 1 = 48 |
527 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 16 x 2 x 1 = 32 |
529 | HP | Opteron CP4000/TopSpin InfiniBand | AMD Opteron 2.6GHz | 16 x 2 x 1 = 32 |
541 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.4GHz | 16 x 2 x 1 = 32 |
569 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 8 x 2 x 2 = 32 |
570 | HP | Itanium 2 Cluster/InfiniBand | Intel Itanium 2 1.5GHz | 24 x 2 x 1 = 48 |
584 | Appro/Rackable/Verari | Rackable and Verari Opteron Cluster/InfiniCon InfiniBand | AMD Opteron 2GHz | 64 x 1 x 1 = 64 |
586 | IBM | e326/Myrinet | AMD Opteron 2.4GHz | 16 x 2 x 1 = 32 |
591 | Self-made (SKIF program)/United Institute of Informatics Problems | Minsk Opteron Cluster/InfiniBand | AMD Opteron 2.2GHz (248) | 35 x 1 x 1 = 35 |
Logan Harbaugh is a freelance reviewer and IT consultant located in Redding, California. He has been working in IT for 20 years and has written two books on networking, as well as articles for most of the major computer publications.