iVEC@UWA supercomputer ushers in new era of data-intensive research
Australia moves a step closer to the top ranks of global supercomputing with The University of Western Australia’s purchase of the “Fornax” supercomputer, allowing scientists to explore new vistas of high-powered data-intensive research.
The supercomputer purchase is part of an $80m Australian Government Super Science Initiative to bolster Australia’s bid for the Square Kilometre Array through the creation of the Pawsey Centre, a petascale supercomputing facility supporting radio astronomy. The Centre will also boost Australian supercomputing resources for data-intensive research in areas such as nanoscience and geoscience, and for other computational communities.
Fornax is Latin for ‘furnace’ and is the name of a southern-hemisphere constellation catalogued by Nicolas Louis de Lacaille in 1756. The name reflects the supercomputer’s capability for tackling data-intensive problems, which is expected to be of particular use to the radio astronomy computing community.
Fornax is the second pathfinder system in the Pawsey Centre project, which will see a purpose-built supercomputing facility constructed at CSIRO’s Australian Resources Research Centre (ARRC) in Kensington. The system will be housed in The University of Western Australia’s Physics Building as part of the iVEC@UWA Facility, and will be managed and operated by WA supercomputing leader iVEC.
The supercomputer system, procured from SGI, comprises 96 nodes, each containing two 6-core Intel Xeon X5650 CPUs, an NVIDIA Tesla C2050 GPU and 48 GB of RAM, giving the system a total of 1152 CPU cores and 96 GPUs.
The system also has a 500 TB global filesystem, and each node has a further 7 TB of local disk space. One distinguishing feature, key to the system’s ability to tackle data-intensive problems, is its two InfiniBand networks: the first gives each node access to the global filesystem, while the second lets nodes access the local disks of neighbouring nodes. This dual-rail design also allows MPI traffic to be separated from storage traffic.
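As a quick sanity check, the headline totals follow directly from the per-node figures quoted above. This short Python sketch (the variable names are illustrative, not iVEC’s; the totals are simple arithmetic rather than vendor-published numbers) tallies them:

```python
# Per-node figures as quoted in the article.
NODES = 96
CPUS_PER_NODE = 2
CORES_PER_CPU = 6        # Intel Xeon X5650 is a 6-core part
GPUS_PER_NODE = 1        # one NVIDIA Tesla C2050 per node
RAM_PER_NODE_GB = 48
LOCAL_DISK_PER_NODE_TB = 7
GLOBAL_FS_TB = 500

# Aggregate totals.
total_cores = NODES * CPUS_PER_NODE * CORES_PER_CPU   # 96 * 2 * 6 = 1152
total_gpus = NODES * GPUS_PER_NODE                    # 96
total_ram_tb = NODES * RAM_PER_NODE_GB / 1024         # 4608 GB = 4.5 TB
total_local_disk_tb = NODES * LOCAL_DISK_PER_NODE_TB  # 672 TB across nodes
total_storage_tb = total_local_disk_tb + GLOBAL_FS_TB # 1172 TB in total

print(total_cores, total_gpus, total_ram_tb, total_storage_tb)
```

The aggregate local disk (672 TB), reachable across nodes over the second InfiniBand rail, is a large part of what suits the system to data-intensive workloads.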
The networking component comprises a Cisco Nexus 7009 switch located at UWA to provide layer 3 services, and two Cisco Nexus 5548 layer 2 switches for connectivity to the front ends of the SGI compute system. Passive 8-channel coarse and dense wavelength-division multiplexers provide multiple 1 Gbps and 10 Gbps connections across a single fibre pair back into the core of the iVEC metropolitan area network at the ARRC facility.
The Nexus 7009 switch includes dual supervisors and M1 linecards for redundancy, and supports a total of 80 SFP- and X2-based 10 GbE interfaces. The Nexus 5548 switches provide a total of 96 10 GbE ports at full line rate, and will be connected via a virtual port channel to the Nexus 7009. The SGI front-end nodes will be dual-connected to the Nexus 5548 switches for redundancy.
The procurement also includes concomitant upgrades to the core of the iVEC network for connectivity to the Nexus 7009, and to increase the bandwidth into iVEC's Petascale Data Store, also located at ARRC.
Professor Andrew Rohl, iVEC Executive Director, said: “Fornax is a machine tailored for data-intensive computing in such areas as radio astronomy and the geosciences.
“The combination of GPUs and fast local disk distributed between neighbouring compute nodes provides a unique system for our data-intensive researchers.”