HPC DLX 2010

UK HPC - Hardware

The Lipscomb HPC Cluster (dlx.uky.edu) was built for UK by Dell Inc. and is rated at just over 40 Teraflops.

Basic Nodes

  • 376 Nodes (4512 cores); ~39.34 Teraflops
  • Intel Xeon X5650 (Westmere) @ 2.66 GHz
  • 2 sockets/node x 6 cores/socket = 12 cores/node
  • 36 GB/node
  • 250 GB local (internal) SAS disk
  • Linux OS

Hi-Mem (Fat) Nodes

  • 8 Nodes (256 cores); ~0.95 Teraflops
  • Intel Xeon X7560 (Nehalem) @ 2.26 GHz
  • 4 sockets/node x 8 cores/socket = 32 cores/node
  • 512 GB/node
  • 1 TB mirrored local (internal) SAS disk
  • Linux OS

Login Nodes

  • 2 Nodes (24 cores)
  • Intel Xeon X5650 (Westmere) @ 2.66 GHz
  • 2 sockets/node x 6 cores/socket = 12 cores/node
  • 36 GB/node
  • 250 GB local (internal) SAS disk
  • Linux OS
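
As a quick sanity check, the per-class figures above can be tallied to reproduce the headline rating. The Python sketch below simply re-derives the totals from the node counts, cores, memory, and Teraflop figures listed above (no Teraflop rating is quoted for the login nodes).

    # Tally the DLX node inventory from the per-class figures above.
    node_classes = {
        # name: (nodes, cores_per_node, mem_gb_per_node, rated_teraflops)
        "basic":  (376, 12,  36, 39.34),
        "hi-mem": (  8, 32, 512,  0.95),
        "login":  (  2, 12,  36,  None),   # no Teraflop rating quoted for login nodes
    }

    total_cores  = sum(n * c for n, c, _, _ in node_classes.values())
    total_mem_tb = sum(n * m for n, _, m, _ in node_classes.values()) / 1024
    rated_tf     = sum(tf for *_, tf in node_classes.values() if tf is not None)

    print(f"Total cores:      {total_cores}")          # 4512 + 256 + 24 = 4792
    print(f"Aggregate memory: {total_mem_tb:.1f} TB")  # about 17.3 TB
    print(f"Rated compute:    {rated_tf:.2f} TF")      # 39.34 + 0.95 = 40.29 TF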

Interconnect

  • Mellanox Quad Data Rate InfiniBand switch
  • 2:1 over-subscription
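
As a rough illustration of what 2:1 over-subscription implies for bandwidth, the sketch below assumes standard 4x QDR link rates (40 Gb/s signalling, about 32 Gb/s of data after 8b/10b encoding); those link-rate figures are generic QDR values, not taken from this page.

    # Rough per-node bandwidth under a 2:1 over-subscribed QDR fabric.
    # Assumed generic 4x QDR link rates (not stated on this page).
    qdr_data_gbps    = 32     # effective data rate per 4x QDR link, Gb/s
    oversubscription = 2      # 2:1 leaf-to-spine over-subscription

    link_GBps       = qdr_data_gbps / 8               # ~4 GB/s from a node to its leaf switch
    worst_case_GBps = link_GBps / oversubscription    # ~2 GB/s if every node crosses the spine at once

    print(f"Per-node link bandwidth:      {link_GBps:.1f} GB/s")
    print(f"Worst-case cross-spine share: {worst_case_GBps:.1f} GB/s")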

Global cluster filesystem

  • Panasas ActiveScale
  • 260 TB raw with 208 TB usable
  • 7.8 GB/s throughput and 79,300 IOPS
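
Two back-of-the-envelope numbers fall out of the filesystem figures: usable capacity is about 80% of raw after parity and overhead, and the aggregate throughput amounts to a fairly small fair share per basic node when all of them stream at once. A quick check:

    # Back-of-the-envelope checks on the Panasas figures above.
    raw_tb, usable_tb = 260, 208
    throughput_GBps   = 7.8
    basic_nodes       = 376

    usable_fraction = usable_tb / raw_tb                      # 0.80 -> ~80% usable
    per_node_MBps   = throughput_GBps * 1024 / basic_nodes    # ~21 MB/s if all basic nodes stream at once

    print(f"Usable fraction:     {usable_fraction:.0%}")
    print(f"Fair-share per node: {per_node_MBps:.0f} MB/s")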

Other Information

  • Fills most of 9 equipment racks
  • Uses about 180 kW when loaded
  • Dedicated TSM/HSM node for fast access to near-line storage
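
Dividing the quoted power draw by the cluster's rating gives a rough power-efficiency figure; both inputs are approximate, so treat the result as an estimate only.

    # Approximate power efficiency (both inputs are nominal figures from this page).
    power_kw = 180
    rated_tf = 40.29   # basic (39.34 TF) + hi-mem (0.95 TF)

    kw_per_tf       = power_kw / rated_tf                    # ~4.5 kW per Teraflop
    gflops_per_watt = (rated_tf * 1000) / (power_kw * 1000)  # ~0.22 GFLOPS per watt

    print(f"{kw_per_tf:.1f} kW per Teraflop, {gflops_per_watt:.2f} GFLOPS per watt")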

Pictures

  • Basic Nodes (front)
  • Basic Nodes (back)
  • Hi-Mem (Fat) Nodes
  • Login and Admin Nodes
  • IB Switches (front)
  • IB Switches (back)
  • Panasas Disk Store
  • Panasas Disk Store (more disks)
  • Racks

859-218-HELP (859-218-4357) 218help@uky.edu
