HPC History at UK
This page is under construction
The University of Kentucky has been a leader in HPC for scientific research and teaching since 1987, and its supercomputers have ranked as high as #66 on the Top500 list of supercomputers worldwide. Our supercomputers have always been available to any faculty member at UK and at any research institution in the state, and our researchers collaborate with scientists around the globe. The UKIT budget for supercomputing has stayed constant at $1.3M per year, but the improvement in price/performance popularly known as Moore's Law has allowed us to increase our computing power at a compound rate of about 75% per year.
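As a rough sanity check on that growth rate, you can compute the compound annual growth between any two machines in the table below. The sketch here uses the SPP1200 (7.68 GFlops peak, first listed on the November 1995 Top500) and the 2010 DLX cluster (47.11 TFlops peak); the choice of endpoints is illustrative, not authoritative.

```python
# Compound annual growth in peak computing power between two UK machines,
# taken from the table below. Endpoints chosen for illustration only.
start_gflops, start_year = 7.68, 1995       # HP/Convex SPP1200/XA-32
end_gflops, end_year = 47_110.0, 2010       # DLX (Lipscomb) cluster, 47.11 TFlops

years = end_year - start_year
rate = (end_gflops / start_gflops) ** (1 / years) - 1
print(f"Compound growth: {rate:.0%} per year")  # prints "Compound growth: 79% per year"
```

That ~79%/year figure over this particular 15-year span is consistent with the roughly 75%/year rate quoted above.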
If you have any information, pictures, or stories to contribute, then please let us know.
UK High Performance Computing Over the Years

Date & Rank¹ | Cores & TFlops² | System

2012, unranked | 4736 cores, ??? TFlops | DLX. Dell PowerEdge C6100 and R710 Cluster.
Basic Nodes: aaa
Hi-Mem 'Fat' Nodes: aaa
Login/Admin Nodes:
Switch Fabric: aaa
Global cluster filesystem: aaa
2010, #259 | 4428 cores, 47.11 TFlops | DLX. Dell PowerEdge C6100 and R710 Cluster. Designated the Lipscomb HPC Cluster after UK alumnus and Nobel Laureate Dr. William N. Lipscomb, Jr.
Basic Nodes: 376 nodes (4512 cores), each with dual Intel Xeon X5650 (Westmere) @ 2.66 GHz (12 cores), 36 GB RAM, 250 GB SAS disk.
Hi-Mem 'Fat' Nodes: 8 nodes (256 cores), each with quad Intel Xeon X7560 (Nehalem) @ 2.66 GHz (32 cores), 512 GB RAM, 1 TB mirrored SAS disk.
Login/Admin Nodes: 2 login and 2 admin nodes (48 cores), each with Intel Xeon X5650 (Westmere) @ 2.66 GHz (12 cores), 36 GB RAM, 250 GB SAS disk.
Switch Fabric: QDR InfiniBand
Global cluster filesystem: Panasas ActiveScale, 260 TB raw, 208 TB usable, 7.8 GB/s throughput, 79,300 IOPS.
16.32 TFlops | BCX. IBM BladeCenter HS21 Cluster, dual-core Xeon 3.0 GHz, InfiniBand.
1.741 TFlops | SDX. HP Cluster Platform 3000 DL140, Xeon 3.4 GHz, Myrinet.
1.32 TFlops | HP Integrity Superdome, 1.5 GHz Madison CPUs, HyperFabric. UK's first computer over a teraflop. Cluster of four large shared-memory computers, one of which was partitioned (4+28, 64, 64, and 64 CPUs). In the November 2003 Top500 list it ranked #117 with an R/peak rating of 1049/1320 gigaflops.
672 GFlops | HP Superdome 750 MHz/HyperPlex. Cluster of four large shared-memory computers (32, 64, 64, and 64 CPUs). In the June 2002 Top500 list it ranked #109 with R/peak ratings of 431.7/672 gigaflops. In November 2002 it ranked #141, and in June 2003, #238.
168 GFlops | HP N4000 440 MHz/HyperPlex. UK's first supercomputing cluster: twelve 8-CPU shared-memory computers. In the June 2000 Top500 list it ranked #201 with R/peak ratings of 63/168 gigaflops. In November 2000 it ranked #399.
46.08 GFlops | HP/Convex Exemplar X-Class. Shared-memory architecture. In the June 1998 Top500 list it ranked #185 with R/peak ratings of 27.56/46.08 gigaflops. In November 1998 it was #249, and in June 1999, #407.
7.68 GFlops | HP/Convex SPP1200/XA-32. UK's first central computer dedicated to supercomputing. Shared-memory architecture. In the November 1995 Top500 list it ranked #303 with R/peak ratings of 3.72/7.68 gigaflops. In June 1996 it was #409.
828 MFlops | IBM 3090/600J mainframe, 6 CPUs + 6 vector units, 512 MB main memory, 512 MB extended memory, additional disk. Shared scientific and administrative computing.
348 MFlops | IBM 3090/300E mainframe, 3 CPUs + 3 vector units, 64 MB main memory, 64 MB extended memory, 23 GB disk. Shared scientific and administrative computing.
¹ Initial rank in the Top500 list.
² Peak TFlops.