HPC FAQ - GPUs

Supercomputer documentation is always a work in progress! Please email questions, corrections, or suggestions to the HPC support team at help-hpc@uky.edu as usual. Thanks!

Please don't run non-GPU code on the GPU nodes!

There are four GPU-enabled nodes on the DLX supercomputing cluster. The nodes are identical to the basic compute nodes (12 cores with 36 GB of RAM), except that each node has four Nvidia M2070 GPUs attached. GPU-enabled code often runs many times faster on a GPU than the equivalent CPU-only code.

The limit on the GPU queue is one day (24 hours).

If you do not put a time limit on jobs submitted to the GPU queue, they will wait in the queue forever!
Add #SBATCH -t 24:00:00 to your batch job script before submitting it.

Frequently Asked Questions

1. How do I run a job in the GPU queue?

To run a job with GPU-enabled code, put this SBATCH option in your job script:

#SBATCH --partition=GPU

Or add the partition flag to the sbatch command:

sbatch -n12 -pGPU aaa.sh

One or the other is enough; you don't need to do both.
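
For example, a complete job script for the GPU queue might look like the following sketch (the script name aaa.sh and the program name my_gpu_program are placeholders; adjust the core count and time limit for your job):

#!/bin/bash
#SBATCH --partition=GPU
#SBATCH -n 12
#SBATCH -t 24:00:00

# Run the GPU-enabled executable (placeholder name)
./my_gpu_program

Submit it with sbatch aaa.sh.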

2. How do I use Amber with GPUs?

Only the PMEMD module in Amber 11 is GPU-enabled, but the Amber sample jobs that CCS tested ran much faster when using the GPUs.

See the page Amber on GPUs for information on running Amber on the GPU nodes.
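
As a rough sketch only (the module name amber/11 is an assumption; check module avail for the exact name on the DLX, and see the Amber on GPUs page for the supported procedure), a PMEMD job script for the GPU queue might look like:

#!/bin/bash
#SBATCH --partition=GPU
#SBATCH -t 24:00:00

# Load Amber 11 (assumed module name; verify with "module avail")
module load amber/11

# pmemd.cuda is the GPU-enabled PMEMD executable; input/output file names below are placeholders
pmemd.cuda -O -i mdin -o mdout -p prmtop -c inpcrd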

3. How do I use NAMD with GPUs?

This information will be coming soon.

4. Can I write my own GPU code?

If you are interested in GPU-enabling your own code, see the extensive Nvidia GPU developer information at http://developer.nvidia.com/gpu-computing-sdk.

Note that the "SDK" is a misnomer; this is mostly sample code. The Toolkit is the development environment, which you establish by loading the CUDA module (module load cuda).
