HPC README FIRST 2013-01-06

This page is obsolete

For current information and updates on the cluster, see the HPC News Blog.

Accounts

  • Log in to the DLX cluster by connecting with ssh to dlx.uky.edu, using your Link-Blue userid and password (see the example after this list).
  • If you don't have a Link-Blue account, contact the IT Customer Service Center using the information at the bottom of this page.
  • If you have not changed your default Link-Blue password, please change it immediately. You can use the UK Account Manager to do this.
  • After you authenticate the first time, you may set up an ssh public key to authenticate thereafter, or you may continue to enter your Link-Blue password each time you log on.
  • Please use a .forward file to forward email from the login node to an email address that you read regularly. This may be the only way we can contact you about problems with your jobs, the file system, and other issues.
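
A minimal sketch of these first steps, assuming a standard OpenSSH client on your workstation (ssh-keygen and ssh-copy-id are standard OpenSSH tools, not site-specific commands):

    # Log in with your Link-Blue userid (replace "userid" throughout)
    ssh userid@dlx.uky.edu

    # Optional: create a key pair on your workstation and copy the public
    # key to the cluster so later logins can use it instead of a password
    ssh-keygen -t rsa
    ssh-copy-id userid@dlx.uky.edu

    # On the login node, forward mail to an address you read regularly
    echo "yourname@example.com" > ~/.forward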

General information

  • To see important system-related notices when logged on to the cluster, use the sysstatus command.
  • For current information on the installation of the new cluster, go to the New Cluster Announcement page.
  • The DLX cluster basic nodes have 12 cores and 36 GB of memory each. If your memory requirements for non-distributed code exceed that, you can run on one of the Hi-Mem nodes. The Hi-Mem nodes have 32 cores and 512 GB each.
  • The default disk quotas for the home directory are 700 GB (soft) and 800 GB (hard) and are subject to change. To display your HOME disk quota, run the quota command (see the examples after this list).
  • There are no quotas on your scratch directory (/scratch/userid), but remember the scratch areas are not backed up.
  • Please use both HOME and scratch disk space wisely. The HOME directories are currently over-subscribed in terms of quota (the sum of all user quotas exceeds the capacity of the file system), and everyone on the cluster shares the scratch space.
  • The emacs and vim (vi) editors are available. Enter vimtutor for a short tutorial introducing vim; see also the vim User Manual.
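
A few commands for checking system status and disk usage once you are logged in (du is a standard Linux utility; the scratch path follows the /scratch/userid convention above):

    # Show important system notices
    sysstatus

    # Show your HOME quota and current usage
    quota

    # Scratch has no quota, but you can check how much you are using
    du -sh /scratch/userid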

Compiling

  • To compile Fortran 77 or 90 programs with the Intel compiler, use the ifort command (see the examples after this list).
  • To compile C or C++ programs with the Intel compiler, use the icc command.
  • To compile with the Open MPI libraries, use mpif90 or mpicc instead.
  • For more information on the Intel Compilers, see the Intel Compiler documentation.
  • For information on the GNU compilers, see the GNU Compiler documentation.
  • To link with the IMKL, use -L$(INTEL_MKL_LIBS) and -lmkl in your makefile.
  • Optimized versions of BLAS, LAPACK, BLACS and ScaLAPACK are available through the Intel MKL libraries (IMKL). See the Intel Library documentation for details.
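
A sketch of typical compile and link commands, assuming the Intel and Open MPI environments are already set up in your session (the source and program names are placeholders):

    # Serial Fortran and C programs with the Intel compilers
    ifort -O2 -o myprog myprog.f90
    icc -O2 -o myprog myprog.c

    # MPI programs, using the Open MPI compiler wrappers
    mpif90 -O2 -o mympi mympi.f90
    mpicc -O2 -o mympi mympi.c

    # Makefile fragment linking against the IMKL
    # (INTEL_MKL_LIBS is the make variable named above)
    myprog: myprog.o
            ifort -o myprog myprog.o -L$(INTEL_MKL_LIBS) -lmkl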

Batch jobs

  • The batch scheduler is Moab+SLURM.
  • Sample jobs for the cluster are in the /share/cluster/examples directory.
  • To run Gaussian jobs, use the batchg09 command.
  • Use sbatch scriptfile to submit a job, where scriptfile is the name of your job script (see the sample script after this list).
  • The srun command is no longer supported. Instead of srun -b, use sbatch.
  • Use squeue to check the job queues. For example: squeue -u userid
  • Use scancel job_id to terminate a batch job.
  • Per-user compute node quotas are:
    16 nodes when other eligible jobs are waiting.
    32 nodes when no other eligible jobs are waiting.
    Node quotas are subject to change and group node quotas may also apply.
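
A minimal sketch of a batch job script and the commands around it; the node count, tasks per node, and time limit are illustrative values you would adjust for your own job, and the MPI program name assumes the build shown in the Compiling section:

    #!/bin/bash
    #SBATCH --job-name=mympi           # job name shown in squeue
    #SBATCH --nodes=2                  # basic nodes have 12 cores each
    #SBATCH --ntasks-per-node=12       # MPI ranks per node
    #SBATCH --time=01:00:00            # wall-clock limit (hh:mm:ss)

    mpirun ./mympi

Save the script as myjob.sh, then submit and monitor it:

    sbatch myjob.sh
    squeue -u userid
    scancel job_id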

859-218-HELP (859-218-4357) 218help@uky.edu
