
Using the HPCL Cluster

The primary way of interfacing with the cluster is secure shell (SSH) via the login server ssh://hslinux.salisbury.edu. Users on Windows should use the PuTTY tool to SSH into the login server, whereas users on Linux should use the ssh terminal command. This login server is only accessible on the Henson LAN and is restricted to students and faculty who have been granted Henson Linux accounts. The Henson HPC cluster uses the SchedMD Slurm program to manage resources and execute programs across the cluster. It is recommended to run programs that are Open MPI aware to perform computation across the system, using the sbatch or srun utility supplied by Slurm. Faculty and students (with faculty permission) may request access to both the Linux system and the Slurm scheduler by submitting a ticket.
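As a minimal sketch (the account name below is a placeholder, not a real username), a typical session connects to the login server and checks the state of the cluster before submitting any work:

# Connect to the login server with your Henson Linux account
ssh username@hslinux.salisbury.edu

# Once logged in, verify that the Slurm tools are available and view node/partition status
sinfo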

MPI Configuration

Both Open MPI v3 and MPICH v3 are supported through environment modules. Running the module avail command lists all available environment modules:

---------------------------------------------------- /usr/share/Modules/modulefiles -----------------------------------------------------

dot module-git module-info modules null use.own

----------------------------------------------------------- /etc/modulefiles ------------------------------------------------------------

mpi/mpich-3.0-x86_64 mpi/mpich-3.2-x86_64 mpi/mpich-x86_64 mpi/openmpi3-x86_64 mpi/openmpi-x86_64

To load a module, run module load modulename; for example, to load Open MPI into your environment, run module load mpi/openmpi3-x86_64. More info on the module command is available here. A comprehensive tutorial and book recommendation list on using MPI is available courtesy of Wes Kendall.
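For example, assuming the module names listed above (they may change as the cluster is updated), a typical workflow loads an MPI module and verifies it before compiling or running anything:

# Load the Open MPI 3 implementation into the current shell environment
module load mpi/openmpi3-x86_64

# Confirm the loaded modules and the active MPI version
module list
mpirun --version

# Switch to MPICH instead, if desired
module unload mpi/openmpi3-x86_64
module load mpi/mpich-3.2-x86_64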

Python users can install the mpi4py package by loading the module corresponding to the desired MPI implementation and then running pip2 install mpi4py --user for Python 2.7 or pip3 install mpi4py --user for Python 3.6.
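The following sketch shows the full install sequence for Python 3, followed by a quick two-rank sanity check run through Slurm (Open MPI is assumed here; substitute the MPICH module if preferred):

# Load the MPI implementation that mpi4py should be built against
module load mpi/openmpi3-x86_64

# Install mpi4py into your home directory for Python 3
pip3 install mpi4py --user

# Sanity check: launch two ranks that each print their rank number
srun --ntasks=2 python3 -c "from mpi4py import MPI; print(MPI.COMM_WORLD.Get_rank())"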

Cluster Scheduling

An example SLURM batch script:

#!/bin/bash
#SBATCH --job-name=myproject
#SBATCH --ntasks=30
# Number of MPI ranks (the MPI world size)
#SBATCH --mem=2gb
# Memory limit per node
#SBATCH --time=00:05:00
# Time limit in the form hh:mm:ss
#SBATCH --output=out/%j.log
# Log file path; %j expands to the job ID

# Load the MPI implementation before launching the program
module load mpi/openmpi3-x86_64

# Launch the Python MPI program across the allocated tasks
srun python3 ~/Projects/myproject/myproject.py

This script is designed to run an MPI program written with mpi4py. It has a maximum runtime of 5 minutes, a name of "myproject," a maximum memory usage of 2 GB per node, and it writes a log file to the out subdirectory of the project directory. The ntasks parameter specifies that the world size for the MPI task will be 30; however, the scheduler is free to choose which CPU cores the tasks map to.

This script runs a Python 3 MPI program located at Projects/myproject/myproject.py under the user's home directory. The --ntasks=30 directive specifies that we want 30 instances of our program, without regard to how those tasks are distributed across nodes.
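Assuming the script above is saved as myproject.sbatch (the filename is arbitrary), it can be submitted and monitored as follows. Note that Slurm will not create the out directory for the log file, so it should exist before the job is submitted:

# Create the log directory if it does not already exist
mkdir -p out

# Submit the batch script to the scheduler
sbatch myproject.sbatch

# Check the state of your queued and running jobs
squeue -u $USER

# Job output lands in out/, in a file named after the job ID (%j)
ls out/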

By changing the ntasks directive to the following, SLURM will allocate 12 MPI ranks (which are mapped 1:1 to CPU cores except in the case of oversubscription) on each of 4 compute nodes, for a total of 48 cores.

#SBATCH --nodes=4
#SBATCH --ntasks-per-node=12

The HPC cluster can also be used to run multithreaded applications that are not cluster aware. Allocations for standard multithreaded applications should use the --cpus-per-task=N and --ntasks flags to tell the scheduler that each executable (task) will use N cores.
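As an illustrative sketch (the job name, program path, and resource sizes are placeholders), a batch script for a single multithreaded executable using 8 cores on one node might look like this:

#!/bin/bash
#SBATCH --job-name=mythreads
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=2gb
#SBATCH --time=00:05:00
#SBATCH --output=out/%j.log

# Tell OpenMP-based programs how many threads the allocation provides
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# Run the single multithreaded task on its allocated cores
srun ~/Projects/mythreads/mythreads

Here --ntasks=1 requests a single task and --cpus-per-task=8 reserves 8 cores for it, which also ensures all 8 cores land on the same node.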