
Cyfronet

Applying for a grant and access to the cluster

  1. Apply for a grant and activate it at https://portal.plgrid.pl/services
    • Note: request the service “Access to the Ares cluster at the Cyfronet centre” [grant required]

Login to Ares

$ ssh <login>@ares.cyfronet.pl

<login> = username starting with “plg”

TODO: add information about SSH key setup
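
In the meantime, a standard OpenSSH key-based setup looks roughly like this (a sketch; the key file name is an arbitrary choice, and `<login>` is your plg… username):

```shell
# Create the key directory if needed and generate an ed25519 key pair.
# -N "" (empty passphrase) only keeps this sketch non-interactive;
# use a real passphrase in practice.
mkdir -p "$HOME/.ssh"
keyfile="$HOME/.ssh/id_ed25519_ares"
[ -f "$keyfile" ] || ssh-keygen -q -t ed25519 -N "" -f "$keyfile"

# Install the public key on Ares (asks for your account password once),
# then log in without a password:
#   ssh-copy-id -i "$keyfile.pub" <login>@ares.cyfronet.pl
#   ssh -i "$keyfile" <login>@ares.cyfronet.pl
```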

Storage

TODO: add how to share files

Install libraries/packages using module

$ module avail

Lists all available libraries/packages

$ module list

Lists loaded libraries/packages

$ module load <package>

Loads specified package

$ module spider <keyword>

Searches available packages with the keyword

$ module purge

Purges the environment
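
For example, the commands above combine into a typical session for locating and loading a GCC toolchain (a sketch; the module name is taken from the Herwig7 install steps below, and the snippet is guarded so it is a no-op on machines without the module system):

```shell
# `module` is usually a shell function provided by Lmod/Environment Modules,
# so detect it with `type` rather than a file lookup.
if type module >/dev/null 2>&1; then
    module purge             # start from a clean environment
    module spider gcc        # find available GCC versions
    module load gcc/11.2.0   # load a specific version
    module list              # confirm what is loaded
else
    echo "module command not available outside the cluster"
fi
```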

System utilities

$ hpc-grants

Shows available grants, resource allocations, and consumed resources

$ hpc-fs

Shows available storage

$ hpc-jobs

Shows currently pending/running jobs

$ hpc-jobs-history

Shows information about past jobs

Install Herwig7

  1. Load necessary modules:
        module load python/2.7.18-gcccore-11.2.0
        module load libtool
        module load emacs
        module load gcc/11.2.0
        module load gcccore/11.2.0
        module load cmake/3.22.1-gcccore-11.2.0
        module load gsl/2.7-gcc-11.2.0
         
  2. Move to $SCRATCH or $PLG_GROUPS_STORAGE (preferred):
     $ cd $PLG_GROUPS_STORAGE/<groupname> 
  3. Download bootstrap script and make it executable:
     $ wget https://herwig.hepforge.org/downloads/herwig-bootstrap
     $ chmod +x herwig-bootstrap 
  4. Start an interactive node session (easier to debug if the installation fails):
     $ srun -p plgrid-now -t 5:00:00 --mem=5GB -A <grantname-cpu> --pty /bin/bash 

    Note: Only one interactive plgrid-now session can run at a time. Adjust the session duration and memory allocation as needed; the parameters above should generally suffice.

  5. Run bootstrap and exit session when completed:
    [ares][username@ac0787 ~]$ ./herwig-bootstrap -j4 $PWD/Herwig
    [ares][username@ac0787 ~]$ exit 

    Note: the hostname in the prompt changes from login01 to ac0xxx once you are on a compute node.

  6. More options for running bootstrap can be found with
     $ ./herwig-bootstrap --help 
  7. Activate the Herwig environment:
     $ source Herwig/bin/activate 
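
After sourcing the activate script, a quick sanity check confirms the installation is usable (a sketch, guarded so it is harmless off the cluster; that `Herwig --version` prints the installed version is an assumption):

```shell
# After `source Herwig/bin/activate`, the Herwig executable should be on PATH.
if command -v Herwig >/dev/null 2>&1; then
    Herwig --version   # assumption: prints the installed Herwig version
else
    echo "Herwig not on PATH; run 'source Herwig/bin/activate' first"
fi
```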

A sample run script (and common slurm commands)

Run script (run.sh):

     #!/bin/bash
     #SBATCH --job-name=Sample_Run
     #SBATCH --nodes=1
     #SBATCH --ntasks-per-node=1
     #SBATCH --cpus-per-task=1
     #SBATCH --mem-per-cpu=1GB
     # set wall time to 1 day
     #SBATCH --time=1-00:00:00
     ## Name of partition
     #SBATCH -p plgrid
     cd <Run directory>
     source <path/to/>Herwig/bin/activate
     export HW=Herwig
     # Derive the random seed from the SLURM array task ID (0 outside array jobs);
     # without this, $seed below would be empty.
     seed=${SLURM_ARRAY_TASK_ID:-0}
     ${HW} run *.run --seed=$seed -N 1000000 > /dev/null &
     HWPID="$!"
     wait "${HWPID}"
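
When the script is submitted as a job array (see the next section), each task receives its own SLURM_ARRAY_TASK_ID, which can be mapped to a unique random seed. A minimal sketch of that mapping (the offset is a hypothetical base value, useful to avoid reusing seeds from a previous campaign):

```shell
# Each array task receives a distinct SLURM_ARRAY_TASK_ID (0, 1, 2, ...).
# Outside of SLURM the variable is unset, so fall back to task 0.
task_id=${SLURM_ARRAY_TASK_ID:-0}

# Hypothetical offset so seeds never collide with an earlier run.
seed_offset=1000
seed=$((seed_offset + task_id))

echo "task ${task_id} uses seed ${seed}"
```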

Run commands from CLI

  1. Submit parallel jobs:
    $ sbatch --array=0-5 -A <grantname-suffix> run.sh
    • Suffix = cpu, cpu-bigmem, gpu
  2. sprio: show status of pending jobs
  3. sshare: show usage of available grants
  4. Run interactive bash session:
    $ srun -p plgrid-now -t 1:00:00 --mem=1GB -A <grantname-cpu> --pty /bin/bash
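
Grant allocations are accounted in CPU-hours, so it helps to estimate an array job's worst-case cost before submitting. A back-of-the-envelope calculation for the six-task array above, using the resources requested in run.sh:

```shell
tasks=6          # sbatch --array=0-5 launches six tasks
cpus_per_task=1  # from #SBATCH --cpus-per-task=1
hours=24         # from #SBATCH --time=1-00:00:00

# Worst case: every task runs for the full wall time.
cpu_hours=$((tasks * cpus_per_task * hours))
echo "worst-case usage: ${cpu_hours} CPU-hours"
```

Compare the result against the remaining allocation reported by `hpc-grants`.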