====== Cyfronet ======
  
==== Applying for grant and access to Cluster ====
  
  - Sign up at https://portal.plgrid.pl/
  - Apply for a grant and activate it at https://portal.plgrid.pl/services
    * **Note:** Access to the Ares cluster at the Cyfronet centre requires an active grant.
  
==== Login to Ares ====
  
''$ ssh <login>@ares.cyfronet.pl''
  
where ''<login>'' is your PLGrid username (it starts with "plg")
  
''TODO: add info about SSH key setup''
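
Until that note is written, here is a minimal sketch of a typical OpenSSH key setup (standard ''ssh-keygen''/''ssh-copy-id'' usage; some PLGrid services may additionally require registering the key through the portal):
<code>
# on your local machine: generate a key pair (one-time)
$ ssh-keygen -t ed25519

# copy the public key to Ares so you can log in without a password
$ ssh-copy-id <login>@ares.cyfronet.pl

# test the key-based login
$ ssh <login>@ares.cyfronet.pl
</code>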
==== Storages ====
  
  * $HOME
  * $SCRATCH
  * $PLG_GROUPS_STORAGE (starts with "plgg")
  
''TODO: add how to share files''
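
Until that note is written, here is a minimal sketch of sharing a directory with group members using standard POSIX permissions and ACLs (the group and user names below are placeholders; your team may use a different convention):
<code>
$ cd $PLG_GROUPS_STORAGE/<groupname>
$ mkdir shared
$ chmod g+rwxs shared                        # group-writable; setgid so new files keep the group
$ setfacl -R -m u:plgotheruser:rX shared     # optionally grant one extra user read access
</code>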
==== Install libraries/packages using module ====
<code>module avail</code> Lists all available libraries/packages
<code>module list</code> Lists currently loaded libraries/packages
<code>module load <package></code> Loads the specified package
<code>module spider <keyword></code> Searches available packages matching the keyword
<code>module purge</code> Unloads all loaded modules (purges the environment)
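
For example, to find and load a specific GSL build (the module name here is the one used in the Herwig7 instructions below; check the ''module spider'' output for what is actually available):
<code>
$ module spider gsl                  # find available GSL versions
$ module load gsl/2.7-gcc-11.2.0     # load a specific build
$ module list                        # confirm the loaded modules
$ module purge                       # reset to a clean environment
</code>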
  
==== System utilities ====
<code>hpc-grants</code> Shows available grants, resource allocations, consumed resources
<code>hpc-fs</code> Shows available storage
<code>hpc-jobs</code> Shows currently pending/running jobs
<code>hpc-jobs-history</code> Shows information about past jobs
  
==== Install Herwig7 ====
  
  - Load necessary modules: <code>
    module load python/2.7.18-gcccore-11.2.0
    module load libtool
    module load emacs
    module load gcc/11.2.0
    module load gcccore/11.2.0
    module load cmake/3.22.1-gcccore-11.2.0
    module load gsl/2.7-gcc-11.2.0
    </code>
  - Move to $SCRATCH or $PLG_GROUPS_STORAGE (preferred): <code> $ cd $PLG_GROUPS_STORAGE/<groupname> </code>
  - Download the bootstrap script and make it executable: <code> $ wget https://herwig.hepforge.org/downloads/herwig-bootstrap
 $ chmod +x herwig-bootstrap </code>
  - Start an interactive node session (easier to debug if the installation fails): <code> $ srun -p plgrid-now -t 5:00:00 --mem=5GB -A <grantname-cpu> --pty /bin/bash </code> **Note**: Only one interactive plgrid-now session can run at a time. Adjust the session duration and memory allocation as required; the parameters above should generally suffice.
  - Run the bootstrap and exit the session when it completes: <code>[ares][username@ac0787 ~]$ ./herwig-bootstrap -j4 $PWD/Herwig
[ares][username@ac0787 ~]$ exit </code> **Note**: The login01 prompt changes to ac0xxx when you are on a computing node.
  - More options for running the bootstrap are listed by <code> $ ./herwig-bootstrap --help </code>
  - Activate the Herwig environment: <code> $ source Herwig/bin/activate </code>
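
As a quick sanity check after activation, the standard Herwig read/run workflow can be exercised on one of the shipped example input files (a sketch only; the exact location of ''LHC.in'' under the bootstrap prefix may differ):
<code>
$ Herwig --version                         # confirm the Herwig binary is on PATH
$ cp Herwig/share/Herwig/LHC.in .          # copy a shipped example input file
$ Herwig read LHC.in                       # produces LHC.run
$ Herwig run LHC.run -N 100 --seed=12345   # generate a few test events
</code>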
  
==== A sample run script (and common slurm commands) ====
Run script (run.sh):
  
  #!/bin/bash
  #SBATCH --job-name=Sample_Run
  #SBATCH --nodes=1
  #SBATCH --ntasks-per-node=1
  #SBATCH --cpus-per-task=1
  #SBATCH --mem-per-cpu=1GB
  # set wall time to 1 day
  #SBATCH --time=1-00:00:00
  ## Name of partition
  #SBATCH -p plgrid
  cd <Run directory>
  source <path/to/>Herwig/bin/activate
  # seed each task differently (assumes submission with --array, see below)
  seed=${SLURM_ARRAY_TASK_ID:-1}
  export HW=Herwig
  ${HW} run *.run --seed=$seed -N 1000000 > /dev/null &
  HWPID="$!"
  wait "${HWPID}"
  
  
==== Run commands from CLI ====
  - Submit parallel jobs: <code>$ sbatch --array=0-5 -A <grantname-suffix> run.sh</code>
    * Suffix = cpu, cpu-bigmem, gpu
  - ''sprio'': show scheduling priority of pending jobs
  - ''sshare'': show usage of available grants
  - Run an interactive bash session: <code>$ srun -p plgrid-now -t 1:00:00 --mem=1GB -A <grantname-cpu> --pty /bin/bash</code>
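
To monitor or cancel jobs after submission, the standard Slurm commands work alongside the ''hpc-*'' utilities above (the job ID below is a placeholder):
<code>
$ squeue -u $USER          # list your pending/running jobs
$ scancel 12345678         # cancel a whole job, or a single array task with 12345678_3
</code>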
  
==== More Links ====
  - Detailed information about Ares - [[https://docs.cyfronet.pl/display/~plgpawlik/Ares#Ares-AccesstoAres]]
  - Slurm documentation - [[https://slurm.schedmd.com/quickstart.html]]
  