Resources

Production-ready templates and curated community tools.

Copy-and-paste-ready scripts and configuration examples, plus hand-picked GitHub repositories from the SLURM ecosystem, to accelerate your HPC workflows.

5 Premium Templates · 6 Community Tools · 1,975 GitHub Stars · Open Source

Premium Templates

In-App · Job Scripts · Beginner · Template

Basic sbatch Template

A starter template for submitting batch jobs with common resource requests and output handling.

bash
#!/bin/bash
#SBATCH --job-name=my_job
#SBATCH --output=output_%j.log
#SBATCH --error=error_%j.log
#SBATCH --time=01:00:00
#SBATCH --partition=compute
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G

# Load required modules
module load python/3.11

# Run your application
python train.py --epochs 100
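
The %j placeholder in the --output and --error directives expands to the numeric job ID at submission time, so every run gets its own log files. A quick sketch of the resulting names (job_id is a stand-in here; inside a real job SLURM provides $SLURM_JOB_ID):

```bash
# %j in the #SBATCH directives is replaced by the numeric job ID.
# job_id below is a stand-in value for illustration only.
job_id=12345
out_log="output_${job_id}.log"
err_log="error_${job_id}.log"
echo "stdout -> $out_log, stderr -> $err_log"
```

Submit the script with sbatch my_job.sh; squeue --me then shows the assigned job ID the logs will be named after.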

Job Scripts · Intermediate · Template

GPU Job Script

Template for GPU-accelerated jobs with CUDA environment setup and multi-GPU support.

bash
#!/bin/bash
#SBATCH --job-name=gpu_training
#SBATCH --output=gpu_%j.log
#SBATCH --partition=gpu
#SBATCH --gres=gpu:a100:2
#SBATCH --time=24:00:00
#SBATCH --mem=64G

# GPU setup: when --gres allocates GPUs, SLURM's gres plugin sets
# CUDA_VISIBLE_DEVICES for the job; do not override it manually, or you
# may touch GPUs assigned to other jobs on the same node.
echo "Allocated GPUs: $CUDA_VISIBLE_DEVICES"

# Load CUDA modules
module load cuda/12.1
module load cudnn/8.9

# Run GPU training
python train.py --gpus 2
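
When --gres=gpu:a100:2 grants two devices, the job sees a zero-based device list. A rough sketch of how that list is shaped for N GPUs (ngpus is an assumed stand-in; a real job can read $SLURM_GPUS_ON_NODE):

```bash
# Build the 0-based device list a job with ngpus allocated GPUs would see.
# ngpus is a stand-in; inside a job, SLURM sets $SLURM_GPUS_ON_NODE.
ngpus=2
devices=$(seq -s, 0 $((ngpus - 1)))
echo "CUDA_VISIBLE_DEVICES=$devices"
```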

Monitoring · Intermediate · Script

GPU Utilization Monitor

Real-time GPU monitoring script that logs utilization, memory usage, and temperature.

python
#!/usr/bin/env python3
import subprocess
import time
import datetime

def get_gpu_stats():
    result = subprocess.run([
        'nvidia-smi',
        '--query-gpu=index,utilization.gpu,memory.used,temperature.gpu',
        '--format=csv,noheader,nounits'
    ], capture_output=True, text=True, check=True)

    for line in result.stdout.strip().split('\n'):
        # Fields arrive as "0, 87, 34122, 71"; strip the padding spaces
        idx, util, mem, temp = (v.strip() for v in line.split(','))
        print(f"[{datetime.datetime.now()}] GPU{idx}: {util}% util, {mem}MB mem, {temp}°C")

if __name__ == '__main__':
    try:
        while True:
            get_gpu_stats()
            time.sleep(10)
    except KeyboardInterrupt:
        pass
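
With --format=csv,noheader,nounits, nvidia-smi emits lines like "0, 87, 34122, 71" with a space after each comma. When post-processing the monitor's log in the shell, the fields can be split like this (the sample values are assumptions for illustration):

```bash
# Split one sample line in the CSV format the monitor queries:
# index, utilization.gpu, memory.used, temperature.gpu (sample values).
sample="0, 87, 34122, 71"
IFS=', ' read -r idx util mem temp <<< "$sample"
echo "GPU${idx}: ${util}% util, ${mem}MB mem, ${temp}C"
```

Redirecting the monitor's output to a file (e.g. python gpu_monitor.py >> gpu.log) keeps a utilization history for the whole job.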

Job Scripts · Intermediate · Template

Job Array Template

Submit multiple similar jobs with a single script using SLURM job arrays.

bash
#!/bin/bash
#SBATCH --job-name=array_job
#SBATCH --output=output_%A_%a.log
#SBATCH --array=1-100
#SBATCH --time=00:30:00
#SBATCH --mem=4G

# Process task ID
echo "Processing task $SLURM_ARRAY_TASK_ID"

# Run with task-specific parameter
python process.py --id $SLURM_ARRAY_TASK_ID
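
Each task in the 1-100 array sees its own index in $SLURM_ARRAY_TASK_ID, which is typically mapped to an input file or parameter; in --output, %A is the array's master job ID and %a the task index, so every task logs separately. A sketch of the mapping with the ID hard-coded (the data_N.csv naming is an assumption for illustration):

```bash
# Map an array task ID to one input file, the usual job-array pattern.
# SLURM_ARRAY_TASK_ID is set by hand here; SLURM sets it inside each task.
SLURM_ARRAY_TASK_ID=7
input_file="data_${SLURM_ARRAY_TASK_ID}.csv"
echo "Task $SLURM_ARRAY_TASK_ID processes $input_file"
```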

Configuration · Advanced · Config

Partition Configuration Example

Sample slurm.conf partition definitions for CPU, GPU, and high-memory nodes.

conf
# CPU Partition
PartitionName=compute Nodes=node[01-20] Default=YES MaxTime=7-00:00:00 State=UP

# GPU Partition
PartitionName=gpu Nodes=gpu[01-08] MaxTime=3-00:00:00 State=UP

# High Memory Partition
PartitionName=highmem Nodes=mem[01-04] MaxTime=2-00:00:00 State=UP

# QOS configuration: QOS records are not defined in slurm.conf; create them
# with sacctmgr (requires accounting via slurmdbd), e.g.:
#   sacctmgr add qos normal
#   sacctmgr modify qos normal set Priority=50 GrpTRES=cpu=1000
#   sacctmgr add qos high
#   sacctmgr modify qos high set Priority=100 GrpTRES=cpu=200
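
The bracketed ranges in Nodes=node[01-20] are SLURM hostlist expressions; on a live cluster, scontrol show hostnames expands them. A bash brace-expansion sketch of the same 20 zero-padded names:

```bash
# Expand node[01-20]-style names with bash brace expansion (zero-padded).
nodes=(node{01..20})
echo "count=${#nodes[@]} first=${nodes[0]} last=${nodes[19]}"
```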

Want more resources?

Join early access to request new templates, suggest community resources, and get notified when we expand the library with advanced configurations and integrations.