
HPC Orientation for Biologists

Prerequisites

  • RCAC cluster account (apply here)
  • Computer with internet connection
  • Programs: a web browser (for Open OnDemand and Globus) and a terminal client (Terminal, PuTTY, or PowerShell)

What you will learn

  • Connect to any RCAC cluster via OOD, SSH, or ThinLinc
  • Find and load bioinformatics software
  • Submit interactive and batch jobs with SLURM
  • Understand storage tiers and monitor usage
  • Transfer data with Globus and rsync

This guide covers the day-one essentials for biologists using RCAC clusters. It uses Gautschi as the example cluster throughout, but the same patterns apply to Negishi, Bell, and Gilbreth. Just swap the cluster name in URLs and paths.

By the end, you will be able to log in, load software, submit a job, check your storage, and move data to and from the cluster.

An HPC cluster is not a single computer. It is a collection of networked machines that work together, managed by a job scheduler. Here is the typical workflow:

How an HPC cluster works: you log in to a head node, submit jobs through the SLURM scheduler, and your work runs on compute nodes that share a common filesystem
  1. You log in to a frontend (head/login) node via the internet.
  2. You write a job script describing the resources you need and submit it.
  3. SLURM, the scheduler, places your job in a queue and waits for resources.
  4. When resources are available, SLURM runs your job on one or more compute nodes.
  5. All nodes read and write to a shared filesystem ($HOME, $RCAC_SCRATCH), so your data is accessible everywhere.

There are three ways to connect. All three land you on the same login nodes with access to the same filesystems and software.

| Method | Best for | Requires installation? |
|---|---|---|
| Open OnDemand (OOD) | Browser-based access, file browsing, interactive apps | No |
| SSH | Command-line power users, scripting, automation | No (built into macOS/Linux); PuTTY or WSL on Windows |
| ThinLinc | Full Linux desktop, GUI tools (IGV, RStudio, CellProfiler) | Yes (ThinLinc client) |

Pick the method that fits your workflow and follow the instructions below:

Recommended starting point. Open OnDemand provides a web portal with a file browser, terminal, job submission forms, and interactive apps like JupyterLab and RStudio. No software to install.

  1. Navigate to the OOD portal

    Go to https://gateway.<cluster>.rcac.purdue.edu and log in with your Purdue (BoilerKey) credentials.

    | Cluster | OOD URL |
    |---|---|
    | Gautschi | gateway.gautschi.rcac.purdue.edu |
    | Negishi | gateway.negishi.rcac.purdue.edu |
    | Bell | gateway.bell.rcac.purdue.edu |
    | Gilbreth | gateway.gilbreth.rcac.purdue.edu |
  2. Tour the dashboard

    After login you will see the OOD dashboard with these key sections:

    • Files: Browse, upload, download, and edit files on the cluster
    • Jobs: View active jobs or use the Job Composer to build submission scripts
    • Clusters: Open a shell terminal (equivalent to SSH) directly in the browser
    • Interactive Apps: Launch JupyterLab, RStudio Server, and other GUI applications
  3. Open a terminal

    Click Clusters in the top menu and select your cluster's shell access entry (e.g., Gautschi Shell Access). A terminal opens in your browser. You are now on a login node.

RCAC deploys bioinformatics tools as BioContainers, pre-built Apptainer containers accessed through the Lmod module system. You do not need to install most tools yourself.

The critical first step: purge before loading

Always start with module --force purge before loading any modules. This removes all previously loaded modules, including sticky system modules that can cause conflicts with containerized tools.

module --force purge
module load biocontainers

The biocontainers module unlocks all bioinformatics software. You must load it before any bioinformatics tool module becomes visible.

# Search for a specific tool
module spider samtools
# Get loading instructions for a specific version
module spider samtools/1.21
# List all available biocontainer modules
module --force purge
module load biocontainers
module avail

module spider searches all modules, including those not yet visible. It shows available versions and prerequisites.

module --force purge
module load biocontainers samtools/1.21
samtools --version

After loading, run the tool as usual. Behind the scenes, RCAC creates shell function wrappers that transparently route your command through the container. You do not need to interact with Apptainer/Singularity directly.
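You can see one of these wrappers for yourself with the shell's `type` builtin. A quick check (assuming samtools/1.21 is loaded as above; the exact function body varies by tool):

```shell
# "type" reports how the shell resolves a command name.
# For a biocontainer tool it prints a shell function that
# invokes the container, not a path to a native binary.
type samtools

# A plain system command resolves to a file path instead:
type -P tar
```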

| Command | Purpose |
|---|---|
| module --force purge | Clean your environment (always do this first) |
| module load biocontainers | Enable access to bioinformatics tools |
| module spider <tool> | Search for a tool and its available versions |
| module spider <tool>/<version> | Show loading instructions for a specific version |
| module avail | List all currently loadable modules |
| module list | Show what is currently loaded |
| module load biocontainers <tool>/<version> | Load a specific tool |
| module show <tool>/<version> | See what environment variables a module sets |

When you load a biocontainer module, RCAC creates a shell function that wraps the tool command in an apptainer run call. For example, loading bwa creates a function so that typing bwa actually runs:

apptainer run /apps/biocontainers/images/<container>.sif bwa "$@"

For most use cases this is invisible. If you need more detail, see Understanding the wrapper in the Running Bioinformatics guide.

If a tool is not available via module spider after loading biocontainers, you can request it:

  1. Email rcac-help@purdue.edu with the subject line including “genomics”
  2. Include the tool name, version, and a link to the tool (e.g., its Bioconda or BioContainers page)
  3. RCAC will build and deploy the container, typically within a few business days

While waiting, you can install the tool yourself using Conda or by pulling a custom container. See How Do I Find and Run Software X? for the full decision tree.
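The Conda route is the quickest stopgap. A minimal sketch, assuming your cluster provides an anaconda module (the module name and the example tool `seqkit` are illustrative, not specific RCAC guarantees):

```shell
module --force purge
module load anaconda        # exact module name may differ; try "module spider conda"

# Create an environment and install the tool from Bioconda
conda create -y -n mytools -c bioconda -c conda-forge seqkit
conda activate mytools      # "source activate mytools" on some setups
seqkit version
```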

SLURM (Simple Linux Utility for Resource Management) is the job scheduler on all RCAC clusters. It manages the queue so that all users get fair access to compute resources.

The mental model: you describe what resources your job needs (CPUs, memory, time), submit it to the queue, and SLURM runs it on a compute node when resources are available.

Nodes, cores, and memory: what are you requesting?

Before writing your first job script, it helps to understand what a compute node actually contains. Each SLURM directive maps to a physical component inside the node.

Anatomy of a compute node: processors contain multiple cores, and the node provides shared memory, local storage, and a network interface to the cluster filesystem.
  • Node: A single physical server in the cluster. Your job runs on the resources within this machine. Most bioinformatics jobs need only one node (--nodes=1).
  • Processors and cores: A core is the basic processing unit. Multiple cores are grouped into a processor (CPU). When a tool asks for “threads,” you are requesting cores (--cpus-per-task).
  • Memory (RAM): The node’s short-term working space, shared by all cores. Request what your tool needs with --mem (e.g., --mem=32G).
  • Local storage: Fast, temporary disk on the node ($TMPDIR) for scratch files during a job. It is deleted when the job ends.
  • Network: Connects the node to the shared filesystems ($HOME, $RCAC_SCRATCH) where your data lives.

Interactive jobs give you a shell on a compute node for testing, debugging, and exploratory work. Use them when you need to try commands before writing a batch script.

sinteractive -A <account-name> -n 4 -N 1 -t 1:00:00

| Flag | Meaning |
|---|---|
| -A <account-name> | Your allocation/account (check with slist) |
| -n 4 | Number of CPU cores |
| -N 1 | Number of nodes (almost always 1) |
| -t 1:00:00 | Wall time (hours:minutes:seconds) |

Once the session starts, you are on a compute node and can load modules, run tools, and test your workflow. Type exit when done to release the resources.
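Put together, a first interactive session might look like this (the allocation name and input file are placeholders):

```shell
# Ask for 4 cores on one node for one hour
sinteractive -A myallocation -n 4 -N 1 -t 1:00:00

# The prompt that returns is on a compute node; set up and test there
module --force purge
module load biocontainers samtools/1.21
samtools flagstat sample.bam    # placeholder input file

# Hand the node back when you are done
exit
```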

Interactive vs. batch: two ways to run jobs

Interactive jobs give you a live shell on a compute node via sinteractive, while batch jobs submit tasks via sbatch and return results when complete.

SLURM offers two ways to run work on compute nodes. Interactive (sinteractive) gives you a live terminal session for testing and exploration. Batch (sbatch) submits a script that runs unattended, and you collect results when it finishes. Use interactive mode to develop your workflow, then switch to batch for production runs.

For real workloads, write a batch script and submit it with sbatch. The job runs unattended on a compute node; you collect results when it finishes.

A SLURM batch script has three parts:

  1. Shebang: #!/bin/bash
  2. #SBATCH directives: resource requests (parsed by SLURM, not executed by bash)
  3. Your commands: module loads, tool invocations, file operations
slurm_cpu.sh
#!/bin/bash
#SBATCH --job-name=my_analysis
#SBATCH --account=<account-name>
#SBATCH --partition=<partition-name>
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --time=04:00:00
#SBATCH --mem=32G
#SBATCH --output=%x_%j.out
#SBATCH --error=%x_%j.err
module --force purge
module load biocontainers fastqc/0.12.1
fastqc --outdir results/ --threads ${SLURM_CPUS_PER_TASK} *.fastq.gz

Submit and monitor:

# Submit the job
sbatch slurm_cpu.sh
# Check your jobs
squeue -u ${USER}
# Cancel a job
scancel <jobid>
# Check resource usage after completion
sacct -j <jobid> --format=JobID,JobName,MaxRSS,Elapsed,State,ExitCode
# Detailed job report (walltime, memory, CPU efficiency)
jobinfo <jobid>

| #SBATCH directive | Description | Typical value |
|---|---|---|
| --account | Allocation to charge | Check with slist |
| --partition | Queue/partition | Cluster-specific (e.g., cpu, gpu) |
| --nodes | Number of nodes | 1 (almost always for bioinformatics) |
| --ntasks | Number of processes | 1 for single tools |
| --cpus-per-task | Threads per process | Match tool's thread flag (4 to 32) |
| --time | Wall clock limit | Start generous, tighten after sacct |
| --mem | Total memory | Check tool docs; start with 16 to 32G |
| --gpus-per-node | GPUs per node (GPU jobs only) | 1 for most tools |
| --output | stdout log file | %x_%j.out (job name + job ID) |
| --error | stderr log file | %x_%j.err |
| Command | Purpose |
|---|---|
| slist | Show your accounts, available partitions, and resource limits |
| sfeatures | Show available hardware features (CPU types, GPUs, memory per node) |
| squeue -u ${USER} | Check status of your jobs |
| scontrol show job <jobid> | Detailed info about a specific job |
| sacct -j <jobid> | Job history, resource usage, exit status |
| jobinfo <jobid> | Friendly summary: walltime, memory, CPU efficiency, disk I/O |
| scancel <jobid> | Cancel a running or queued job |
| sinteractive | Launch an interactive session |

For the full SLURM guide including array jobs, Conda in SLURM, debugging failed jobs, and common pitfalls, see Submitting SLURM Jobs in the Running Bioinformatics guide.

RCAC provides multiple storage tiers. Understanding when to use each one prevents quota issues and data loss.

| Storage | Path | Capacity | Persistence | Best for |
|---|---|---|---|---|
| Home | $HOME | ~25 GB | Permanent, nightly snapshots | Scripts, configs, small critical files |
| Scratch | $RCAC_SCRATCH | Very large (100+ TB shared) | Purged after 60 days of inactivity | Active analysis, intermediate files, job outputs |
| Depot | /depot/<group>/ | PI-purchased (1 TB increments) | Permanent, backed up, no purge | Shared lab data, raw sequences, final results |
| Node-local | $TMPDIR | Varies by node | Deleted when job ends | Fast temporary files within a single job |
  • Raw sequencing data: Depot (permanent, backed up). Symlink into your Scratch project directory.
  • Active analysis outputs: Scratch (fast, large capacity). Move final results to Depot when done.
  • Scripts, configs, READMEs: Home or Depot. Small and irreplaceable.
  • Temporary per-job files: $TMPDIR on the compute node. Fastest I/O, automatically cleaned.
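
The $TMPDIR pattern deserves a concrete sketch: inside a batch job, stage inputs to node-local disk, compute there, and copy results back before the job ends (all paths and filenames below are placeholders):

```shell
# Stage input onto the node's fast local disk
cp "$RCAC_SCRATCH/project/data/sample.fastq.gz" "$TMPDIR/"
cd "$TMPDIR"

# ... run your I/O-heavy steps here against the local copy ...

# Copy anything you want to keep back to shared storage;
# $TMPDIR is deleted automatically when the job exits.
cp output.bam "$RCAC_SCRATCH/project/results/"
```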

These commands help you check storage consumption and available compute resources. Run them on a login node.

| Command | What it shows | When to use |
|---|---|---|
| myquota | Disk usage and limits for Home, Scratch, and Depot | Before starting a large analysis; regularly to avoid quota surprises |
| userinfo ${USER} | Your accounts, quotas, group memberships, and active sessions in one view | Quick overview of your entire cluster profile |
| slist | Your accounts, available partitions, and resource limits | To find your --account name for #SBATCH directives |
| sfeatures | Node hardware: CPU types, core counts, memory, GPUs | To right-size --cpus-per-task, --mem, and --gres requests |
| showpartitions | Partition time limits, node counts, and access policies | To choose the right --partition for your job |
| purgelist | Files on Scratch scheduled for purge | To check if any of your files are about to be deleted |

For a detailed guide on project directory structure, naming conventions, and archiving, see Project Organization for Bioinformatics on HPC.

Moving data to and from the cluster is one of the first things you will need to do. Use the right tool for the job size.

| Method | Best for | Key advantage |
|---|---|---|
| Globus | Large datasets (GBs to TBs) | Auto-resume, integrity verification, fire-and-forget |
| rsync | Directory sync, incremental backups | Only transfers changed files |
| scp | Quick single-file transfers | Simple, no setup needed |
| OOD File Browser | Small files, drag-and-drop | No command line needed |

Globus is a managed file transfer service designed for research data. It handles large transfers reliably with automatic retry, checksum verification, and the ability to close your laptop while the transfer runs.

  1. Access the Globus transfer portal

    Go to transfer.rcac.purdue.edu and sign in with your Purdue (BoilerKey) credentials.

  2. Find the cluster endpoint

    In the Collection search field, type the cluster name (e.g., “Gautschi”). The RCAC endpoint will appear.

    Select it and enter the path you want to access. For example:

    • Scratch: /scratch/gautschi/<username>/
    • Depot: /depot/<group>/
  3. Set up the other side

    In the second panel, search for your source or destination:

    • Another RCAC cluster: Search by cluster name (e.g., “Negishi”)
    • A collaborator’s institution: Search for their Globus endpoint
    • Your local computer: Install Globus Connect Personal (GCP), create a personal endpoint, and it will appear when you search
  4. Start the transfer

    Select files or directories on each side and click Start. Globus handles the rest. You will receive an email notification when the transfer completes.

rsync is a command-line tool that efficiently synchronizes files and directories. It only transfers files that have changed, making it ideal for keeping directories in sync and resuming interrupted transfers.

# From your local computer
rsync -avzP /local/path/to/data/ <boilerid>@gautschi.rcac.purdue.edu:/scratch/gautschi/<username>/project/data/

| Flag | Meaning |
|---|---|
| -a | Archive mode (preserves permissions, timestamps, symlinks) |
| -v | Verbose output |
| -z | Compress data during transfer |
| -P | Show progress and enable resume of partial transfers |
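
Before a large sync, `--dry-run` previews what rsync would copy without moving anything:

```shell
# Preview only; drop --dry-run to transfer for real
rsync -avzP --dry-run /local/path/to/data/ \
    <boilerid>@gautschi.rcac.purdue.edu:/scratch/gautschi/<username>/project/data/
```

Note the trailing slash on the source: `data/` syncs the directory's contents, while `data` (no slash) creates a `data/` subdirectory at the destination.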

For copying a single file or a small number of files:

scp myfile.txt <boilerid>@gautschi.rcac.purdue.edu:/scratch/gautschi/<username>/
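
To copy a whole directory instead of a single file, add -r (recursive):

```shell
scp -r results/ <boilerid>@gautschi.rcac.purdue.edu:/scratch/gautschi/<username>/
```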

For small files, the OOD file browser provides drag-and-drop upload and download directly in the browser. Navigate to Files in the OOD dashboard, browse to your target directory, and use the Upload/Download buttons.

Now that you can connect, run software, submit jobs, and transfer data, explore these topics to level up:

RCAC HPC Exchange

Knowledgebase, tips, and training for common HPC tasks on RCAC clusters. Browse the exchange →

Running Bioinformatics on RCAC

Deep dive into biocontainers, Conda environments, custom containers, array jobs, and debugging failed jobs. Read the guide →

Productivity Toolkit

SSH keys, SSH config shortcuts, and shell customization for faster daily workflows. Read the guide →