Lab Infrastructure

Resources

An overview of the lab's core infrastructure, including computing systems and experimental platforms that support computational oncology and translational cancer research.

Compute

Computing servers

An integrated HPC environment for large-scale data analysis, multimodal model training, shared storage, and coordinated job execution across the lab.

CPU capacity 716+ cores

GPU capacity 62 GPUs

Memory 9+ TB RAM

Storage 727.6 TB

Accelerated HPC

GPU computing nodes

Active

A multi-generation GPU cluster for deep learning, pathology foundation models, spatial biology pipelines, and multimodal AI development.

62 total GPUs · ~5.4 TB RAM
H100 SXM × 8 · RTX A6000 × 8 · RTX A5000 × 38 · RTX 2080 Ti × 8
The GPU tier combines shared compute nodes and dedicated servers, making it suitable for both large scheduled jobs and project-specific development.
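The per-model counts above can be sanity-checked against the quoted cluster total; a minimal sketch, using the figures from this page:

```python
# Sanity-check the GPU tier totals quoted above.
# Per-model counts are taken from the page; they should sum to the
# stated "62 total GPUs".
gpu_counts = {
    "H100 SXM": 8,
    "RTX A6000": 8,
    "RTX A5000": 38,
    "RTX 2080 Ti": 8,
}

total_gpus = sum(gpu_counts.values())
print(total_gpus)  # → 62
```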

General HPC

CPU computing nodes

Active

Shared CPU nodes support preprocessing, statistical analysis, classical pipelines, simulation-heavy workflows, and cluster-wide orchestration.

716+ total CPU cores · ~3.7 TB RAM · Login node
This layer provides the backbone for everyday analysis, queue management, environment setup, and long-running non-GPU workloads.

Data Infrastructure

Storage and networking

Active

The cluster is supported by large-capacity HDD and NVMe SSD storage, together with high-speed switching for data movement across the HPC environment.

574 TB HDD · 153.6 TB NVMe SSD · 200G InfiniBand · 10G switch fabric
Storage and interconnect resources are organized to support large imaging datasets, shared archives, and high-throughput model training workflows.
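The storage tiers above account for the overall capacity quoted at the top of the page; a quick arithmetic check, using only figures stated here:

```python
# Verify that the HDD and NVMe SSD tiers sum to the total storage
# capacity quoted in the overview (727.6 TB). Figures are from this page.
hdd_tb = 574.0
nvme_ssd_tb = 153.6

total_tb = hdd_tb + nvme_ssd_tb
print(total_tb)  # → 727.6
```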

Operations

NEXGEM HPC platform

In operation

The lab also maintains its own HPC software environment for monitoring, job submission, scheduling, and resource allocation across the integrated cluster.

Job submission · Monitoring · Scheduling · Resource allocation
This software layer helps unify compute access across GPU, CPU, and storage resources so the infrastructure operates as one coordinated HPC system.
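NEXGEM's own interface is not documented on this page. As an illustration only, job submission on a cluster of this shape typically goes through a batch scheduler; the sketch below assumes a Slurm-style scheduler, and the partition name and script contents are hypothetical placeholders, not actual NEXGEM identifiers.

```shell
#!/bin/bash
# Hypothetical Slurm batch script; partition name and loaded modules
# are placeholders, not actual NEXGEM configuration.
#SBATCH --job-name=example-train
#SBATCH --partition=gpu          # assumed GPU partition name
#SBATCH --gres=gpu:1             # request one GPU
#SBATCH --cpus-per-task=8
#SBATCH --mem=64G
#SBATCH --time=24:00:00

python train.py
```

A script like this would be submitted with `sbatch script.sh` and tracked with `squeue`, matching the job-submission and monitoring roles listed above.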

More to come

Resource details will continue to expand

This page is designed to grow into a fuller infrastructure catalog with equipment specifications, access notes, and usage context.
