Research Platforms
Welcome to the University of Leeds’ Research Computing Resource Guide! IT Services provides a range of tools and services for both data storage and compute, with the Research Computing team acting as the primary point of contact for research users.
Finding the Right Computing Resources for Your Research
Whether you’re working on small-scale analysis, developing complex simulations, or running intensive computations, we have a wide range of computing resources tailored to support your research.
This guide will help you navigate through various options, from individual devices to advanced High-Performance Computing (HPC) systems. To find the most appropriate resource, consider these questions:
- Are you looking for something you can use immediately on your personal device?
- Do you need more power without the complexity of HPC systems?
- Are you running large-scale computations that require multi-node parallel processing?
Use our Resource Roadmap below to discover the best solution for your research needs.
Resource Roadmap
- Starting Small: Individual Device
- If you’re just getting started or your computations are lightweight, you can easily begin on your laptop or desktop.
- Ideal for initial code development, small data analysis, and non-intensive simulations.
- Need More Power? Try Shared Devices
- If your work is starting to slow down your individual device, consider using a cluster machine or Linux server.
- These shared systems give you extra memory, CPU power, or GPU access without the complexity of an HPC queue. Great for testing and debugging; a quick way to check what your current machine offers is sketched just after this roadmap.
- Cloud Solutions for Flexibility and Collaboration
- For collaborative projects, cloud resources offer flexible environments.
- These are perfect for sharing work, developing in teams, and quickly testing your code without investing in hardware.
- Note, however, that most of these cloud solutions are not supported by the University.
- Need to Scale Up? Local HPC is the Next Step
- When your research requires heavy computation, high-end GPUs, or parallel processing, it’s time to consider our local (Tier 3) HPC resources.
- Get direct support from the Research Computing team and access systems tailored to your project’s needs.
- For the Largest Projects: External HPC
- For large, multi-node, or specialized GPU jobs, Tier-2 (Regional) and Tier-1 (National) HPC facilities offer unmatched power and scale.
- These external HPC systems are ideal for massive datasets, complex simulations, or GPU-heavy research.
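If you are not sure whether your individual device is the limiting factor, a quick look at the cores and memory available to you can help you decide where on the roadmap to start. The snippet below is a minimal sketch using only the Python standard library; the memory query assumes a Linux machine, since os.sysconf is not available on Windows.

```python
import os

# Number of CPU cores visible to this machine.
cores = os.cpu_count()

# Total physical memory in GiB (Linux only: relies on os.sysconf).
page_size = os.sysconf("SC_PAGE_SIZE")    # bytes per page
num_pages = os.sysconf("SC_PHYS_PAGES")   # total pages of physical memory
total_gib = page_size * num_pages / 1024**3

print(f"CPU cores: {cores}")
print(f"Total memory: {total_gib:.1f} GiB")
```

If your jobs routinely need more cores or memory than this reports, a cluster machine, Linux server, or HPC system is the natural next step.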
Detailed Resource Overview
Following the roadmap, the sections below cover each option in more detail: what it offers, the benefits it brings, and how it can fit your project’s needs.
Individual Devices
| Platform | Application | Information |
|---|---|---|
| Standard Laptops | | Request Form |
| Standard Windows Desktops | | Request Form |
| Non-Standard Computer | | Request Form |
| Windows Virtual Desktop | | Overview |
Shared Devices
| Platform | Application | Information |
|---|---|---|
| Clusters | | |
| Linux Workstations/Servers | | Environment System |
Cloud Resources
| Platform | Application | Features | Information |
|---|---|---|---|
| Google Colab | Jupyter Notebook service useful for teaching, training and development | | Google Colab Homepage |
| MyBinder | Easier to install dependencies than Colab | | MyBinder Homepage |
| GitHub Codespaces | Useful for Python and non-Python (non-GPU) projects, and for teaching | | |
| GitHub Actions | A model for well-defined workflows (CI/CD lifecycle) | | |
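As a concrete illustration of the dependency point above: on Google Colab the usual pattern is to install extra packages at the top of the notebook every time the runtime starts, whereas MyBinder builds them into the environment from files in your repository. The cell below is a minimal sketch of the Colab pattern in plain Python; the package name requests is only an example, not something this guide prescribes.

```python
import subprocess
import sys

def pip_install(package: str) -> None:
    """Install a package into the current notebook runtime."""
    subprocess.check_call([sys.executable, "-m", "pip", "install", package])

# Example: install a dependency the runtime does not ship with.
pip_install("requests")

import requests  # now importable in later cells
print(requests.__version__)
```

On MyBinder the same dependency would instead be listed in a requirements.txt or environment.yml in the repository, which is why it is described above as easier for dependencies.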
Local HPC
| Platform | Application | Features | Information |
|---|---|---|---|
| ARC3/4† | Higher RAM/core/storage than can be achieved in previous options | | |
| Aire‡ | 11 times more FLOPS than ARC3 and ARC4 combined | | - |
† ARC3 and ARC4 are being decommissioned and the service will be terminated in late 2024.
‡ Aire will replace ARC3 and ARC4 and will be available in late 2024.
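In practice, the main gain from local HPC over a desktop is simply more cores and memory available to a single job. As a minimal, scheduler-agnostic sketch (it deliberately ignores the batch system you would normally submit through), the example below spreads an embarrassingly parallel workload across whatever cores the node provides, using only the Python standard library:

```python
from multiprocessing import Pool
import os
import random

def simulate(seed: int) -> float:
    """Stand-in for one independent unit of work (e.g. one simulation run)."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(1_000_000))

if __name__ == "__main__":
    n_workers = os.cpu_count()            # use every core on the node
    with Pool(processes=n_workers) as pool:
        results = pool.map(simulate, range(100))  # 100 independent runs
    print(f"Ran {len(results)} tasks on {n_workers} cores")
```

The same pattern scales naturally with core count, which is where the larger nodes of a local HPC system pay off.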
External HPC
| Platform | Application | Features | Information |
|---|---|---|---|
| Bede | Running multiple GPU jobs, especially across multiple nodes | | Bede Overview |
| JADE2† | Running multiple GPU jobs - simpler than Bede due to the x86_64 architecture | | JADE2 Overview |
| ARCHER2 | Much larger system for parallel CPU jobs | | ARCHER2 Overview |
| Jasmin | A much more comprehensive service for NERC users | | Jasmin Overview |
† The JADE2 account registration period closed on 01/09/2024. As of this date, we are no longer able to approve project requests. 1st November 2024: Batch and interactive access to all compute resources will be withdrawn. 6th January 2025: All access to the service will be withdrawn and physical decommissioning of the system will commence.
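Systems such as ARCHER2 and Bede are designed for work that spans many nodes, which usually means message passing (MPI) rather than single-node threading. The fragment below is a minimal sketch using mpi4py; this is an illustrative assumption, as your project may equally use C or Fortran MPI, or a framework-specific launcher. It would normally be started through the site’s batch system (for example with srun or mpirun) rather than run directly.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID across all nodes
size = comm.Get_size()   # total number of MPI processes

# Each rank works on its own slice of the problem...
local_result = sum(range(rank, 10_000_000, size))

# ...and the partial results are combined on rank 0.
total = comm.reduce(local_result, op=MPI.SUM, root=0)

if rank == 0:
    print(f"Combined result from {size} processes: {total}")
```

Each MPI rank may sit on a different node, so the same script scales from a single workstation to the full machine without code changes.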