Cluster Computing Basics

Why Cluster Computing?

Almost any computational problem can be solved on a single computer. However, when you encounter problems that are too large (or take too long) to solve on a single machine, you will need a computing cluster to complete your task.

There are generally two types of jobs that you will run on computing clusters:

High-Throughput Computing (HTC) with OrangeGrid

Running large numbers of independent applications in parallel on compute clusters is called high-throughput computing (HTC).

Embarrassingly parallel applications are algorithms that require large-scale compute resources to complete but very little (or no) inter-process communication. Examples of embarrassingly parallel algorithms include performing the same independent calculation over a large parameter space, or event simulation in particle physics. These types of applications are suitable for almost any large-scale computing cluster.
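To make this concrete, here is a minimal Python sketch of an embarrassingly parallel parameter sweep. The simulate function and parameter values are hypothetical stand-ins, and on an HTC cluster such as OrangeGrid each point would typically be submitted as an independent batch job rather than run as a local process:

```python
# Minimal sketch of an embarrassingly parallel parameter sweep.
# Each task is fully independent: no communication between tasks
# is needed, so the work can be distributed across any number of
# machines without coordination.
from concurrent.futures import ProcessPoolExecutor

def simulate(param: float) -> float:
    # Stand-in for an independent calculation (hypothetical).
    return param ** 2

if __name__ == "__main__":
    params = [i * 0.1 for i in range(1000)]  # the parameter space
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(simulate, params))
    print(f"completed {len(results)} independent tasks")
```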

High-Performance Computing (HPC) with Zest

Running large, tightly coupled parallel applications on a supercomputer is called high-performance computing (HPC).

Truly parallel applications require a substantial amount of inter-process communication between the compute cores during execution. Examples of such applications include fluid dynamics or numerical relativity, where a set of non-linear differential equations must be solved over a large physical domain. The domain can be divided up among processors, but boundary values must be communicated between neighboring subdomains at each step to ensure that the solution is physically realistic. Writing parallel applications typically requires a dedicated framework or library such as MPI (the Message Passing Interface). Efficiently running parallel applications requires dedicated computing resources with shared memory or very fast processor interconnects.
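To illustrate the boundary-exchange pattern described above, here is a minimal sketch in Python using mpi4py (one common MPI binding) and NumPy. The domain size, iteration count, and update rule are made up for this example; real solvers are far more involved:

```python
# Minimal 1-D "halo exchange" sketch (hypothetical example).
# Each rank owns one slice of the domain plus a ghost cell on each
# side; every iteration, boundary values are exchanged with the
# neighboring ranks so the local stencil update stays consistent.
#
# Run with, e.g.:  mpirun -n 4 python halo_exchange.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N = 10                           # interior points per rank (illustrative)
u = np.full(N + 2, float(rank))  # local slice; ghost cells at u[0], u[N+1]

# Neighbor ranks; MPI.PROC_NULL makes the exchange a no-op at the edges.
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(100):
    # Send my last interior value right, receive my left ghost cell.
    comm.Sendrecv(u[N:N+1], dest=right, recvbuf=u[0:1], source=left)
    # Send my first interior value left, receive my right ghost cell.
    comm.Sendrecv(u[1:2], dest=left, recvbuf=u[N+1:N+2], source=right)
    # Simple Jacobi-style averaging update over the interior points.
    u[1:N+1] = 0.5 * (u[0:N] + u[2:N+2])

print(f"rank {rank}: mean interior value {u[1:N+1].mean():.4f}")
```

The key point is that every iteration requires communication between neighboring processes, which is why this class of application needs the fast interconnects of a dedicated HPC resource rather than a loosely coupled HTC pool.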

Want to get started? The research computing team is happy to match the correct cluster resource to your unique computing needs. Contact us today!