
OrangeGrid Computing
Almost any computational problem can be solved on a single computer. However, when you encounter a problem that is too large (or takes too long) to solve on a single machine, you will need to use a computing cluster to complete your task.
There are generally two types of jobs that you will run on computing clusters:
HTC: High-throughput computing
Running large numbers of independent applications in parallel on compute clusters is called HTC.
Embarrassingly parallel applications require large-scale compute resources to complete, but very little (or no) interprocess communication. Examples of embarrassingly parallel algorithms include performing the same independent calculation at every point of a large parameter space, or event simulation in particle physics. These types of applications are well suited to almost any large-scale computing cluster.
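As a rough sketch of what one of these jobs might look like, the following Python script computes a single point of a parameter sweep, selected by a job index passed on the command line. The parameter ranges, file names, and the toy calculation are placeholders for illustration only, not anything specific to OrangeGrid. The cluster's batch scheduler would simply queue one copy of the script per index, and because every copy writes its own output file, no job ever needs to communicate with another.

# sweep_point.py - a minimal sketch of one "embarrassingly parallel" task.
# Each copy of this script handles a single point in a parameter grid,
# chosen by the job index given on the command line, so the jobs never
# need to talk to each other. The parameter ranges, file names, and the
# toy calculation below are placeholders, not an OrangeGrid convention.
import json
import math
import sys


def run_model(alpha: float, beta: float) -> float:
    """Stand-in for one independent calculation at a single parameter point."""
    total = 0.0
    for i in range(100_000):
        x = i * 1e-4
        total += math.exp(-alpha * x) * math.cos(beta * x) * 1e-4
    return total


if __name__ == "__main__":
    index = int(sys.argv[1])                      # which grid point this job computes
    alphas = [0.1 * (k + 1) for k in range(20)]   # 20 x 20 = 400 independent jobs
    betas = [0.25 * (k + 1) for k in range(20)]
    alpha = alphas[index // len(betas)]
    beta = betas[index % len(betas)]

    with open(f"result_{index:04d}.json", "w") as fh:
        json.dump({"alpha": alpha, "beta": beta, "value": run_model(alpha, beta)}, fh)

Submitting indices 0 through 399 as separate jobs then covers the whole 20 x 20 grid, and the per-job JSON files can be collected and combined once all of the jobs have finished.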
HPC: High Performance Computing
Running large parallel applications on a supercomputer is called HPC.
Truly parallel applications require a substantial amount of interprocess communication between the compute cores during execution. Examples of such applications include fluid dynamics and numerical relativity, where a set of non-linear differential equations must be solved over a large physical domain. The domain can be divided up between processors, but boundary conditions must be communicated between neighbouring domains to ensure that the solution is physically realistic. Writing parallel applications typically requires the use of a dedicated programming language or an interprocess communication library such as MPI [1]. Running parallel applications efficiently requires dedicated computing resources with shared memory or very fast processor interconnects.
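The sketch below illustrates this boundary exchange using mpi4py, a Python binding to MPI, which is assumed to be available; the domain size, number of steps, and the simple diffusion update are illustrative choices, not a recipe for OrangeGrid. Each MPI rank owns one slice of a 1-D domain, and after every update it must trade its edge values with its neighbours so that the global solution stays consistent.

# halo_exchange.py - a minimal sketch of the boundary-condition exchange
# described above, using mpi4py (assumed available). Each MPI rank owns one
# slice of a 1-D domain and applies a simple diffusion update; after every
# step the edge values are traded with the neighbouring ranks. Domain size,
# step count, and initial data are illustrative only.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

n_local = 100                  # interior points owned by this rank
u = [0.0] * (n_local + 2)      # +2 ghost cells to hold neighbour data
if rank == 0:
    u[1] = 1.0                 # a simple initial "hot spot" on rank 0

left = rank - 1 if rank > 0 else None
right = rank + 1 if rank < size - 1 else None

for step in range(500):
    # Exchange boundary values with neighbouring ranks - the interprocess
    # communication that distinguishes HPC from HTC workloads.
    if left is not None:
        u[0] = comm.sendrecv(u[1], dest=left, source=left)
    if right is not None:
        u[n_local + 1] = comm.sendrecv(u[n_local], dest=right, source=right)

    # Explicit diffusion update on the interior points.
    new_u = u[:]
    for i in range(1, n_local + 1):
        new_u[i] = u[i] + 0.1 * (u[i - 1] - 2.0 * u[i] + u[i + 1])
    u = new_u

total = comm.reduce(sum(u[1:n_local + 1]), op=MPI.SUM, root=0)
if rank == 0:
    print(f"total heat after 500 steps: {total:.6f}")

Launched with something like mpiexec -n 4 python halo_exchange.py, the ranks advance in lock-step: each time step can only proceed once the neighbouring boundary values have arrived, which is exactly the tight coupling that demands shared memory or very fast interconnects.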