SUrge: GPU Processing Power

What is a Graphics Processing Unit (GPU) cluster, and why was it developed?

SUrge was developed to assist researchers with computational tasks that can take substantial advantage of the speed-up GPUs provide. We developed a cloud methodology to make allocating GPUs to a number of different workloads a simple process.

Access to GPUs will enhance research opportunities for Syracuse students. Undergraduates and graduate students will gain practical experience with cutting-edge computing architectures.

NSF award enhances Syracuse University computational capabilities through GPU cluster

Frequently Asked Questions:

What specific computing needs does it serve?

SUrge is designed to provide GPU access to researchers to significantly increase computational capability, allow for development using CUDA and OpenCL, and build familiarity with GPU computing. GPUs are used in a wide variety of academic areas; they provide a significant speed increase over traditional CPUs for certain types of mathematical operations. SUrge is also available for rendering and photogrammetry projects of all scales.

What hardware and system configurations are available?

Over 300 GPUs are available through SUrge in diverse configurations.

GPU models include: 

  • NVIDIA RTX A6000
  • NVIDIA RTX 6000
  • NVIDIA RTX 5000
  • NVIDIA GeForce GTX 1080 Ti
  • NVIDIA GeForce GTX 750 Ti

System Configurations:

  • Both Linux and Windows operating systems are supported
  • Nodes can range from 1 to 16 GPUs and from 1 to 32 cores
  • The appropriate CPU-to-GPU ratio depends on your specific application
  • GPUs have from 2 to 48 GB of memory, depending on the model
  • Both CUDA and OpenCL are supported as GPU programming frameworks
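
To illustrate the kind of development SUrge supports, here is a minimal CUDA sketch: an element-wise vector addition that runs one GPU thread per element. The file name and build command (nvcc vector_add.cu -o vector_add) are illustrative assumptions, not a prescribed SUrge workflow.

    // vector_add.cu -- minimal CUDA example (illustrative, not SUrge-specific).
    #include <cstdio>
    #include <cuda_runtime.h>

    // Each thread computes one element of c = a + b.
    __global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;                  // one million elements
        const size_t bytes = n * sizeof(float);

        // Unified (managed) memory is accessible from both CPU and GPU.
        float *a, *b, *c;
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        const int threads = 256;                          // threads per block
        const int blocks = (n + threads - 1) / threads;   // blocks to cover n
        vectorAdd<<<blocks, threads>>>(a, b, c, n);
        cudaDeviceSynchronize();                // wait for the kernel to finish

        printf("c[0] = %f\n", c[0]);            // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }
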
Who can use this system?

SUrge is open and free of charge to all researchers affiliated with Syracuse University.

Who can provide assistance in using this system?

For help, contact our research computing group by email at researchcomputing@syr.edu.

Does this system replace or complement OrangeGrid?

Both! SUrge is available both as standalone virtual machines and as part of OrangeGrid.


SUrge Hardware and Rendering Images

Crush: Virtualized Research Cloud

Crush is a virtualized research cloud. It is designed to be allocated for compute-intensive work and to be used in tandem with the AVHE, where the data and work-scheduling infrastructure are maintained. It provides both high-performance computing (HPC) and high-throughput computing (HTC) environments.

A primary use case for Crush is provisioning a “cluster within a cluster,” giving researchers access to dedicated, customized compute nodes for HPC and HTC work.

Nodes within Crush provide a diverse set of configurations, including high-density processing power, high-speed disk I/O, and high-speed networking.

Currently, Crush consists of tens of thousands of cores and hundreds of terabytes of memory.

Crush is a dynamic resource, simultaneously supporting a wide variety of workloads and balancing allocation based on use and priority. A snapshot of recent workloads includes support for Syracuse researchers in many areas across campus:

  • Physics – Arts and Sciences  
  • Chemistry – Arts and Sciences  
  • Biology – Arts and Sciences
  • Math – Arts and Sciences
  • Engineering and Computer Science
  • Maxwell School of Citizenship and Public Affairs 
  • Whitman School of Management
  • Architecture
  • Transmedia – Visual and Performing Arts

Nationally, Crush contributes computational time to the Open Science Grid.

Research Software: Your Toolbox for Getting the Job Done

Software Listing 

We maintain and can install a wide variety of software in our computing environments to meet your research needs. Please note that researchers are expected to already be proficient in the use of the software they request.

Below is a sample of applications and tools used within the research environments. 

Software          Application
BCFtools          Bioinformatics
BLAST+            Bioinformatics
Blender           Graphical Rendering
Bowtie 2          Bioinformatics
Bowtie            Bioinformatics
BWA               Bioinformatics
Crystal14         Quantum chemistry, molecular mechanics
Cufflinks         Bioinformatics
FastQC            Bioinformatics
FASTX-Toolkit     Bioinformatics
GAMESS-US         Quantum chemistry, molecular mechanics
Gaussian          Quantum chemistry, molecular mechanics
HTSlib            Bioinformatics
NWChem            Chemistry
OrthoDB           Bioinformatics
PhotoScan         Photogrammetry
Proteinortho      Bioinformatics
Python            Scripting / Programming
Reality Capture   Photogrammetry
Samtools          Bioinformatics
TensorFlow        Machine Learning
Trimmomatic       Bioinformatics
TrinityRNASeq     Bioinformatics

Syracuse University’s Data Center

About the Data Center

In the fall of 2010, Syracuse University completed construction of a new state-of-the-art data center, with 6,000 square feet of floor space and 450 kilowatts of power and cooling. It is now the main center for production computing resources, marrying research and administrative computing interests.

The internal data center network is a mix of 10, 40, and 100 Gigabit connectivity, with redundant connections to all hosts and network components whenever possible. The Data Center supports a robust virtual private cloud into which 98% of campus servers have been consolidated, including research and operational servers that were previously housed in poor environmental conditions at distributed locations across campus.

Syracuse University Campus Cyberinfrastructure Plan

The Data Center is connected to the campus network and the original data center via two geographically diverse bundles of 144-strand fiber. This fiber can be, and has been, used to provide direct connectivity between a researcher’s campus building and the hosted area in the Data Center.

Quick Facts:

Over 120,000 linear feet (more than 22 miles) of wire were used in the construction of the electrical systems. Almost one mile of piping is used in the heating and cooling systems.

Because the Data Center was constructed in accordance with LEED building principles, more than 99 percent of all construction waste generated was recycled. That’s over 1,200 tons (about 60 truckloads) of waste that did not go to a standard landfill.

More than 25,000 linear feet of electrical conduit, equivalent to about 83 football fields or 4.5 miles, have been used on the project.


Exterior of Data Center

Data Center Hosting for Researchers

Syracuse University Data Center

Roughly half of the space in the Data Center is hosted space, designed to provide a secure physical environment that is flexible enough to allow researchers and graduate assistants access to their equipment. The hosted area is caged and has a separate entrance that controls access through a combination of ID cards, biometric fingerprint scans, and PINs. Over two-thirds of the computing load in the Data Center serves research computing needs. Researchers on campus can rely on space in the Data Center that is very low cost and provides ideal environmental conditions for their equipment, with power redundancy from multiple layers of protection: the traditional electric grid, a dual UPS configuration, and generator backup.

Disaster Recovery (DR)

Machinery Hall (MH), the older data center used prior to the construction of the Data Center, has been repurposed into a DR site. Backup computing and storage capacity reserved for use in case of a disaster is available at the MH location. Because this capacity would otherwise typically sit idle, it is leveraged as a private virtual cloud for research computing. Researchers are able to use this capacity free of charge for their academic work.

Academic Virtual Hosting Environment (AVHE)

The Academic Virtual Hosting Environment (AVHE) provides a private computing cloud to the Syracuse University research community, lowering the barrier to entry for small to medium-sized research efforts. The AVHE platform provides two petabytes of storage, 1,000 cores, and 25 TB of memory for research use.

The AVHE is used for computational work or data storage by every college and school on campus and is an integral part of the University’s research computing environment.

It is very flexible, servicing many types of workloads, including statistical analysis, 3D rendering, and visualization, and supporting researchers with anything from single nodes to small clusters.

The AVHE uses virtualization to provide the flexibility and hardware sharing that allow multiple researchers to operate on a shared underlying server and storage infrastructure. This lower barrier to entry and added flexibility provide an environment that supports both traditional and non-traditional computational research. The AVHE supports Windows and Linux operating systems with a dynamic resource-allocation model for easy scaling and configuration. Beyond supporting individual workloads, another use case for the AVHE is building small to medium-sized virtualized research computing clusters, reducing the need for researchers to build and maintain small physical clusters.

The AVHE leverages virtualization to provide high availability, automatically migrating workloads to alternate resources in the case of physical server failure. Backup services within the AVHE are provided to researchers as a standard service.