Please use the following acknowledgement in your published papers or presentations and include the citations below where appropriate:

  • This research was supported in part through computational resources provided by Syracuse University.
  • For OrangeGrid use specifically, please acknowledge NSF award ACI-1341006 in any publications.
  • If you received support from Syracuse University’s Cyberinfrastructure Engineer (Larne Pekowsky), please acknowledge NSF award ACI-1541396 in any publications.

Syracuse University Facilities, Equipment, and Other Resources

Data Center


In the fall of 2010, Syracuse University completed construction of a new state-of-the-art data center with 6,000 square feet of floor space and 450 kilowatts of power and cooling capacity. It is now the main center for production computing resources, marrying research and administrative computing interests. The internal data center network is a mix of 10, 40, and 100 Gigabit connectivity, with redundant connections to all hosts and network components wherever possible. The data center supports a robust virtual private cloud into which 98% of campus servers have been consolidated.

The data center is connected to the campus network and to the secondary data center via two bundles of geographically diverse, 144-strand fiber paths. These paths can be, and have been, used to provide direct connectivity between a researcher’s campus building and the hosted area of the primary data center.

Data Center Hosting for Researchers

Roughly half of the space in the primary data center is hosted space designed to provide a secure physical environment that is flexible enough to allow researchers and graduate assistants access to their equipment. The hosted area is caged and has a separate entrance controlled by a combination of ID cards, biometric fingerprint readers, and PINs. The majority of the computing load in the data center serves research computing needs. Researchers on campus can rely on space in the data center that is very low cost and provides ideal environmental conditions for computing equipment, with power redundancy from multiple layers of protection including the traditional electric grid, UPS, and generator backup.

OrangeGrid


The OrangeGrid high-throughput computing (HTC) cluster comprises over 70,000 cores. The computers in the grid are optimized to run a large number of parallel jobs, providing high processing capacity over long periods of time. The grid uses a mixture of dedicated nodes (60,000 cores) and scavenged nodes (10,000 cores). HTCondor, developed with support from the National Science Foundation, manages the grid’s workload. Scavenged worker nodes are managed by the HTCondor Virtual Machine Coordinator (CVMC), an application developed by SU’s Information Technology and Services department. These nodes are added to the grid by detecting when a desktop computer is idle, launching CVMC, deploying a custom virtual machine, and connecting it to HTCondor to receive work. Virtualization acts as a barrier that separates the researcher’s jobs and data from the desktop user’s information on the same computer.
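As an illustration only, the sketch below uses the HTCondor Python bindings to queue a batch of independent jobs on an HTCondor pool such as OrangeGrid. The executable name, resource requests, and job count are hypothetical placeholders rather than SU-specific settings.

    import htcondor  # HTCondor Python bindings

    # Describe one job template; $(Process) expands to 0..N-1 for each queued instance.
    submit = htcondor.Submit({
        "executable": "analyze.sh",      # hypothetical worker script
        "arguments": "$(Process)",
        "output": "logs/out.$(Process)",
        "error": "logs/err.$(Process)",
        "log": "logs/cluster.log",
        "request_cpus": "1",
        "request_memory": "2GB",
        "request_disk": "1GB",
    })

    # Queue 100 independent instances with the local scheduler (schedd).
    schedd = htcondor.Schedd()
    result = schedd.submit(submit, count=100)
    print("Submitted cluster", result.cluster())

This style of submission suits high-throughput work because each queued instance runs independently on whichever dedicated or scavenged core becomes available.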

Zest

The Zest high-performance computing (HPC) cluster comprises over 17,000 cores for campus researchers and supports tying together multiple compute nodes for research work that cannot be split into smaller components or fit within a single machine. To facilitate this, Zest compute elements are interconnected with InfiniBand, which passes information between nodes with much lower latency than Ethernet. Zest uses the SLURM scheduler, which allows researchers to scale jobs within the cluster and groups nodes together as needed by the current jobs.
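Tightly coupled jobs of this kind typically use MPI to exchange data between processes spread across nodes. The minimal mpi4py sketch below illustrates the pattern; it assumes the mpi4py package and an MPI launcher (for example, srun under SLURM) are available, and it does not describe Zest’s specific configuration.

    from mpi4py import MPI  # MPI bindings for Python

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's id within the job
    size = comm.Get_size()   # total number of MPI processes

    # Each rank computes a partial result, then all ranks combine them.
    partial = float(rank + 1)
    total = comm.allreduce(partial, op=MPI.SUM)

    if rank == 0:
        print(f"{size} ranks cooperated; combined result = {total}")

Because every rank blocks on the collective sum, jobs like this perform well only when inter-node latency is low, which is the role InfiniBand plays in Zest.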

Academic Virtual Hosting Environment (AVHE)


The AVHE provides a private compute cloud to the Syracuse University research community, lowering the bar to entry for small to medium-sized research efforts. This private cloud uses virtualization to provide flexibility and hardware sharing, allowing multiple researchers to operate on a common underlying server and storage infrastructure. This lower bar to entry and added flexibility provide an environment that supports both traditional and non-traditional computational research. The AVHE is also used to build small to medium-sized clustered research computing environments, reducing the need for researchers to build and maintain physical clusters. Services within the AVHE include medium-scale storage for research data and archives; to date, this holds over 3 Petabytes of research data and output. The AVHE uses virtualization to provide high availability, automatically migrating workloads to alternate resources in the case of physical server failure. Backup services within the AVHE are provided to all researchers.

Crush Computing Cloud


Crush is a medium-scale virtualized research cloud. It is designed to be allocated for compute-intensive work and to be used in tandem with the AVHE, where the data and work-scheduling infrastructure are maintained. A primary use case for Crush is provisioning a “cluster within a cluster,” providing access to dedicated, customized compute nodes for high-performance and high-throughput computing. Crush currently consists of over 25,000 cores and 150 Terabytes of memory.

SUrge


SUrge is a heterogeneous resource pool supporting computationally intensive research accelerated with Graphics Processing Units (GPUs). It is designed to be allocated as a stand-alone resource for smaller-scale work or for use within the OrangeGrid and Zest clusters. SUrge offers more than 250 GPUs, ranging from NVIDIA RTX 5000s to 80 GB A100s.
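As a hedged illustration of how a researcher’s code might use whichever GPU a job is assigned, the sketch below uses PyTorch; the package choice and matrix sizes are assumptions for illustration rather than SU-provided software.

    import torch  # assumes PyTorch is installed in the job's environment

    # Use the GPU assigned to this job if one is visible, otherwise fall back to CPU.
    if torch.cuda.is_available():
        device = torch.device("cuda")
        print("Running on GPU:", torch.cuda.get_device_name(device))
    else:
        device = torch.device("cpu")
        print("No GPU visible; falling back to CPU")

    # Move a small computation onto the selected device.
    a = torch.randn(4096, 4096, device=device)
    b = torch.randn(4096, 4096, device=device)
    print("Result checksum:", float((a @ b).sum()))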