Please use the following acknowledgements in your published papers or presentations and include the citations below where appropriate:

  • This research was supported in part through computational resources provided by Syracuse University.
  • For OrangeGrid use specifically, please acknowledge NSF award ACI-1341006 in any publications.
  • If you received support from Syracuse University’s Cyberinfrastructure Engineer (Larne Pekowsky), please acknowledge NSF award ACI-1541396 in any publications.

Syracuse University Facilities, Equipment, and Other Resources

Data Center

Syracuse University Data Center

In the fall of 2010, Syracuse University completed construction of a new state-of-the-art data center with 6,000 square feet of floor space and 450 kilowatts of power and cooling capacity. It is now the main center for production computing resources, marrying research and administrative computing interests. The internal data center network is a mix of 10-, 40-, and 100-gigabit connectivity, with redundant connections to all hosts and network components wherever possible. The data center supports a robust virtual private cloud into which 98% of campus servers have been consolidated.

The data center is connected to the campus network and to the secondary data center via two geographically diverse bundles of 144-strand fiber. These paths can be, and have been, used to provide direct connectivity between a researcher’s campus building and the hosted area in the primary data center.

Data Center Hosting for Researchers

Roughly half of the space in the primary data center is hosted space designed to provide a secure physical environment that is flexible enough to allow researchers and graduate assistants access to the equipment. The hosted area is caged and has a separate entrance controlled by a combination of ID cards, biometric fingerprints, and PINs. The majority of the computing load in the data center serves research computing needs. Researchers on campus can rely on low-cost space in the data center that provides ideal environmental conditions for computing equipment, with power redundancy from multiple layers of protection including the traditional electric grid, UPS, and generator backup.

OrangeGrid

This distributed computing system, comprising some 15,000 cores, is used by SU faculty and researchers, particularly in the physical sciences and engineering, who need reliable, high throughput computing (HTC). The computers in the grid are optimized to run a large number of smaller parallel jobs (typically less than 24 hours each), providing high processing capacity over long periods of time. The grid utilizes virtualization via Oracle’s VirtualBox, scheduling via the HTCondor HTC system, and the Condor Virtual Machine Coordinator (CVMC), a small application developed by SU’s Information Technology and Services (ITS) team to manage the desktop components. These components are distributed to desktop clients via Microsoft’s Active Directory. HTCondor, developed with support from the National Science Foundation, manages the grid’s workload. Each computer’s task scheduler detects when the machine is idle, starts CVMC, and connects to HTCondor to receive work. When user activity is detected on the computer, research jobs are immediately stopped. Virtualization acts as a barrier that separates the researcher’s jobs and data from the desktop user’s information on the same computer.
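
As an illustration of how work is typically submitted to an HTCondor pool such as OrangeGrid, a minimal submit description file for a batch of short, independent jobs might look like the following sketch; the script name, resource requests, and job count are placeholder values, not OrangeGrid-specific requirements.

    # Minimal HTCondor submit description file (illustrative values only).
    # analyze.sh is a placeholder for a researcher-supplied script; $(Process)
    # passes the job index (0-499) to each run.
    universe        = vanilla
    executable      = analyze.sh
    arguments       = $(Process)
    request_cpus    = 1
    request_memory  = 2GB
    output          = results/out.$(Process)
    error           = results/err.$(Process)
    log             = analyze.log
    # Queue 500 independent copies of the job.
    queue 500

Submitting such a file with condor_submit hands the queued jobs to HTCondor, which matches each one to an idle machine in the grid as capacity becomes available.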

Academic Virtual Hosting Environment (AVHE)

The AVHE provides a private compute cloud to the Syracuse University research community, lowering the barrier to entry for small to medium sized research efforts. The private cloud uses virtualization to provide flexibility and hardware sharing, allowing multiple researchers to operate on a common underlying server and storage infrastructure. This lower barrier to entry and flexibility provides an environment that supports both traditional and non-traditional computational research. Another use case for the AVHE is building small to medium sized clustered research computing environments, reducing the need for researchers to build and maintain small physical clusters. Services within the AVHE include medium-scale storage for research data and archives; to date this amounts to over half a petabyte of research data and output. The AVHE uses virtualization to provide high availability, automatically migrating workloads to alternate resources in the event of a physical server failure. Backup services are provided to researchers as a standard part of the AVHE.

Crush Computing Cloud

Crush is a medium-scale virtualized research cloud. It is designed to be allocated for compute-intensive work and to be used in tandem with the AVHE, where the data and work scheduling infrastructure are maintained. A primary use case for Crush is the provisioning of a “cluster within a cluster,” providing access to dedicated, customized compute nodes for high-performance and high-throughput computing. Crush currently consists of over 25,000 cores and 150 terabytes of memory.
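
As a sketch of how a “cluster within a cluster” allocation might be addressed from the scheduling side, an HTCondor submit file can steer jobs to a dedicated set of machines with a requirements expression; the CrushGroup machine attribute below is a purely hypothetical name used for illustration, not an actual Crush configuration detail.

    # Hypothetical submit-file fragment; CrushGroup is an illustrative
    # machine attribute name, and simulate.sh a placeholder executable.
    requirements    = (TARGET.CrushGroup == "my_lab")
    request_cpus    = 16
    request_memory  = 64GB
    executable      = simulate.sh
    queue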

SUrge

SUrge is a heterogeneous resource pool supporting computationally intensive research that is accelerated by Graphics Processing Units (GPUs). It is designed to be allocated as a stand-alone resource for smaller scale work or in tandem with the AVHE for larger scale allocation via OrangeGrid. SUrge offers more than 250 GPUs, providing over a petaflop of aggregate computing capacity.
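
For illustration, a GPU-accelerated job could be requested from an HTCondor pool such as SUrge with a submit description along these lines; the script name and resource figures are placeholders rather than SUrge-specific settings.

    # Sketch of a GPU job request (illustrative values only);
    # train_model.sh stands in for a GPU-enabled application.
    universe        = vanilla
    executable      = train_model.sh
    # request_gpus asks HTCondor to match the job to a machine with a free GPU.
    request_gpus    = 1
    request_cpus    = 4
    request_memory  = 16GB
    output          = train.out
    error           = train.err
    log             = train.log
    queue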