Resources

LONIR has a temporary data center in the Soto 1 building, built during the summer of 2013.  The physical space in Suite 101 was converted from open cubicle space to a fully functional data center, with backup generator power, in approximately three weeks.  Over 90% of the new IT environment was built de novo using state-of-the-art processing and storage equipment from Cisco, EMC/Isilon, F5, and Dell.

The new infrastructure is built on a combination of Microsoft Windows Server 2012, Microsoft Exchange 2013, Microsoft Identity Management for UNIX, CentOS (now on a single unified revision for a dramatic reduction in complexity), and limited use of Ubuntu/Debian core servers.  The design placed a strong emphasis on leveraging the latest stable platforms to minimize the need for upgrades while taking advantage of the broader capabilities of these latest offerings.  Legacy Linux revisions were eschewed in favor of compatibility workarounds to remove needless complexity from the system while keeping the entire environment at the same patch and upgrade pace, thus limiting the potential for unknown intrusion vectors.

Deployment of the High Performance Computing (HPC) cluster took one week to reach beta and one additional week to reach production, thanks to the integration between xCAT (an open-source IBM product) and the Cisco UCS chassis.  In contrast, a traditional deployment of an environment this size requires more than six months.

LONIR’s temporary data center contains 216 Cisco blade servers, split between the HPC environment (208 blades) and the virtualized infrastructure (8 blades).  The HPC environment has 3,328 physical cores and 26,624GB (26TB) of aggregate memory.  The virtualization/infrastructure environment has 128 physical cores (hyperthreading adds another 128 virtual cores for the VM environment) and 1,024GB (1TB) of aggregate memory.  Each Cisco UCS chassis holds 8 blades and has eight (8) 10g network connections; the potential aggregate connectivity of the Cisco machines is 16.875 terabits.  The VNX SAN cluster has an available storage capacity of 15TB across 15k RPM SAS disks and tiered SSD storage.  The Isilon storage cluster has an available/online storage capacity of 2.5PB across 23 nodes; each node has two 10g network connections, giving the Isilon cluster a potential aggregate connectivity of 460 gigabits.  Backup services are handled through a two-drive LTO-5 tape library, and clones of critical data are sent offsite weekly for secure storage with Iron Mountain.
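
The capacity figures above follow from straightforward arithmetic.  The sketch below is a minimal cross-check, not part of the LONIR tooling; the per-blade link count used to reproduce the 16.875 terabit figure is an assumption for illustration, not a documented per-blade specification.

```python
# Cross-check of the capacity figures quoted above.  Inputs are taken from the
# text; the per-blade link count is an assumption used to reproduce the stated
# aggregate-connectivity figure.

HPC_BLADES = 208
VIRT_BLADES = 8
TOTAL_BLADES = HPC_BLADES + VIRT_BLADES            # 216 blades

cores_per_hpc_blade = 3328 / HPC_BLADES            # 16.0 physical cores per blade
mem_per_hpc_blade_gb = 26624 / HPC_BLADES          # 128.0 GB per blade

chassis_count = TOTAL_BLADES // 8                  # 27 UCS chassis (8 blades each)

# 16.875 terabits is reproduced if each blade is credited with eight 10g paths
# (an assumption) and gigabits are converted to terabits with binary scaling.
aggregate_connectivity_tb = TOTAL_BLADES * 8 * 10 / 1024   # 16.875

isilon_connectivity_gb = 23 * 2 * 10               # 460 gigabits across 23 Isilon nodes

print(cores_per_hpc_blade, mem_per_hpc_blade_gb, chassis_count,
      aggregate_connectivity_tb, isilon_connectivity_gb)
```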

The VMware environment spans the 8 infrastructure blades listed above and currently houses 115 virtual servers.  The virtual environment grows at a net rate of approximately one server every 1.5 days.

The infrastructure’s physical server footprint was condensed from more than 60 full-width servers in the previous data center down to 8 blades and 3 full-width servers, reducing the heat generation of the infrastructure alone from 246,000 BTU/hr to 20,000 BTU/hr (a 92% reduction) without compromising availability or capability.

Space savings: the current data center accomplishes as much work with 12 cabinets as was done in the previous two data centers, which utilized over 50 cabinets.
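
As a quick sanity check on the two consolidation figures above, the following minimal calculation treats "over 50 cabinets" as a lower bound of 50:

```python
# Minimal check of the consolidation figures quoted above.
old_btu_hr, new_btu_hr = 246_000, 20_000
heat_reduction = 1 - new_btu_hr / old_btu_hr       # ~0.919, i.e. the quoted 92% reduction

old_cabinets, new_cabinets = 50, 12                # "over 50" treated as a lower bound
cabinet_ratio = old_cabinets / new_cabinets        # >4x fewer cabinets for the same work

print(f"{heat_reduction:.1%}", f"{cabinet_ratio:.2f}x")
```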

The LONIR IT team creates, on average, a new management tool every two weeks.  These tools improve deployment speed, ensure normalization of the environment, and automate as many tasks as possible, continually improving the infrastructure’s scalability while drastically reducing direct management overhead.
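
As a hypothetical illustration of the kind of normalization check these tools perform (the host names, the checked attribute, and the drift logic below are invented for this sketch and are not taken from LONIR’s tooling):

```python
# Hypothetical normalization check: given per-host facts (here, an OS release
# string), flag any host that drifts from the majority baseline.
from collections import Counter

def find_drift(host_facts):
    """Return hosts whose reported value differs from the most common one."""
    baseline, _ = Counter(host_facts.values()).most_common(1)[0]
    return [host for host, value in host_facts.items() if value != baseline]

if __name__ == "__main__":
    reported = {
        "hpc-node001": "CentOS 6.4",
        "hpc-node002": "CentOS 6.4",
        "hpc-node003": "CentOS 6.3",   # out-of-date node to be remediated
    }
    print(find_drift(reported))        # ['hpc-node003']
```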

Soto DC outside networking: LONIR currently has two (2) outbound 10g connections and one (1) dedicated 10g connection to ISI.  We are planning for four (4) outbound 10g connections for the Soto data center.  When the Raulston data center is completed, we anticipate having six (6) 10g connections or redundant 100g connections.

Resource Capabilities

LONIR develops computational tools for neuroimaging and brain mapping and disseminates them to the community.  In addition, the resource provides state-of-the-art grid computing, data storage, and archival services to all of our collaborators.

You must register to access the following resources: