Holyoke Data Center (MGHPCC)
MIT is one of five partner universities (along with Harvard, UMass, Northeastern, and BU, as well as the state government) in the Massachusetts Green High-Performance Computing Center, which is located on the Connecticut River in downtown Holyoke, MA. MIT’s participation in MGHPCC is managed jointly by IS&T and the VP of Research; in the VPR’s office, ORCD manages space allocation in MIT’s section of the facility.
Facility information
MGHPCC is a modern, new-construction, LEED Platinum-certified, 90,000-square-foot data center facility, with 24-hour security and strictly controlled access. A limited number of personnel from each member institution have ID cards that allow access to the security desk, at which point they may pick up a set of keys allowing access to their institution’s facilities. In addition, there is a shared office space with a kitchen and a private conference room. A staging room is provided for unboxing and initial setup of equipment. Guests and service providers may work in the facility when accompanied by an authorized tenant representative.
Network connections
MIT provides network connectivity for all institutions at MGHPCC (except UMass) via the regional optical network. The long-haul network consists of two paths, one that runs parallel to the Massachusetts Turnpike (the “short path”) and one that runs parallel to I-95 through Chepachet, R.I., and New York City (the “long path”). The optical network uses DWDM to multiplex multiple 10-gigabit circuits over each path, and optical equipment at 300 Bent St. demultiplexes these circuits for delivery to each institution. Each institution’s network traffic is backhauled separately from Holyoke – only the physical layer is shared – and terminates on their own equipment. There is a shared “meet me” switch, operated by the facility managers, which can be used for institutions to interconnect at layer 2 or layer 3, but use of this has been stymied by policy differences at the various institutions.
CSAIL maintains its own network at MGHPCC, which is backhauled to Stata on 10-gigabit optical circuits via 300 Bent St. and building 24. This is a network-layer (layer 3) connection; TIG operates a 10-gigabit switch in MGHPCC (with out-of-band remote access and UPS power) to provide connectivity for CSAIL users in the facility. Note that CSAIL machines at MGHPCC are not on the same network that users currently have in Stata.
Electrical power
MGHPCC has a 15-megawatt utility connection from Holyoke Gas & Electric, about 70% of which is sourced locally from hydroelectric facilities on the Connecticut River. Of that, 10 MW is dedicated to computing systems (as opposed to infrastructure services like cooling, lighting, and security). Approximately one third of the facility capacity has UPS backup; the remaining two thirds receive conditioned utility power. The UPS system consists of a flywheel energy storage system to provide short-term (several seconds) emergency power, coupled with a generator system for longer-term outages. Note well: the MGHPCC electrical system lacks a redundant maintenance bypass. As a result, once a year there is a planned 24-hour full facility shutdown (including UPS power) for electrical-system maintenance.
Air handling
MGHPCC uses a “hot aisle” containment design with in-row cooling. Each pod consists of two rows of cabinets, facing each other back-to-back across a service aisle which is enclosed at both ends and both above and below. Four (six?) cabinets in each pod are dedicated to cooling, with thermostatically controlled variable-speed fans pulling air out of the hot aisle into the room. The system is engineered to maintain an ambient air temperature of 70–75°F. Normally mounted equipment should be designed for front-to-back ventilation; if any equipment is to be mounted on the rear rails of the cabinet, it should either be passively cooled, or be designed for back-to-front ventilation.
Shipping details
A loading dock is open to accept deliveries of equipment during regular business hours, and users are encouraged to have their equipment shipped directly there when possible. The shipping address is:
ATTN: Steven Ruggiero
MIT CSAIL
MGHPCC
100 Bigelow St.
Holyoke, MA 01040
phone: 413-552-4900
When identifying the location of equipment for inventory purposes, the MIT building number for MGHPCC is OC40.
Remote access
“Remote hands” services in Holyoke are available from private contractors. In addition, three TIG sysadmins have access to the facility and can be on site in the case of an emergency, but this is effectively an all-day exercise because of the travel time. For this reason, servers that require day-to-day hands-on access should remain on campus. TIG also has an out-of-band console server to manage CSAIL infrastructure at the facility; however, this server is not available to end users, and you should plan on providing your own console access (using IPMI or similar remote-access management systems) over a local (unrouted) private network. TIG will provide a remote-console VLAN and a bastion host for client use.
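Since the remote-console network is unrouted, reaching a machine’s management interface means going through the bastion host. The following sketch shows one way to do that with an SSH port forward; the bastion hostname and BMC address here are placeholders, not real TIG-assigned names — use the values TIG provides for your VLAN.

```shell
# Sketch: reach a BMC on the unrouted console VLAN via the bastion.
# "holyoke-bastion.csail.mit.edu" and 10.0.0.42 are HYPOTHETICAL
# placeholders -- substitute the bastion name and BMC address that
# TIG assigns to you.

# Forward local port 8443 to the BMC's HTTPS interface:
ssh -L 8443:10.0.0.42:443 username@holyoke-bastion.csail.mit.edu

# While the tunnel is up, the BMC web console is reachable at
# https://localhost:8443/ in a local browser.
```

The same `-L` forwarding works for other BMC ports (e.g. 623/udp-based IPMI requires a different approach, but SSH and HTTPS consoles tunnel cleanly this way).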
CSAIL facilities at MGHPCC
In addition to the network switches and remote console mentioned above, TIG operates NTP, DNS, and DHCP servers for client use at MGHPCC, as well as CSAIL Group scratch and NFS servers.
Process for getting space in MGHPCC
CSAIL members interested in placing equipment at MGHPCC should contact help@csail.mit.edu as soon as you think you need hardware. Even if you’re just exploring options, TIG can save you time and money. See Server Purchasing Support and Strategy for details.
Once TIG receives the request, we will coordinate the allocation of space and other resources, and arrange for movers (if necessary) to deliver the equipment to MGHPCC once the space is ready to accommodate it.
IPMI (Remote Connection to Management Port on Machines In Holyoke)
For each machine sent to Holyoke we configure IPMI (the Intelligent Platform Management Interface). Among other things, this will give users the ability to hard-reboot machines that are unresponsive.
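Once you can reach the management network, the standard `ipmitool` client covers the common cases. A minimal sketch follows; the BMC address and credentials are placeholders, so substitute the values for your own machine.

```shell
# Sketch using ipmitool over the IPMI-over-LAN ("lanplus") interface.
# 10.0.0.42, ADMIN, and 'secret' are PLACEHOLDERS for your machine's
# BMC address and credentials.

# Check the current chassis power state:
ipmitool -I lanplus -H 10.0.0.42 -U ADMIN -P 'secret' chassis power status

# Hard-reset an unresponsive machine:
ipmitool -I lanplus -H 10.0.0.42 -U ADMIN -P 'secret' chassis power reset

# Attach a serial-over-LAN console (detach with ~. ):
ipmitool -I lanplus -H 10.0.0.42 -U ADMIN -P 'secret' sol activate
```

Note that these commands must originate from a host on the console VLAN (or be tunneled through the bastion), since the management network is not routed.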


