How Data Centers Survived Hurricane Sandy

Slashdot has been putting together interviews with data centers in Delaware, New York, and other states directly in the path of Hurricane Sandy, asking how they made it through the disaster with almost complete uptime. Many of these companies provide hosting and data solutions for others, while some only have to protect their own data. What emerges is that keeping a data center running really comes down to a few things: on-site staff, power (the most important), and water leakage prevention.

CoreSite – How CoreSite Survived Sandy – November 7th, 2012

IPR – How IPR Survived Sandy – November 19th, 2012

Peer1 – How Peer1 Survived Sandy – December 7th, 2012

Staff
In every case, on-site staff was critical. Site staff were provided with food, water, and places to rest, and many worked extra hours and forwent sleep to keep the environments stable.

Power
Power is obviously the most important part of maintaining a data center. Each data center was connected to at least two power grids, and the fallback of last resort was a series of redundant generators running on gasoline or diesel. While emergency fuel is kept on-site, additional supplies are delivered ahead of expected issues or potential long-term outages.
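As a rough illustration of why those fuel deliveries matter, here is a small back-of-the-envelope sketch of generator runtime. The tank size and burn rate below are purely hypothetical examples, not figures from the interviews:

```python
# Back-of-the-envelope generator runtime estimate.
# All numbers are hypothetical examples, not figures from the interviews.

def runtime_hours(tank_gallons: float, burn_rate_gph: float) -> float:
    """Hours of generator runtime for a given tank size and burn rate."""
    return tank_gallons / burn_rate_gph

on_site_fuel = 2000.0   # gallons of diesel stored on-site (hypothetical)
burn_rate = 70.0        # gallons per hour at full load (hypothetical)

hours = runtime_hours(on_site_fuel, burn_rate)
print(f"On-site fuel lasts roughly {hours:.1f} hours (~{hours / 24:.1f} days)")
# If the utility outage is expected to run longer than this,
# extra fuel has to be delivered before the storm hits.
```

Under those made-up numbers, on-site fuel covers only a day or so, which is why the data centers arranged deliveries before Sandy arrived rather than after.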

Disaster Recovery
Only two of the three interviews demonstrated effective disaster recovery plans. Disaster recovery is the plan you fall back on when everything goes down despite all of your preparations. Unsurprisingly, people rarely think through the consequences of not having a proper disaster recovery plan in place. You need off-site storage and servers you can quickly boot up to start serving content again; extended downtime can do serious damage to any IT company.
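None of the interviews describe their actual failover tooling, but as a minimal sketch of the idea, assuming a primary site and a warm standby at an off-site location (the URLs and the switch_dns_to() hook are hypothetical placeholders), a periodic health check that flips traffic to the standby might look something like this:

```python
# Minimal failover health-check sketch (standard library only).
# The URLs and switch_dns_to() are hypothetical placeholders; a real
# deployment would call its DNS or load-balancer API at that point.
import time
import urllib.error
import urllib.request

PRIMARY = "https://primary.example.com/health"
STANDBY = "https://standby-offsite.example.com/health"

def is_healthy(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def switch_dns_to(target: str) -> None:
    """Placeholder: point public DNS / the load balancer at the given site."""
    print(f"Failing over: traffic now directed to {target}")

if __name__ == "__main__":
    failures = 0
    while True:
        if is_healthy(PRIMARY):
            failures = 0
        else:
            failures += 1
            # Require a few consecutive failures to avoid flapping on brief blips.
            if failures >= 3:
                switch_dns_to(STANDBY)
                break
        time.sleep(30)
```

The point is less the script than the requirement behind it: the off-site copy has to be kept close enough to current that flipping traffic to it is actually an option.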

Flooding
Flooding seemed all but inevitable during this particular hurricane; it sounded like everyone took on some amount of water. Some had solutions such as pumps to get the water back out, though in some cases those belonged to the buildings rather than to the data centers themselves.

All in all, it sounds like everybody made it out very well. Power and connectivity were maintained, and staff are now refining their plans and getting things back to normal. A win for planning.

