Data Centers

Data Center Questions

  1. Are any of you considering hot/cold aisle containment? If yes, how are you approaching the fire hazard issue presented by aisle containment?
  2. Is there a standard server efficiency metric? Since server performance is judged on different criteria (processing speed, RAM, etc.), I am not sure what to use as the appropriate denominator when ranking the efficiency of servers and trying to find the best-in-class options for efficiency. Note: this question is different from general data center efficiency metrics, such as PUE; I am looking specifically at hardware efficiency evaluation.
  3. Is a 30-foot ceiling too high for a data center? Given that a rack is about 7 feet tall, why would anyone design a data center four times higher? Doesn't this mean that there is that much more space to cool (if no aisle containment is in place)?
  4. Has anyone looked into dynamic UPS systems such as rotating flywheels? I'm trying to find general information (how they work, etc.) as well as pricing information to see if it makes sense in one of our data centers.
  5. Is there a legal minimum foot-candle output in data centers/labs in the US? How many foot-candles are in a "normal" range?
  6. What is a PDU and how can it help with energy efficiency?

Data Center Answers

1. Q: Are any of you considering hot/cold aisle containment? If yes, how are you approaching the fire hazard issue presented by aisle containment?

A: A few approaches include:

  • Installing sprinklers within the aisles
  • Checking whether the fixtures holding the containment panels are heat-sensitive and would melt in a fire, allowing the fire sprinklers to do their job. If the fixtures aren't heat-sensitive, one can install temperature-sensitive plastic dividers that melt above a certain temperature so the fire-suppression system can reach the area.

2. Q: Is there a standard server efficiency metric? Since server performance is judged on different criteria (processing speed, RAM, etc.), I am not sure what to use as the appropriate denominator when ranking the efficiency of servers and trying to find the best-in-class options for efficiency. Note: this question is different from general data center efficiency metrics, such as PUE; I am looking specifically at hardware efficiency evaluation.

A: The assumption is that you are trying to find a server energy efficiency metric either to benchmark your existing servers and propose an investment in new energy-efficient servers, or simply to compare the energy efficiency of two servers.

There are currently two major sources for estimated server power consumption: 1) the Neal Nelson Power Efficiency Test and 2) the SPECpower benchmark. Both tests report server power consumption at various "percent busy" load levels, so the metric should probably be watts at each percent-busy load level.

So all you have to do is create a load profile of the server's percent-busy values, prepare class intervals matching your chosen benchmark (Neal Nelson or SPECpower), and then weight the power consumed at each load level by the share of time your server spends there. This tells you the server's current energy consumption, which you can then compare against a new server after adjusting for configuration differences.
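As a rough illustration of the weighting described above (all wattages, load shares, and the helper name `average_power` below are made-up for the sketch, not real benchmark results):

```python
# Sketch: estimating a server's annual energy use from a load profile
# and published power-vs-load data (e.g. SPECpower-style measurements).
# All numbers are hypothetical, not actual benchmark figures.

# Published power draw (watts) at each "percent busy" load level
power_at_load = {0: 120, 25: 180, 50: 230, 75: 280, 100: 330}

# Measured load profile: fraction of operating hours at each level
load_profile = {0: 0.40, 25: 0.30, 50: 0.15, 75: 0.10, 100: 0.05}

def average_power(power_w, profile):
    """Time-weighted average power draw in watts."""
    return sum(power_w[level] * share for level, share in profile.items())

avg_w = average_power(power_at_load, load_profile)
annual_kwh = avg_w * 8760 / 1000  # 8760 hours per year -> kWh

print(f"Average draw: {avg_w:.0f} W, ~{annual_kwh:.0f} kWh/year")
```

Running the same load profile against a candidate replacement server's power-vs-load data gives a like-for-like comparison.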

3. Q: Is a 30-foot ceiling too high for a data center? Given that a rack is about 7 feet tall, why would anyone design a data center four times higher? Doesn't this mean that there is that much more space to cool (if no aisle containment is in place)?

A: According to experts at Rocky Mountain Institute, a high ceiling is actually a good practice, if not a "best practice." The higher ceiling allows the air to naturally stratify (hot air rises). This helps remove hot exhaust air from the server outlets, while keeping cool air lower. If the return air stream is taken from the top of the room and cold air is supplied underfloor, a certain degree of hot/cold separation can be achieved without any physical barriers.

Still, I would expect physical aisle containment to provide savings. The issue is not so much reducing the amount of air that you need to cool (since there shouldn't be significant heat loads in the high ceiling). Instead, it's the usual benefits of increasing the temperature drop across the chiller and reducing air flow volume (20% chiller savings and 25% fan savings, according to PG&E). But since the current design is already somewhat more efficient than a base-case design, the savings would not be as great. I don't have specific data on this. You might try searchdatacenter.com, techtarget.com, or datacenterknowledge.com, or just search Google.

Two other considerations: Fire sprinkler issues could be a bigger problem with higher ceilings. You might want to do some research or make some calls to experts on that topic. Secondly, the extra ceiling space may help implement enclosed hot aisles because there is plenty of room up there to hang exhaust air return ducts.

4. Q: Has anyone looked into dynamic UPS systems such as rotating flywheels? I'm trying to find general information (how they work, etc.) as well as pricing information to see if it makes sense in a data center.

A: A UPS is primarily an energy storage system (battery- or flywheel-based) that fills the momentary gap (usually only a few seconds) before a data center's back-up generators kick in following a power outage.

  • Batteries in a conventional UPS provide ride-through: if the power supply suddenly cuts out, the stored electricity in the batteries covers the load.
  • Flywheels serve the same purpose, but store energy as physical momentum: a large mass spins constantly, and if power cuts out, the momentum of the spinning wheel drives a generator.
  • A flywheel can't supply power for as long as batteries can: only long enough to get the diesel generators online (seconds to a few minutes at the data center's full load), whereas batteries can last 30 minutes or more, and in some data centers hours.
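To get a feel for why flywheel ride-through is so short, here is a back-of-the-envelope sketch using the kinetic-energy formula E = ½Iω² (all rotor and load figures are illustrative assumptions, not a real product's specifications):

```python
import math

# Sketch: rough ride-through time of a flywheel UPS.
# Rotor and load parameters are illustrative, not real product specs.

mass_kg = 600.0   # rotor mass
radius_m = 0.5    # rotor radius (modeled as a solid cylinder)
rpm = 7700.0      # spin speed

inertia = 0.5 * mass_kg * radius_m ** 2   # I = 1/2 * m * r^2
omega = rpm * 2 * math.pi / 60            # angular speed in rad/s
energy_j = 0.5 * inertia * omega ** 2     # E = 1/2 * I * omega^2

load_kw = 200.0        # data-center load the UPS must carry
usable_fraction = 0.5  # only part of the stored energy is extractable
                       # before the wheel slows below usable speed

ride_through_s = energy_j * usable_fraction / (load_kw * 1000)
print(f"Stored energy: {energy_j / 1e6:.1f} MJ, "
      f"ride-through at {load_kw:.0f} kW: ~{ride_through_s:.0f} s")
```

With these assumed numbers the wheel rides through for roughly a minute, which is why flywheels are paired with fast-starting diesel generators.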

Below are a few resources on this topic:

  • This article was a good starting point for understanding the pros and cons (it's somewhat old, but will get you started).
  • Also, a great paper from APC on the different options:

5. Q: Is there a legal minimum number of foot candle output in data center/labs in the US? How much foot candle is a "normal range"?

A: The link below is where I found my information, but it doesn't cover data centers specifically. I think you will have to check the IESNA design guide for the rest; a contractor, engineer, or data center lighting designer should have it.

[http://www.xtralight.com/footcandle.aspx]

6. Q: What is a PDU and how can it help with energy efficiency?

A: Power Distribution Units (PDUs) are intelligent power strips that provide real-time remote load monitoring of connected equipment, along with individual outlet control for remotely power-cycling lab equipment. PDUs enable lab managers to monitor and control energy use within the lab environment.

It is important to note that installing a PDU will not automatically generate energy savings: the PDU must be programmed with scripts that manage power distribution and turn equipment on and off when appropriate. PDUs have the potential to assist in energy reduction, but actual savings depend on how heavily the lab equipment is utilized. If all of the equipment must run 24/7, PDUs will not contribute any savings. If, on the other hand, lab equipment is only utilized 70% of the time, PDUs can turn the equipment off for the 30% of the time it is not in use. Thus, PDUs can indirectly provide both energy and cost savings.
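Continuing the 70%-utilization example from the answer above, a rough per-device savings estimate (the idle power draw and electricity rate below are assumed values, not measured data) could be sketched as:

```python
# Sketch: potential savings from PDU-scheduled shutdown, using the
# hypothetical 70%-utilization scenario. Idle power and electricity
# rate are assumptions for illustration only.

idle_power_w = 150.0    # draw of one piece of equipment when left on idle
utilization = 0.70      # fraction of time the equipment is actually needed
hours_per_year = 8760
rate_per_kwh = 0.12     # assumed electricity price, $/kWh

# Energy wasted if idle equipment is left on instead of powered off
off_hours = hours_per_year * (1 - utilization)
saved_kwh = idle_power_w * off_hours / 1000
saved_usd = saved_kwh * rate_per_kwh

print(f"~{saved_kwh:.0f} kWh and ${saved_usd:.0f} saved per device per year")
```

Multiplying by the number of PDU-controlled devices gives a quick first estimate of whether the PDU investment pays back.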

Resources

Wikipedia's Lighting site

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License