Score:0

Why so much need for density?


In today's world we look everywhere for power savings. In the datacenter environment I see more and more focus on power efficiency, yet servers keep being designed with a focus on density. I have trouble understanding the rationale here.

My point is that a high-density setup mostly needs small, high-speed fans for cooling, and those fans consume far more power than a few low-speed fans would.

Example: my company (a small 15-person office) bought a 5U server from Dell. It came with a 12 cm fan rated 12 V / 2.7 A, i.e. 30+ watts, whose air was ducted to a passive heatsink on the CPU. I replaced it with a single 12 cm, 12 V / 0.2 A fan, swapped the passive heatsink for a cooler with its own 12 V / 0.2 A fan, and added two more such fans at the back of the drive bays. Result: power consumption dropped by 60% (not counting the further savings from PWM regulation) and temperatures are far better than before. Cherry on top, it runs silent.
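To put rough numbers on it (nameplate ratings only; the real draw under PWM regulation is lower for both setups, which is presumably why my measured saving came out at 60% rather than the ~70% the nameplates suggest):

```python
# Back-of-envelope comparison of the two cooling setups above,
# using nameplate ratings only (P = V * I).

def fan_watts(volts, amps):
    return volts * amps

stock = fan_watts(12, 2.7)       # original high-speed 12 cm duct fan
quiet = 4 * fan_watts(12, 0.2)   # duct fan + CPU cooler + 2 drive-bay fans

print(f"stock fan:        {stock:.1f} W")            # 32.4 W
print(f"4 low-speed fans: {quiet:.1f} W")            # 9.6 W
print(f"nameplate saving: {1 - quiet / stock:.0%}")  # 70%
```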

My question then: why do datacenters use high-density servers? Space is cheap (most datacenters are built in the countryside, where land cost is negligible and the buildings themselves can easily be made bigger), so why not go for a lower-density architecture that is easier to cool?

Or did I get it wrong in my analysis of the market?

Edit to clarify:

In other words, why do data centers choose 1U servers rather than 4U servers? The latter would surely take four times the space, but at the end of the day the extra cost should be limited to a few extra square meters of building. The cooling need remains the same (as long as the computing power remains the same), but thanks to the lower density it is far easier to meet, and at a fraction of the power (not to mention that low-density equipment is easier to maintain and repair).
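As a rough illustration of the trade-off (every figure below is an assumption I made up for the sake of the comparison, not a real quote):

```python
# Break-even sketch for the 1U-vs-4U question above.
# All figures are illustrative assumptions, not measurements.

KWH_PRICE      = 0.15      # assumed electricity price, USD/kWh
HOURS_PER_YEAR = 24 * 365

fan_watts_1u = 60          # assumed cooling overhead of a dense 1U server
fan_watts_4u = 15          # assumed cooling overhead of a roomy 4U server
extra_units  = 4 - 1       # extra rack units the 4U design occupies

# Yearly electricity saved by the lower-density design...
power_saving = (fan_watts_1u - fan_watts_4u) / 1000 * HOURS_PER_YEAR * KWH_PRICE
# ...and the rack-unit price below which that saving pays for the space.
breakeven = power_saving / extra_units

print(f"yearly power saving of the 4U design: ${power_saving:.2f}")    # $59.13
print(f"4U only wins if a rack unit costs under ${breakeven:.2f}/yr")  # $19.71
```

If rack space in practice costs more per unit and year than that break-even figure, density wins; that would match the answers below.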

Krackout: I have the same question: would it be better for heat management to leave gaps of 1 or 2 U between rack-mounted equipment (servers, switches, etc.)? In a datacenter there may be enough room, but in a small company where only a small room serves as the server room, space is a limited resource. That scenario, of a small company running its own IT equipment, is getting rarer due to cloud, but space is also an issue in leased datacenter space, which is a growing trend as a middle ground between in-house equipment and full cloud services. I wonder if there are any long-term analyses.
The 2.7 A is the maximum power draw (during spin-up); afterwards the fan uses a lot less.
Memes: @Krackout Yes, in a datacenter there is space, and datacenters are also the ones looking hardest for power savings, since at their scale the cost is noticeable. As for a small IT company, I don't think there are companies rich enough to fill a room with high-density servers yet not rich enough to allocate a few more square meters for it; at least, that looks like an odd place to be.
djdomi: Does this answer your question? [Can you help me with my capacity planning?](https://serverfault.com/questions/384686/can-you-help-me-with-my-capacity-planning)
djdomi: Space is the most expensive thing in a datacenter, because it is LIMITED.
Zac67: Requirements differ; space/size, cooling, peak/continuous performance, power consumption, noise, reliability and resilience are just some of them. You should not assume that your requirements are the most important ones for the rest of the world. Strongly opinionated question, voting to close.
Memes: @Zac67 I understand that requirements differ. My question is more: why do datacenters seem to prefer space efficiency over power efficiency? Is there another underlying reason?
Memes: @djdomi It does not answer my question; rather, it makes it even more of a question. The answer states "RAM is cheap", "Disk is cheap" and "Electricity is expensive". Well, physical space is also cheap, so why not use more of it to save on the expensive electricity?
Score:1

If your need is to save power and run silent, don't buy a rackmount server; get a tower server.

Server-side technology is built for maximum performance, and rackmount servers are designed to be stacked. The fan inside your server is sized the way it is because the design assumes the server will be stacked among other heat-producing servers: the air conditioning pushes air to the front of the rack, and being stacked means each server needs strong front-to-back airflow of its own.

Space is not cheap, but if that's not true in your case, go for a tower solution.

Memes: I just took my particular example to illustrate that cooling at low density is dead easy. My point was more about why servers need to be so densely packed when you can get the same performance in a bigger footprint. I have edited the question to make that point clearer.
yagmoth555: @Memes The thing with 1U servers is that you can't fit many HDDs in them, but they pack a lot of processing power. So a full 42U rack can hold, for example, 42 hypervisor servers backed by an external SAN for storage, while in your scenario only 10 servers fit in 4U configurations, and those 4U servers would sit mostly empty of HDDs. Each scenario has its use case.
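(A trivial sketch of the density gap being described, counting raw rack units only:)

```python
# Servers per 42U rack for the two form factors discussed above,
# ignoring switches, PDUs and blanking panels.
rack_units = 42
for size_u in (1, 4):
    print(f"{size_u}U servers per rack: {rack_units // size_u}")
# -> 42 vs 10: the 1U layout packs 4x the compute into the same rack
```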
Memes: Well noted. It seems that, contrary to my belief, physical space is really expensive... I'm surprised by the result, but I'll accept it.