Updated: Oct 7
As a technical resource coming from the storage world, I have always considered servers to be a commodity necessary to run applications. For me, a server was just a server, since all servers on the market share the same components.
As I dive deeper into the PowerEdge world, though, I am seeing some significant differences between PowerEdge and the competition in terms of hardware and management. True, many of the components, such as CPUs, memory, and network cards, are similar across PowerEdge, ProLiant, and UCS, but the environment in which those components operate makes all the difference between the various server vendors.
In this blog, I will go through that environment at a high level and show where PowerEdge is different. In future blogs, I will dive deeper into some of those unique features, as well as PowerEdge management, to show how they differ and what they mean to a customer.
Let’s dive into what I have discovered. Every major increase in processing power has typically gone hand in hand with an increase in power consumption and heat generation. The jump from Cascade Lake to Ice Lake is no exception: the top Ice Lake Xeon Scalable processor is a hungry beast with a maximum power consumption of 270W, compared to 205W for the top Cascade Lake processor.
The chart below shows how each processor performs in terms of power efficiency, using the industry-standard SPECpower benchmark.
As the chart shows, Cascade Lake can be up to 80% more power efficient than Ice Lake, especially for small and medium workloads: at less than 1M Ops/sec, Ice Lake draws 80% more power than Cascade Lake. The gap between the two shrinks as the number of Ops/sec increases until it disappears completely around the 6M Ops/sec mark, but above that mark, Ice Lake’s power consumption increases faster than its performance gains.
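To make the relationship concrete, here is a minimal sketch of how a SPECpower-style efficiency comparison works. The wattage figures are hypothetical assumptions chosen to illustrate the 80% gap described above; they are not readings from the actual chart.

```python
# Illustrative only: hypothetical SPECpower-style measurements, not the
# actual chart data. Shows how "80% more power" at the same load implies
# "80% more power efficient" for the other processor.

def efficiency(ops_per_sec: float, watts: float) -> float:
    """Power efficiency expressed as operations per second per watt."""
    return ops_per_sec / watts

# Hypothetical readings at the same 1M Ops/sec load point.
load = 1_000_000            # Ops/sec
cascade_lake_watts = 100.0  # assumed draw, for illustration
ice_lake_watts = 180.0      # 80% more power at the same load

gap = efficiency(load, cascade_lake_watts) / efficiency(load, ice_lake_watts) - 1
print(f"Cascade Lake is {gap:.0%} more efficient at this load")  # -> 80%
```

Because efficiency is ops per watt at a fixed load, drawing 1.8x the power is exactly equivalent to being 1.8x less efficient, which is why the two 80% figures in the text are the same fact seen from both sides.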
To manage that power consumption and its associated heat, I discovered that Dell introduced three main innovations to PowerEdge:
Multi-Vector Cooling 2.0
Direct Liquid Cooling
NextGen PSUs with 92% efficiency
I will cover each of them in a little more detail, although I will dedicate a further blog to liquid cooling and why it is such a major breakthrough for servers.
Let’s start with Multi-Vector Cooling 2.0. Now more than ever, separate areas within a server chassis have different cooling requirements. As shown above, the Ice Lake Xeon Scalable processors generate a fair amount of heat and thus require more cooling than network cards, which run fairly cool. Similarly, GPUs can run even hotter than processors and thus require additional cooling. Multi-Vector Cooling achieves this by placing temperature sensors within the chassis and controlling the speed of individual fans. It also requires optimized airflow within the chassis, achieved by leveraging recycled-plastic air shrouds. The picture provides a great illustration of this.
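The idea of per-zone sensors driving individual fans can be sketched as a simple control loop. The zone names, target temperatures, and the proportional rule below are assumptions for illustration only; they are not Dell's actual fan-control algorithm.

```python
# Conceptual sketch of multi-vector-style cooling: per-zone temperature
# sensors drive individual fan speeds, so hot zones (CPUs, GPUs) get more
# airflow than cool ones (NICs). All numbers are illustrative assumptions.

def fan_speed_pct(temp_c: float, target_c: float, gain: float = 4.0) -> float:
    """Proportional control: raise fan speed as a zone exceeds its target."""
    base = 20.0  # minimum idle fan speed (%)
    return min(100.0, max(base, base + gain * (temp_c - target_c)))

# Hypothetical per-zone readings: (measured temp, target temp) in Celsius.
zones = {"cpu": (75.0, 60.0), "gpu": (82.0, 65.0), "nic": (45.0, 55.0)}
for zone, (temp, target) in zones.items():
    print(f"{zone}: {fan_speed_pct(temp, target):.0f}% fan speed")
# cpu: 80%, gpu: 88%, nic: 20% (idle) -- each zone is cooled independently
```

The point of the sketch is the decoupling: because each fan responds to its own zone's sensor, the NIC zone can idle at minimum speed while the GPU zone runs near full airflow, instead of every fan chasing a single chassis-wide temperature.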
Let’s now talk about Direct Liquid Cooling (DLC).
Liquid cooling is not a new concept: personal computers have supported all-in-one liquid coolers since 2009, and mainframes and supercomputers have been liquid-cooled since the 1980s. The vast majority of datacenters, however, are air-cooled, and therefore so are the majority of servers. Unfortunately, air cooling can only go so far, as air is not as efficient as liquids at absorbing heat, which is why supercomputers, with their massive cooling requirements, have long used liquid cooling.
The heat generated by the new Ice Lake Xeon Scalable processors and the democratization of GPUs are pushing the boundaries of what air cooling can do, which is why Dell is generalizing Direct Liquid Cooling across its server line to ensure that components are cooled effectively and operate within their optimal parameters. I will write a further blog about the dramatic effect of DLC on analytics and AI workloads compared to air cooling.
Datacenters that support liquid cooling today are few and far between, but their numbers are increasing every day. Yes, a customer might not be able to support liquid cooling today, but having it available ensures that tomorrow, when their datacenter does support it, they can take advantage of it.
Both Multi-Vector Cooling 2.0 and Direct Liquid Cooling sound like great innovations, but they raise two main questions:
Why are these important?
What do these mean for customers?
Let’s look at the answers, starting with why these are important. The answer is that keeping a server cool achieves a few things:
It extends the life of the components within that server, which translates into fewer failures and thus improved uptime.
It ensures the components can function at their peak performance. Thermal throttling is a reality nowadays and can severely impact the performance of a server. Why would a customer purchase a high-end server, only to have it throttled just when maximum performance is required?
In datacenters where cooling capacity and power consumption are already constraints, optimized cooling and power help stretch those resources further, allowing better density within the datacenter.
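The thermal-throttling point above can be sketched with a toy model: once the die crosses a temperature limit, the clock (and thus performance) is cut back. The clock speeds, limit, and 100 MHz-per-degree rule are illustrative assumptions, not measurements from any real CPU.

```python
# Toy model of thermal throttling: above a temperature limit, the
# effective clock drops, so heat directly costs performance. All numbers
# are illustrative assumptions, not real CPU behavior.

BASE_CLOCK_GHZ = 3.0
THROTTLE_LIMIT_C = 95.0

def effective_clock_ghz(temp_c: float) -> float:
    """Drop 100 MHz for every degree above the throttle limit."""
    if temp_c <= THROTTLE_LIMIT_C:
        return BASE_CLOCK_GHZ
    return max(1.0, BASE_CLOCK_GHZ - 0.1 * (temp_c - THROTTLE_LIMIT_C))

# Better cooling keeps the die below the limit, so no performance is lost.
print(effective_clock_ghz(80.0))   # well cooled -> 3.0 GHz
print(effective_clock_ghz(105.0))  # overheating -> 2.0 GHz
```

In this toy model, a server that cannot shed its heat silently gives up a third of its clock speed, which is exactly the scenario better cooling is meant to prevent.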
In conclusion, and perhaps most importantly, what does all of this mean for our customers? It means they can get the best performance out of their investment while using only the resources required to achieve that performance and no more. That translates into better utilization of their datacenter resources, better density within the datacenter, and a longer life for their datacenter investment.
Going back to the question I opened this blog with: how can PowerEdge be different from ProLiant or UCS? The answer is that, by innovating in areas such as cooling, PowerEdge allows customers to get more performance out of the same components everybody else uses.