Data Center Efficiency: Five Things That Move PUE
Five efficiency levers that actually move the PUE needle, and four popular ideas that mostly do not.

Data center efficiency discussions are usually more aspirational than operational. Everyone wants a PUE under 1.2, most facilities deliver between 1.5 and 1.9 in practice, and the difference between the two is almost always a handful of specific engineering choices — not a wholesale redesign. After 23 years of building and operating data halls for customers, here are the five levers that we have seen actually move the PUE needle, and a few popular ideas that mostly do not.
Understanding the Number First
PUE (Power Usage Effectiveness) is total facility power divided by IT power. A PUE of 1.5 means that for every watt delivered to a server, another half-watt is spent on cooling, lighting, UPS losses, and everything else. The industry obsession with PUE has produced some honest progress and a lot of misleading marketing — facilities that report 1.1 PUE at a 10 percent load in a perfect ambient climate are technically telling the truth and practically telling a story.
The more useful questions: what is the PUE at realistic load (70 to 80 percent), and what does it average across a full year, including the worst month? Those are the numbers operators live with.
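To make the difference concrete, here is a minimal sketch that computes both the flattering single-month PUE and the load-weighted annual figure from monthly meter readings. The monthly numbers are hypothetical, purely for illustration:

```python
# Hypothetical monthly energy meter readings (MWh). PUE = total facility
# energy / IT energy; an annualized figure weights every month, including
# the worst one, instead of cherry-picking a mild-weather snapshot.
monthly = [
    # (total_facility_mwh, it_mwh) -- illustrative numbers only
    (410, 300), (400, 300), (390, 300), (380, 300),
    (420, 300), (450, 300), (480, 300), (470, 300),
    (430, 300), (400, 300), (390, 300), (405, 300),
]

best_month = min(total / it for total, it in monthly)
annual = sum(total for total, _ in monthly) / sum(it for _, it in monthly)

print(f"best single month PUE: {best_month:.2f}")  # the marketing number
print(f"annualized PUE:        {annual:.2f}")      # the number you live with
```

On this made-up data the best month reports 1.27 while the annualized figure is 1.40, which is exactly the gap between a brochure and an electric bill.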
Lever 1: Raise the Chilled Water Temperature
The single most impactful efficiency change we make in existing facilities. Most legacy chilled water plants were designed for 7 C supply / 13 C return — numbers borrowed from commercial HVAC. Modern IT cooling can tolerate supply water at 18 to 22 C without any loss of capacity, and the chiller plant efficiency goes up dramatically as you raise the setpoint.
The math is straightforward: a chiller running at 18 C supply consumes roughly 30 to 40 percent less energy than the same chiller at 7 C supply for the same cooling load. Across a 24x7 operation that compounds fast. Combined with an economizer, warm-water operation enables free cooling for substantially more of the year; in most US climates you can run mechanical cooling for less than a third of the hours in a year.
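To put rough numbers on that, here is a minimal sketch using the common rule of thumb that chiller energy drops a few percent per degree Celsius of setpoint increase. The per-degree coefficient is an assumption for illustration only; the manufacturer's part-load curves are the real authority for any actual retrofit decision:

```python
# Rough chiller savings from raising the supply setpoint. The 3-4% per
# degree C figure is a common rule of thumb, NOT a rated value -- use the
# manufacturer's part-load curves for real engineering decisions.
def setpoint_savings(old_c: float, new_c: float, pct_per_degree: float) -> float:
    """Fractional energy savings for a setpoint increase, compounded per degree."""
    return 1.0 - (1.0 - pct_per_degree) ** (new_c - old_c)

for pct in (0.03, 0.04):
    saved = setpoint_savings(7.0, 18.0, pct)
    print(f"{pct:.0%}/degree rule of thumb -> ~{saved:.0%} less chiller energy")
```

At 3 to 4 percent per degree, the 7 C to 18 C change works out to roughly 28 to 36 percent less chiller energy, consistent with the 30 to 40 percent range above.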
The catch
Your cooling equipment has to be rated for warm-water operation. CRAC units designed for 7 C water will not hit nameplate capacity at 18 C. Retrofitting is an engineering exercise, not a setting change, and it may require new coils or replacement units. But for a new build or a major refresh, warm water is almost always the right call.
Lever 2: Hot Aisle Containment, Done Right
Containment is the textbook answer and it deserves its reputation — when executed properly. "Properly" is where most facilities fall short. Proper containment means: sealed ceilings or aisle doors, blanking panels in every empty rack U, brush grommets on every cable penetration, gaskets on the raised floor tiles, and ongoing discipline about keeping all of those in place as racks are added and removed.
We walk into older facilities all the time with chicken-wire ceilings and half the blanking panels missing. Adding or fixing containment in those environments is typically a 0.10 to 0.25 PUE improvement for a few thousand dollars of materials and a weekend of work. It is the highest-return intervention we know of, and it is always available.
The discipline problem
Containment degrades constantly: every technician who pulls a server and does not re-blank, every rearrangement of cable trays that creates a new gap, every air dam left open for convenience chips away at it. The facilities that sustain good containment run quarterly thermal imaging audits and treat containment discipline like cleanroom discipline. The ones that do not run those audits lose the gains within a year.
Lever 3: Right-Size the UPS
UPS systems are efficient at high load and inefficient at low load. A typical double-conversion UPS running at 25 percent load is 88 to 92 percent efficient; running at 75 percent load it is 94 to 96 percent efficient. The difference on a 500 kW IT load is 20 to 30 kW of pure waste heat that still has to be cooled.
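The arithmetic behind that waste-heat figure is worth seeing once. A quick sketch, using efficiency values from the ranges above:

```python
# UPS losses at two load factors for the same 500 kW IT load. Input power
# is it_load / efficiency; everything above the IT load is loss that the
# cooling plant must also remove.
IT_LOAD_KW = 500.0

for label, eff in [("25% load (oversized UPS)  ", 0.90),
                   ("75% load (right-sized UPS)", 0.95)]:
    loss_kw = IT_LOAD_KW / eff - IT_LOAD_KW
    print(f"{label}: {loss_kw:.0f} kW of loss")

# ~56 kW vs ~26 kW: roughly 30 kW of pure waste heat eliminated, before
# counting the cooling energy no longer spent removing it.
```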
Most data centers are overbuilt on UPS capacity because the original design assumed growth that did not materialize, or assumed a density that was later revised. The fix is not always easy (you cannot just unplug half the UPS), but modular UPS topologies (Eaton, Vertiv, Schneider all offer them) let you right-size the load factor, and eco-mode options allow the UPS to bypass double conversion during stable utility conditions for an additional efficiency bump.
Flywheel and lithium tradeoffs
Lithium-ion UPS batteries are becoming standard for new builds and they are worth it — smaller footprint, longer life, and better tolerance for discharge events. Flywheel UPS systems were trendy a decade ago and have fallen out of favor in most applications; they shine for ride-through during utility sags but do not substitute for battery backup if your runtime requirement is more than about 15 seconds.
Lever 4: Economizer Hours
Free cooling — using outside air or water to reject heat without running a compressor — is the single largest efficiency opportunity for any facility in a moderate or cold climate. Air-side economizers exchange filtered outside air directly with the white space; water-side economizers use a dry cooler or cooling tower to chill the facility loop without engaging the compressor.
In most of the US, a properly designed water-side economizer can provide partial or full cooling for 4,000 to 7,000 hours per year out of 8,760. At the upper end of that range you are running mechanical cooling roughly 20 percent of the time. The energy savings are large and the capital cost is moderate; most new builds include economization by default now.
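A minimal sketch of what those hours mean for cooling energy. The per-kW overhead figures below are assumptions for illustration: compressor-on cooling at roughly 0.35 kW per kW of IT load and economizer operation (pumps and fans only) at roughly 0.05 kW per kW are plausible but entirely site-specific:

```python
# Blended annual cooling overhead for a given number of free-cooling hours.
HOURS_PER_YEAR = 8760
MECH_KW_PER_IT_KW = 0.35   # assumed compressor-on cooling overhead
ECON_KW_PER_IT_KW = 0.05   # assumed pumps/fans-only overhead

def cooling_overhead(free_hours: int) -> float:
    """Average cooling kW per kW of IT load across the year."""
    mech_hours = HOURS_PER_YEAR - free_hours
    return (mech_hours * MECH_KW_PER_IT_KW +
            free_hours * ECON_KW_PER_IT_KW) / HOURS_PER_YEAR

for free in (0, 4000, 7000):
    print(f"{free:>5} economizer hours -> "
          f"{cooling_overhead(free):.2f} kW of cooling per kW of IT")
```

Under these assumptions, going from zero to 7,000 economizer hours cuts cooling's contribution to PUE from about 0.35 to about 0.11, which is why economization dominates the efficiency conversation in moderate climates.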
Air-side economizer concerns
Air-side economization is efficient but introduces humidity swings and outside air contamination into the white space. ASHRAE TC 9.9 publishes envelope guidelines that most modern IT equipment can tolerate, but some older equipment is more sensitive. The air-side approach has become less popular in the last decade as water-side economization has matured, particularly in areas with particulate or corrosive air concerns.
Lever 5: Workload and Rack-Level Telemetry
You cannot optimize what you do not measure. Most facilities have facility-level power and temperature monitoring and nothing at the rack or server level. That means the operator cannot tell you which racks are running hot, which are under-utilized, or which workloads are consuming disproportionate power.
Adding rack-level PDU monitoring and inlet/outlet temperature sensors per rack is relatively inexpensive and transformative. It enables density-aware placement (putting new dense racks where cooling headroom exists), hot-spot remediation (catching containment failures or filter-clogged equipment early), and workload rebalancing (moving power-hungry workloads off shared infrastructure during peak hours). Combined with DCIM software, rack-level telemetry typically pays for itself in avoided capacity additions and recovered cooling headroom within a year.
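As a sense of what that telemetry enables, here is a minimal sketch that flags hot-spot candidates and ranks placement headroom from per-rack readings. The data structure, values, and alert threshold are hypothetical; in practice the readings come from PDU SNMP polling or your DCIM vendor's API:

```python
from dataclasses import dataclass

# Hypothetical per-rack readings; in a real deployment these come from
# PDU SNMP polling or a DCIM API, not hard-coded values.
@dataclass
class Rack:
    name: str
    power_kw: float   # measured at the rack PDU
    budget_kw: float  # power/cooling budget for this position
    inlet_c: float    # cold-aisle inlet temperature

racks = [
    Rack("A01", 6.2, 10.0, 23.1),
    Rack("A02", 9.4, 10.0, 27.8),  # hot inlet: likely containment gap
    Rack("B01", 3.1, 10.0, 22.4),
]

INLET_LIMIT_C = 27.0  # assumed alert threshold near the edge of the
                      # ASHRAE recommended envelope

hot = [r.name for r in racks if r.inlet_c > INLET_LIMIT_C]
by_headroom = sorted(racks, key=lambda r: r.budget_kw - r.power_kw,
                     reverse=True)

print("hot-spot candidates:", hot)
print("best placement headroom:", by_headroom[0].name,
      f"({by_headroom[0].budget_kw - by_headroom[0].power_kw:.1f} kW free)")
```

Even this trivial logic catches the two failures that matter most day to day: a rack running hot before equipment throttles, and the one position on the floor that can actually absorb the next dense deployment.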
Server-level power capping
The next level is server-level power capping, using the management controller (iDRAC, iLO, IPMI) to enforce power caps during high-demand events. This is mostly useful for high-density HPC and AI racks where a single misbehaving job can cause a local thermal event; for general enterprise workloads it is less critical.
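For BMCs that implement the DCMI power management extension, ipmitool can set and activate a cap remotely. A minimal sketch, assuming reachable BMCs with DCMI support; the hosts, credentials, and wattage are placeholders, and vendor-specific tooling (racadm for iDRAC, the iLO REST API for iLO) is the usual route where DCMI capping is absent:

```python
import subprocess

def set_power_cap(bmc_host: str, user: str, password: str, watts: int) -> None:
    """Set and activate a DCMI power cap on one node via ipmitool.

    Assumes the BMC implements the DCMI power management extension;
    not all servers do, so check `ipmitool dcmi power get_limit` first.
    """
    base = ["ipmitool", "-I", "lanplus", "-H", bmc_host,
            "-U", user, "-P", password, "dcmi", "power"]
    subprocess.run(base + ["set_limit", "limit", str(watts)], check=True)
    subprocess.run(base + ["activate"], check=True)

# Placeholder values: cap the nodes of a hypothetical dense rack during
# a high-demand event.
for host in ["10.0.42.11", "10.0.42.12"]:
    set_power_cap(host, "admin", "changeme", watts=450)
```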
Things That Sound Efficient But Mostly Are Not
A few ideas that get more credit than they deserve:
- LED lighting in the white space. Good idea, trivial PUE impact. It is a rounding error next to cooling efficiency.
- Variable frequency drives on every fan. Worth doing for new installations, but retrofits often do not pay back within the equipment's remaining life.
- Marketing claims of PUE under 1.1. Almost always measured at peak efficiency, not annualized. Useful as a design target, not as an operational number.
- "AI-optimized" cooling control. Some benefit at the margins for facilities with complex thermal dynamics. For a well-designed facility operated by a competent team, the gains are modest. Do not let a vendor sell you ML before you have fixed your containment.
What We Actually Build
A modern data hall we would design today for a customer: warm-water chilled water plant (18 to 22 C supply), water-side economization sized for the local climate, proper hot aisle containment from day one, rear-door heat exchangers on dense racks, modular lithium UPS sized for the real load profile, rack-level PDU and thermal telemetry feeding DCIM, and a discipline around containment maintenance. Target annual PUE in the 1.15 to 1.25 range depending on climate. Achievable, sustainable, and meaningful for customer operating cost.
Three Takeaways
- Raising chilled water temperature is the highest-leverage single change you can make to an existing facility. It unlocks both chiller efficiency and economizer hours.
- Containment discipline is a daily habit, not a project. The gains are real and they are lost within a year if nobody maintains them.
- Rack-level telemetry is the prerequisite for any other optimization. Without it, every improvement is a guess.
Talk with us about your infrastructure
Schedule a consultation with a solutions architect.