Overview of Energy-Saving Strategies in Google’s Modern Data Centers

I. Introduction
Google recently announced on its official website that the global average Power Usage Effectiveness (PUE) of its data centers is 1.12. It added that the figure could in fact be as low as 1.06, but because it counts all data center energy consumption, including energy used by cabling, transformers, and other components, it maintains the more conservative figure of 1.12. With the global average data center PUE still hovering around 1.7, Google’s lead is remarkable and worth studying, even after allowing for possible differences in calculation methods. Since data centers have become one of the largest energy-consuming industries in the world (accounting for about 1.5%–2% of global energy consumption), reducing their PUE is of critical importance to the global energy sector. In its official document “Efficiency: How others can do it,” Google has publicly described parts of its own data center energy-saving strategy.
II. Basic Concepts
Data centers support the operation of critical IT equipment such as servers, networking devices, and storage systems. While enabling billions of people worldwide to access the internet, they also generate large amounts of energy consumption. Only by studying specific measures to improve data center energy efficiency can we truly achieve energy savings. For today’s data center managers, adopting highly efficient operational policies is the only way to achieve both “environmental” and “economic” benefits. The first step to improving data center energy efficiency is a careful assessment of PUE.
Put simply, PUE is the ratio of total data center power to the power consumed by primary load equipment (i.e., IT systems), or total facility energy divided by IT equipment energy. Since IT systems at the same product and technology level consume roughly similar amounts of energy, PUE naturally becomes a key indicator for measuring data center energy efficiency. According to research by the Uptime Institute, the global industry average PUE is 1.7, indicating that there is still considerable room for improvement in data center energy efficiency.
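The definition above reduces to a one-line ratio. As a minimal sketch, the function below computes it; the 1,120 kW / 1,000 kW readings are hypothetical values chosen only to reproduce Google's reported 1.12:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    A PUE of 1.0 would mean every watt goes to the IT load; Google's
    reported fleet average is 1.12, versus an industry average around 1.7.
    """
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Hypothetical readings: 1,120 kW facility draw, 1,000 kW consumed by IT
print(round(pue(1120, 1000), 2))  # 1.12
```

The same two meter readings (total facility power and IT power) are all a facility needs to start tracking this number.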
Driven by the recent waves of big data and the Internet of Things, data center construction continues around the world. At the same time, amid rising environmental awareness, data center energy saving has become a research field in its own right, for two objective reasons: first, electricity accounts for a large share of a data center’s Total Cost of Ownership (TCO), approaching or even exceeding personnel costs; second, data centers often carry a reputation for being environmentally unfriendly.
To date, many data centers still lack any efficiency metrics, leaving their energy-saving efforts without clear standards. Although PUE is not without controversy, it remains the primary guideline for assessing data center infrastructure efficiency and offers valuable reference for formulating and implementing green energy-saving strategies.
III. Google’s Data Center Energy-Saving Strategies
The efficiency and carbon emissions of IT equipment such as servers, storage systems, communications technology, and infrastructure (fans, cooling, pumps, power distribution, etc.) are the main factors affecting greenhouse gas emissions from data centers. Focusing on improvements in energy consumption can have a significant impact on a data center’s green energy-saving plan.
In the energy-saving strategies Google has made public, the following components play a significant role in reducing data center energy consumption and optimizing PUE:
1. Conduct Regular Hardware Audits
Most data centers harbor a substantial amount of unnecessary IT equipment. So-called “comatose servers” are machines still plugged into racks but no longer in actual use. They occupy valuable rack space, consume large amounts of energy, and worsen PUE. To gauge how widespread this issue is, a related survey found that about half of respondents do not perform planned inspections or decommission redundant servers. Moreover, in many facilities studied, operators could not accurately monitor all infrastructure and IT loads, suggesting that there is still a long way to go on the path to data center energy efficiency.
Beyond IT equipment checks, non-IT infrastructure must also be inspected regularly—for example, the data center’s uninterruptible power supply (UPS) systems. Unlike traditional line-frequency standalone UPS units, the current trend is toward high-frequency modular UPS solutions. To achieve data center energy savings, two major conditions should be considered when selecting a UPS:
- Scalable on demand: Modular UPS systems can scale in step with data center expansion by adding power modules as needed. This avoids heavy up-front capital expenditure and prevents wasted floor space during the early stages of deployment, while still allowing the UPS capacity to “seamlessly” match business growth. In addition to adding power modules, the UPS must also support multi-unit parallel operation to cope with data center expansion. Modular UPS thus meets the requirement for seamless capacity increases.
- High efficiency at partial load: To ensure reliability, typical data centers configure N+X power redundancy or even 2N dual-bus architectures, which keeps average load rates at around 30–40% or even lower. Therefore, in addition to pursuing peak efficiency at full load, attention must also be paid to the efficiency curve across the 20–100% load range, striving to achieve the ideal of “high efficiency at low load.”
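The low load rates mentioned above fall straight out of the redundancy arithmetic. A minimal sketch, using hypothetical figures (a 400 kW IT load on 500 kW of UPS capacity), assuming a 2N design carries the load on two fully rated buses:

```python
def average_load_rate(it_load_kw: float, ups_capacity_kw: float,
                      buses: int = 1) -> float:
    """Fraction of installed UPS capacity actually carrying load.

    In a 2N dual-bus design (buses=2) each bus must be able to carry
    the full load alone, so the normal-operation load rate is halved,
    which is why partial-load efficiency matters more than peak efficiency.
    """
    return it_load_kw / (ups_capacity_kw * buses)

# Hypothetical: 400 kW IT load on a 500 kW UPS
print(average_load_rate(400, 500))           # 0.8 on a single bus
print(average_load_rate(400, 500, buses=2))  # 0.4 per bus in a 2N design
```

With further derating headroom, real-world rates easily drop into the 30–40% band the text describes, so a UPS whose efficiency curve sags below 50% load wastes energy continuously.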
2. Regularly Measure PUE
As mentioned above, PUE is the primary industry standard for quantifying energy efficiency, largely because it is simple and practical. Across the industry, however, few organizations measure it consistently, and sporadic logging cannot give an accurate picture of actual energy usage. Industry experts therefore repeatedly recommend routine PUE measurement to track how a data center’s PUE fluctuates with seasonal changes and other factors.
To measure total power in real time and record PUE accurately, sensors must be installed at key measurement points to monitor actual power (both kW and kVA), and energy usage must be recorded over a period of time to enable optimal analysis. Google meticulously records and compares detailed, accurate data for its data centers worldwide, and this is one of the keys to its improved efficiency.
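One way to turn those sensor readings into a trackable number is sketched below. The function name and the winter/summer sample values are hypothetical; the point is that a period's PUE should be computed from summed energy, not by averaging instantaneous ratios:

```python
def period_pue(samples):
    """PUE over a logging period from (total_kw, it_kw) sensor samples.

    Computed from summed readings rather than averaged instantaneous
    ratios, so heavily loaded intervals carry proportionally more weight.
    """
    total = sum(t for t, _ in samples)
    it = sum(i for _, i in samples)
    return total / it

# Hypothetical hourly readings: free cooling keeps winter PUE lower
winter = [(1080, 1000), (1075, 990)]
summer = [(1180, 1000), (1195, 1005)]
print(round(period_pue(winter), 3), round(period_pue(summer), 3))
```

Logging this per month makes the seasonal fluctuation mentioned above directly visible instead of anecdotal.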
3. Upgrade Hardware
Server efficiency is directly related to PUE and is a key factor in improving it. However, accurately assessing server efficiency requires consideration of several factors. One starting point is CPU utilization: for CPUs with poor energy efficiency, virtualization technology can significantly improve CPU and server efficiency without requiring server replacement. In addition, the load capacity and power consumption of each rack should also be included in efficiency calculations. Deploying blade servers—each rack supporting up to 1,024 CPU cores—is one method to increase rack density while reducing cooling and power requirements at the facility level.
Consolidating physical servers through virtualization improves data center efficiency and should be considered by IT managers during hardware refresh cycles. Studies indicate that server consolidation yields benefits such as:
- Saving up to approximately 4,000 yuan per server per year on average
- Reducing heat output and associated cooling costs
- Freeing up space while increasing computing capacity
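The consolidation arithmetic behind the first saving can be sketched in a few lines. The 10:1 consolidation ratio in the example is a hypothetical assumption; the 4,000-yuan figure is the per-server annual saving cited above:

```python
def consolidation_savings(physical_servers: int, consolidation_ratio: int,
                          yuan_per_server_year: float = 4000) -> dict:
    """Rough annual savings from virtualizing workloads onto fewer hosts.

    consolidation_ratio is the assumed number of VMs per physical host;
    the default saving per retired server is the figure cited in the text.
    """
    hosts_needed = -(-physical_servers // consolidation_ratio)  # ceiling division
    retired = physical_servers - hosts_needed
    return {"hosts": hosts_needed,
            "retired": retired,
            "yuan_saved": retired * yuan_per_server_year}

# Hypothetical fleet: 100 physical servers consolidated at 10 VMs per host
print(consolidation_savings(100, 10))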
It is worth noting that virtualization and blade servers have a downside: they increase heat density and cooling demand. Several solutions are available, such as hot/cold aisle containment or in-row cooling units, which are airflow management techniques designed to address high-density loads in data centers.
4. Improve Cooling Efficiency
Cooling is the second-largest energy consumer after IT load, so installing energy monitoring and measuring mechanisms is critical for understanding cooling’s overall impact on PUE and identifying ways to improve it.
As a leader in data center energy efficiency, Google’s key advantages are also concentrated in cooling. According to media reports, Google is now applying its latest AI technology to explore ways of further reducing PUE.
Practices Google uses to enhance cooling efficiency include:
- Optimize airflow management: Well-designed hot/cold aisle containment prevents hot and cold air from mixing, improving cooling system efficiency. To effectively eliminate hot spots and create an ideal thermal distribution, temperature sensors can be strategically placed and computer simulations used to identify and address hot spots. According to EPA research, effective hot/cold aisle containment can reduce fan energy consumption by 25% and chiller energy consumption by 20%.
- Increase data hall temperature: Google has debunked the myth that data centers must be maintained at around 21°C, confirming that cold aisles can operate at about 27°C. By raising temperatures and shutting off re-heaters and dehumidifiers, substantial energy savings can be achieved.
- Adopt free cooling technologies: Chillers in air-conditioning systems consume large amounts of energy. Depending on climate conditions, free cooling systems can be adopted to bring in cool outside air, use it, and exhaust it back outdoors. Other sources of free cooling include evaporative cooling using outside air and water, and large thermal storage systems.
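The EPA figures quoted for containment (about 25% off fan energy and 20% off chiller energy) translate directly into kilowatts. A minimal sketch, where the 120 kW fan and 300 kW chiller draws are hypothetical example values:

```python
def containment_savings_kw(fan_kw: float, chiller_kw: float,
                           fan_cut: float = 0.25,
                           chiller_cut: float = 0.20) -> float:
    """Estimated power saved by hot/cold aisle containment.

    Default reduction factors are the EPA figures cited in the text:
    ~25% off fan energy and ~20% off chiller energy.
    """
    return fan_cut * fan_kw + chiller_cut * chiller_kw

# Hypothetical cooling plant: 120 kW of fans, 300 kW of chillers
print(containment_savings_kw(120, 300))
```

Because cooling is the largest non-IT load, a saving of this size feeds almost one-for-one into a lower PUE.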
5. Raise Data Center Temperature
For a long time, data center IT staff have been constrained by the traditional belief that data center temperatures must remain low and are reluctant to raise them. However, ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) has revised its recommended operating temperature range for data centers to 18–27°C (64.4–80.6°F).
In addition to temperature, ASHRAE has broadened its recommended humidity range. Thanks to these revisions, data centers can reap further savings in cooling costs and are better positioned to take advantage of free cooling.
Overly conservative temperature policies that keep data centers too cold drive up operating costs and result in worse PUE and higher cooling expenses. Intel research shows that every 1°C increase in data center temperature can yield a 4% reduction in cooling costs. In light of this, several high-temperature-tolerant, energy-saving products have emerged, for example:
- High-temperature energy-saving servers: “High-temperature” refers to servers that can run stably without mechanical cooling in environments from 5–47°C. Because these servers can tolerate higher data hall temperatures, they reduce cooling energy consumption. Compared with traditional servers, they offer higher temperature tolerance, lower energy consumption, and easier deployment, making them powerful contributors to data center energy savings.
- High-temperature chilled-water air conditioning: Most of the load in a typical data center is sensible heat, with only a small fraction being latent heat, so dehumidification needs are minimal. As a result, the entering water temperature for precision air conditioners can be raised from the usual 7°C. Under such conditions, chiller cooling capacity increases, the energy efficiency ratio improves, and more opportunities emerge for air-conditioning energy savings.
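The Intel figure cited above (roughly 4% lower cooling cost per 1°C raised) can be turned into a quick estimate. Whether the saving compounds per degree or adds linearly is not stated in the source; the sketch below assumes compounding, which is the more conservative choice:

```python
def cooling_cost_after_raise(baseline_cost: float, delta_c: float,
                             saving_per_degree: float = 0.04) -> float:
    """Estimated cooling cost after raising the set point by delta_c °C.

    Assumes the ~4%-per-degree saving compounds for each degree raised
    (a modeling assumption; the source does not specify the curve shape).
    """
    return baseline_cost * (1 - saving_per_degree) ** delta_c

# Hypothetical: raising the cold aisle from 21°C to 27°C (6 degrees)
print(round(cooling_cost_after_raise(100, 6), 2))  # 78.28 -> ~22% saved
```

Even under this conservative compounding assumption, moving from 21°C to ASHRAE's 27°C upper bound trims cooling cost by over a fifth.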
6. Implement Data Center Infrastructure Management (DCIM)
To help data center operators manage their facilities more efficiently and comprehensively, Data Center Infrastructure Management (DCIM) systems have emerged. DCIM provides a “bird’s-eye view” of the facility, enabling IT managers to respond in real time, plan in advance, manage potential risks, and reduce downtime.
As mentioned earlier, low utilization of individual servers is a common issue in data centers. DCIM helps data center staff identify long-idle servers, reassign workloads to improve efficiency, and accurately measure asset utilization and energy consumption.
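The "comatose server" detection that DCIM tools perform can be approximated from utilization telemetry alone. A minimal sketch; the function name, the 5% threshold, the 30-day window, and the hostnames are all hypothetical choices, not features of any particular DCIM product:

```python
def flag_comatose(servers: dict, cpu_threshold: float = 0.05,
                  idle_days: int = 30) -> list:
    """Flag servers whose daily average CPU utilization never exceeded
    the threshold over the whole observation window; these are candidates
    for decommissioning or workload consolidation.

    `servers` maps hostname -> list of daily average utilizations (0..1).
    """
    return sorted(
        host for host, util in servers.items()
        if len(util) >= idle_days and max(util) < cpu_threshold
    )

fleet = {
    "web-01": [0.40] * 30,
    "old-batch-07": [0.01] * 30,   # plugged in, doing nothing
}
print(flag_comatose(fleet))  # ['old-batch-07']
```

Pairing a report like this with the regular hardware audits from section III.1 closes the loop: idle machines are found, measured, and retired rather than rediscovered by accident.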
IV. Conclusion
Building green, energy-efficient data centers requires a high degree of creativity and freedom from conventional constraints. Many industry cases demonstrate that data center energy saving can deliver substantial tangible benefits, and PUE can serve as a foundational performance metric that is critical for creating sustainable, green data centers.
At present, 100% pure clean energy remains an elusive goal. Still, many companies are approaching it through “hybrid” strategies. For example, they combine renewable energy, on-site generation, and remote grid power. IT giants like Apple have installed 55,000 solar panels; eBay uses fuel cells at its Quicksilver facility in Utah; and Microsoft uses wind and solar energy.
When selecting locations for their data centers, small and medium-sized enterprises can consider working with local utilities to obtain clean or renewable power, or they can site new data centers near utilities that provide clean energy.
At the same time, general enterprises can routinely measure PUE and make good use of management tools such as DCIM to identify and correct IT inefficiencies, reduce carbon emissions, and increase individual server utilization, ultimately optimizing both PUE and ROI.
As a professional provider of integrated smart energy solutions, our company leverages high-quality university resources and draws on advanced energy-saving technologies from across the industry. Through comprehensive measures, we help enterprises significantly reduce energy consumption, improve data center efficiency, and obtain the most competitive energy services.


