Many, if not all, highly regulated sectors comply with reference standards that are cited by legislators but developed, updated, and peer reviewed by teams of industry experts. This means that these standards continue to be market-appropriate without the need for new legislation.
However, we need to distinguish standards that relate to processes and procedures from metrics that measure performance against specific criteria.
Are we relying on the right metrics?
In the data center sector, process-related security standards like SOC2 are widely adopted, often as customer prerequisites. The sector also boasts an impressive array of mature performance metrics, most of which have gone through a detailed and thorough standardization process to ensure a level playing field and enable meaningful comparisons.
Take PUE, for instance, a metric that measures the energy overhead imposed by the data center infrastructure, or WUE, which measures water consumption as a function of IT activity. Ensuring that PUE and WUE are measured in a consistent and robust way benefits everybody and provides clarity.
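For readers less familiar with the two metrics, here is a minimal sketch of how they are typically calculated from annual metered totals. The variable names and the figures for the hypothetical facility are illustrative assumptions, not data from the article.

```python
# Illustrative PUE and WUE calculations. All figures below are
# hypothetical, chosen only to show how the ratios work.

def pue(total_facility_energy_kwh: float, it_energy_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy (ideal = 1.0)."""
    return total_facility_energy_kwh / it_energy_kwh

def wue(site_water_litres: float, it_energy_kwh: float) -> float:
    """Site WUE = water consumed on site / IT equipment energy, in L/kWh."""
    return site_water_litres / it_energy_kwh

# A hypothetical 1 MW-IT facility over one year (8,760 hours):
it_kwh = 1_000 * 8_760           # 8,760,000 kWh of IT load
facility_kwh = it_kwh * 1.3      # assume 30 percent infrastructure overhead
water_litres = 4_400_000         # assumed annual water consumption

print(round(pue(facility_kwh, it_kwh), 2))    # 1.3
print(round(wue(water_litres, it_kwh), 2))    # 0.5 (L/kWh)
```

The point of the standardization work the article describes is precisely to pin down what counts in each numerator and denominator, so that two operators reporting "1.3" mean the same thing.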
However, there is an ongoing dialogue about the way that these metrics accommodate new developments in technology, especially those relating to cooling. Some liquid cooling solutions, for instance, blur the boundary between the infrastructure and IT and may affect PUE without necessarily indicating a change in cooling overhead. Accommodating heat reuse systems into PUE in a robust and fair way is also proving very tricky.
Changing with the times
While innovation presents challenges to existing metrics which must evolve to keep pace, a more worrying problem is emerging: the tendency for policymakers to establish legislative instruments that reference industry metrics and cite specific values or thresholds that must be met without necessarily understanding the purpose of the metric, or how it is calculated.
Picking a number in this way can lead to unintended consequences. Firstly, it doesn’t allow for technological developments that may change what is acceptable in the market, and secondly it can force us to cool our facilities in a way that is suboptimal from a sustainability perspective.
For instance, in Germany, the EnEfG sets a requirement for PUE at 1.2 for new data centers. Ultra-low PUEs can usually be achieved only through free cooling, which is location dependent, or by the adoption of evaporative systems, which can be water intensive.
Evaporative cooling is a very energy efficient approach but is unlikely to be the most sustainable solution in water stressed locations. So, a minimum performance standard such as the one implemented in Germany may force operators down a more water-intensive route to cooling than is location-appropriate.
We saw a parallel problem earlier this year during the pre-consultation process on a rating scheme and minimum performance standards (MPS) for data centers that is part of the implementation of the Energy Efficiency Directive (EED).
The consultants appointed by the European Commission presented their rating scheme proposals in the form of an A-G label of the kind usually associated with domestic appliances. The objective is to apply this to all facilities obliged under EED, grading each against various sustainability criteria.
The proposed label includes a WUE range from 0.05 L/kWh (best) to 0.8 L/kWh (worst). It’s worth thinking about the implications of this for cooling systems. A dry system is likely to outperform the “best” rating, while a “wet” system will typically exceed even the “worst” threshold of 0.8.
The range therefore fails to differentiate an efficient dry system from an inefficient one, and an efficient wet system from an inefficient one. Additionally, with one stroke it drives data centers to avoid using water, even where it is plentiful and would, on balance, facilitate the most sustainable solution.
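The failure mode is easy to demonstrate. In the sketch below, only the 0.05 and 0.8 L/kWh endpoints come from the proposal discussed above; the intermediate band cut-offs and the sample WUE values for dry and wet systems are assumptions for illustration.

```python
# Why a 0.05-0.8 L/kWh label range fails to differentiate real systems.
# Only the 0.05 and 0.8 endpoints are from the proposal; the equal-width
# A-G cut-offs and the sample WUE values are assumptions.

def wue_band(wue_l_per_kwh: float) -> str:
    """Map a WUE value onto an assumed A-G label."""
    cutoffs = [0.05, 0.175, 0.3, 0.425, 0.55, 0.675, 0.8]
    for letter, upper in zip("ABCDEFG", cutoffs):
        if wue_l_per_kwh <= upper:
            return letter
    return "below G"  # worse than the worst category

# Typical dry systems, efficient and inefficient, both land on 'A'...
print(wue_band(0.01), wue_band(0.04))   # A A
# ...while typical evaporative (wet) systems all fall off the scale.
print(wue_band(1.2), wue_band(2.0))     # below G below G
```

Whatever the real band boundaries turn out to be, a scale whose endpoints sit entirely between the dry and wet clusters compresses each cluster into a single grade, which is the differentiation failure described above.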
The consultancy team also proposed minimum performance standards (MPS) relating to PUE, WUE, and renewable energy. While we support the choice of metrics, the proposed values are much more problematic.
WUE is a good example, with a proposed MPS of 0.4 L/kWh of IT load. Those familiar with data center cooling systems will be rolling their eyes, because an MPS of 0.4 essentially regulates out wet cooling, sweeping away a whole range of highly energy-efficient technologies and the potential for future refinement within those approaches.
An EED that deems some of the most energy-efficient data centers on the market unfit for purpose is itself unfit for purpose as a policy instrument.
Finally, it is clear from the wording on the MPS that the team of consultants have misunderstood WUE categories – another forehead-slapping example of would-be policymakers applying standards and metrics that they simply don’t understand.
Making the right distinctions
At this point it is worth taking a step back and thinking about what these instruments are aiming to achieve. The Commission is faced with a rapidly growing, energy intensive sector that it knows almost nothing about.
So, we can appreciate the urgency with which industry data on resource consumption needs to be collated to inform a legislative approach that will align with the commitments of the Green Deal, and restore public confidence in terms of the sector’s power and water productivity.
The data collection stage of EED has already been implemented and, in theory, the rating scheme and MPS should form the next steps. Unfortunately, the data collection rollout was rushed and the data is patchy both in terms of completeness and quality.
This means that the standardization process is currently based on an incomplete picture of sector performance. A trial phase whilst the data collection process matures would be a good compromise.
However, this does not resolve one final issue: that the three policy instruments (data collection, ratings, and MPS) have separate functions that should not be confused with each other. Nor should they be confused with other policy instruments not yet developed.
The policy objective of the data collection requirement is to improve transparency and accountability, and of course to inform the rating scheme and MPS. The objective of the rating scheme is to drive improvement at facility and operator level, because nobody wants to be sitting in the red zone of shame and operators will compete to occupy those best-in-class shades of green.
Labels of this type have been very successful in driving performance improvements in other sectors. In terms of policy outcomes, the MPS should do what it says on the tin and remove the worst-performing facilities from the market – a data center equivalent of the Tour de France broom wagon.
However, the MPS is being positioned as the gateway to participation in providing future capacity for Europe’s cloud and AI expansion plans. This confuses what should be aspirational, best-in-class pre-qualification criteria (PQC) with a means to remove things that are unfit for purpose from the market. MPS and PQCs have separate objectives and need different sets of standards, not a one-size-fits-all approach that doesn’t work for anybody.
So, all these things considered, my view on whether standardization threatens innovation is that it depends.
It depends on the maturity of the technologies that the standards are applied to, on the degree of technical, sector-specific understanding held by those implementing standards within a given market, and on the quality of dialogue maintained between the industry and those developing policy instruments for it.
Read the original article: https://www.datacenterdynamics.com/en/opinions/data-center-cooling-does-standardization-threaten-innovation/