Artificial intelligence (AI) is on a tear. The promise of generative and agentic AI has analysts reaching for superlatives and boardrooms across the globe scrambling to update their business strategies. But can the world’s IT infrastructure meet this surging demand? Edge computing data centers could hold the key to bridging a significant supply gap.
However, only by taking a holistic approach to design, one covering power, cooling, and cabling together, can data center builders and operators set themselves up for success. From location to standardization, several key design considerations should be at the forefront.
Why Edge, why now?
The world’s appetite for AI is insatiable. With its capacity for autonomous decision-making, agentic AI in particular promises huge advances in areas as diverse as supply chain optimization and personalized healthcare. But without sufficient data center capacity, this promise will struggle to become a reality. According to McKinsey, global demand for data center capacity could grow by 22 percent annually from 2023 to 2030, reaching 219GW. AI could account for 70 percent of total demand by 2030.
Edge data centers are increasingly sought after in this context as they sit closer to users and devices, offering the low latency and high bandwidth experiences that AI demands. It’s why the market is predicted to reach $317 billion by 2026, a 107 percent increase from 2020. Yet to ensure these facilities can support the demands of the new AI age, operators must consider the three cornerstones of data center design simultaneously – power, cooling, and cabling.
All three are interrelated. Extra effort will need to be spent on overcoming cable congestion, as high-power cables for AI workloads generate significant heat, which can impact connectivity. Data center teams should consider liquid cooling as a more advanced way to address the heat challenge. At the same time, Edge facility bosses must look closely at ways to improve the efficiency of power delivery—to improve sustainability and cost-effectiveness and minimize the impact on local power grids.
Six crucial design considerations
So, where to begin? The following provides a useful place to start:
Understand IT load
IT load—the processing power a data center requires—is reaching record levels and could surge further with AI workloads. To calculate it, facility operators must understand what AI applications and models will be hosted, typical use cases, and data and speed requirements. With a better understanding of IT load, it will be easier to work out power, cooling, and cabling requirements.
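As a rough illustration of that sizing exercise, the sketch below sums nameplate power across an equipment inventory and applies an assumed average utilization factor. All figures, categories, and the utilization value are hypothetical assumptions for illustration, not vendor specifications or the article's own numbers.

```python
# Hedged sketch: a back-of-the-envelope IT load estimate built from
# hypothetical equipment counts and per-unit power draws.

def estimate_it_load_kw(equipment, utilization=0.8):
    """Sum nameplate power (kW) across all equipment types, scaled by
    an assumed average utilization factor (0.8 here is illustrative)."""
    total_kw = sum(count * unit_kw for count, unit_kw in equipment.values())
    return total_kw * utilization

# Illustrative inventory: equipment type -> (unit count, nameplate kW per unit)
inventory = {
    "ai_gpu_servers": (40, 10.2),   # dense accelerator nodes (assumed draw)
    "cpu_servers":    (120, 0.8),   # general-purpose compute
    "storage_arrays": (20, 1.5),
    "network_gear":   (10, 0.5),
}

load_kw = estimate_it_load_kw(inventory)
print(f"Estimated IT load: {load_kw:.1f} kW")  # 431.2 kW with these inputs
```

In practice the inputs would come from the application and model profiling the article describes; the point is that a credible per-unit power inventory makes the downstream power, cooling, and cabling math tractable.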
Focus on location
It’s time to consider where to locate the Edge data center. The popular European data center locations of Frankfurt, London, Amsterdam, Paris, and Dublin (FLAP-D) may seem like a good idea, given their high connectivity and mature infrastructure. However, many are facing power pressures. That makes countries like Spain and the Nordics, with their plentiful renewable energy sources, an increasingly attractive alternative.
Get familiar with regulations
A data center’s location will determine the relevant regulatory landscape, particularly rules related to sustainability. Germany has strict targets for Power Usage Effectiveness (PUE), for example, as sites commissioned before 1 July 2026 must achieve a PUE of 1.3 by 2030. PUE is calculated by dividing the total facility power by the power needed for the IT equipment, with 1 being a theoretically “perfect” score. There will also be rules to follow regarding the use of water and renewables, as well as energy reuse. Taken together, these regulatory demands will heavily influence design decisions around power, cooling, and cabling.
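The PUE arithmetic described above is simple enough to show directly. The facility figures in this sketch are made-up examples, not measurements from any real site:

```python
# PUE = total facility power / IT equipment power, per the definition above.
# A score of 1.0 would mean every watt entering the facility reaches IT gear.

def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

# Illustrative example: 1,300 kW drawn overall to support 1,000 kW of IT load.
score = pue(1300, 1000)
print(f"PUE: {score:.2f}")                                   # PUE: 1.30
print("Meets Germany's 2030 target (<= 1.3):", score <= 1.3)  # True
```

The same two meter readings (total facility power and IT power) are all a regulator or operator needs to track compliance over time.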
Standardize where possible
Next, it’s time to develop reference designs for power, cooling, and cabling. Standardization can be a powerful ally here, helping to drive trust, ensure optimized technologies are deployed, and even speed up builds through pre-integration, prefabrication, and flexible designs. Frameworks like those from the Open Compute Project (OCP) can deliver uniformity across designs and ensure they can be repeated across projects, to save time and money.
Reuse resources to maximize efficiency
Rising energy costs, regulatory demands, and corporate sustainability programs mean the direction of travel for Edge data center operators is towards greater use of renewables and waste reuse. Circularity is challenging “take-make-dispose” as the preferred industry approach here. It could mean directing waste heat to local homes, offices, and farms. Or using purified wastewater instead of potable sources in the cooling system, to become “water-positive”. Regardless of the politics, it just makes good business sense.
Overlook cabling at your peril
The importance of high-quality cabling cannot be overstated. Don’t fall into the common trap of skimping on these vital components, as it could ultimately lead to performance issues and expensive data center refreshes. Best practice is for operators to follow detailed cabling standards covering everything from containment to testing. Early coordination with third parties is also essential to minimize bottlenecks and ensure cabling infrastructure aligns with power, cooling, and security. High-quality kit is not cheap, but it’s an investment that will future-proof a data center, supporting it as high bandwidth and low latency demands increase.
Trusted partners can make all the difference
Ultimately, an Edge data center will only be as strong as the partners brought in to help turn designs into reality. They should work as an extension of the operator’s team, bringing their industry expertise to bear at every stage of a project. There may even be a place for AI models themselves in helping to manage the process. But beware of short-term wins and false economies. Investing in the future demands a strategic vision focused on the long haul.
Read the original article: https://www.datacenterdynamics.com/en/opinions/unlocking-the-power-of-edge-computing-through-smarter-data-center-design/