Every facet of a data center has been talked about to death. Power demands, chips and servers, and water usage usually top the bill in conversations across the industry and beyond. Less prominent, but still regular talking points, are topics like skilling, diversity, and underlying fiber. But there’s one aspect of data centers you might have overlooked, or perhaps never thought about at all: insurance.
Insurance is a serious business, and after a battery fire at a data center in the South Korean city of Daejeon wreaked havoc on government services, wiping some 858TB of data, protection policies and damages are likely to be high on the post-mortem agenda.
An incident like this brings to mind a question few in the industry may have seriously considered: who pays when things go wrong?
Following a recent seminar in London hosted by Lockton, DCD sat down with two of the insurance firm’s partners to examine the evolving risk protocols in the data center space.
Scale and aggregation
At the turn of the year, BCG estimated that demand for data center power will reach 130 GW by 2028, with demand for generative AI services a leading driver of hyperscalers building ever-larger facilities, and regional players and newly emerging neoclouds following suit.
Lockton partners David Hayhow and Rachel Norris revealed that the scale of projects from an insurance perspective has also changed, moving from what would previously be smaller construction projects to multi-billion-dollar campuses.
A result of that growth is that the insurance firm is seeing what insurers describe as aggregation: the concentration of insured risk in a single location or area. In the context of the wonderful world of data centers, that means the clustering of multiple high-value facilities close together – often on the same campus or in the same geographic region.
Hayhow outlines the considerations: “When you’ve got a 20 data center campus facility, what’s the aggregation risk to an insurer? Particularly when you overlay different parts of the world, even more modest natural perils matter. So in Italy, you’ve got things like earthquakes to deal with, to some extent a little bit in Spain and Portugal. Then obviously, when you move to the US, that’s much more exacerbated.”
With AI driving that data center demand, to what extent is the AI element impacting the risk profile for data center insurance?
According to Hayhow, it’s less about the technology and more about what’s inside the facility itself.
“As the builds become more complex, and the volume and complexity of the mechanical and electrical systems (M&E) and the value of the M&E inside the building grow, insurers will start to become more interested,” he says. “The insurance market always lags a little bit behind the growth, because they’re now starting to underwrite the risks that we were all talking about today. Three years from now, they will be insuring the multi-billion-dollar campuses that we are planning and building today, and so as a consequence of that, the large claim losses haven’t yet materialized; hopefully they never will.”
At some point though, a large claim will have to emerge, be it from a fire, a natural disaster, or even a potential liability issue. And the debacle in Daejeon is the closest we’ve gotten so far.
While the dust begins to settle on the South Korean incident, it’s set to provide a potential case study in claims response for a major data center outage, at least for that particular market.
“We don’t have that traction yet in the UK and Europe,” Norris tells DCD. “We’re pushing that horizon scanning piece, making operators aware of what will happen if a big loss happens. And if one of these facilities goes down in its entirety, do we have the labor force to go and repair that site?
“If we look at the FLAP-D locations, we have one here. But they may already be building another data center for another client, as you don’t keep these people on retainer. So when you look at business interruption periods and loss of rent income to these assets, there’s so much more that needs to be considered. Because actually the revenue generated from these buildings is so substantial.”
“For us trying to future-proof those conversations and say to developers, as you’re designing, planning, and building these sites, consider how you’re going to treat it in its stabilized position, and consider what the insurance market’s requirements might be, is our guiding thought on this,” Hayhow adds.
The hardware headache
It’s not just the building of new facilities that is changing the risk equation, it’s what’s inside them. Modern data centers house billions of dollars’ worth of cutting-edge chips that are both extraordinarily expensive and rapidly obsolescent.
With chip makers like Nvidia and AMD now moving to annual release cadences, the hardware inside a data center can be outdated within months of operators finally installing it.
For insurers accustomed to evaluating static assets like office buildings or retail parks, this creates a fundamental mismatch.
“The insurance market is always very good at providing product,” Hayhow explains. “They say, ‘These are the products that we’ve got.’ And the data center industry is saying, ‘but these are the risks that we have.’ There’s always this gap.”
He continues: “We’re trying to close that gap between the traditional insurance products that address some of the needs of conventional real estate or construction versus the nuanced risk of data center investment and delivery, where you have got extremely fast moving technology.”
The challenge is then compounded by emerging technologies that sound alarming when described to underwriters unfamiliar with the sector.
“You talk to underwriters about immersion cooling technology, and that you’re going to put liquid in closer proximity to a billion dollars worth of equipment, that statement could sound quite scary to an insurer,” Hayhow says.
This is where brokers like Lockton find themselves playing an unexpected role as educators, helping data center operators understand supply chain coverage, while also easing underwriter fears about immersion cooling technologies.
Norris describes taking insurers on roadshows through campuses, walking them through facilities to help them understand that these weren’t traditional real estate assets.
“This sits in between infrastructure and real estate,” she added. “It’s how we make the insurance world understand that as well. So that’s a massive part of our role here, is education to the insurance market.”
Plugging the performance gap
While the team at Lockton is busy bridging the gap, helping underwriters understand sites and operators learn the insurance world, there’s one risk that has, at least until this year, been effectively uninsurable – the performance guarantee itself.
Data centers typically promise customers a service level agreement (SLA) guaranteeing uptime, often expressed as ‘five nines’ or 99.999% availability. Should a facility fail to meet these commitments, customers stop paying or receive service credits.
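For a rough sense of what those numbers mean in practice, the sketch below (illustrative only, not drawn from any Lockton material) converts an availability figure into the downtime it permits over a year: five nines works out to a little over five minutes.

```python
# Illustrative only: convert an availability percentage into permitted annual downtime.
MINUTES_PER_YEAR = 365 * 24 * 60

def permitted_downtime_minutes(availability_pct: float) -> float:
    """Return the downtime (minutes per year) allowed under a given availability SLA."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for sla in (99.9, 99.99, 99.999):
    print(f"{sla}% availability -> {permitted_downtime_minutes(sla):.2f} minutes of downtime per year")
# 99.999% ("five nines") allows roughly 5.26 minutes of downtime per year.
```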
And yet, traditional property insurance would not cover this revenue loss because, from an insurer’s perspective, nothing has physically broken that would trigger a claim.
“The difference between a traditional landlord-tenant relationship and a data center owner-customer relationship is the service level agreement,” Hayhow says. “When you compare a far more valuable lease and a far more valuable build cost of a data center versus a traditional asset, the big risk gap is the performance guarantee.”
Operators have sought to manage this risk through approaches like establishing captive insurance structures or attempting to transfer it contractually back to equipment vendors. But with facilities growing ever larger, these alternatives are increasingly inadequate.
The insurance industry is now beginning to respond with specialized products. Parametric insurance – which pays out based on predetermined triggers rather than traditional loss assessment – is emerging as one solution. Unlike conventional policies that require proving damage and calculating losses, parametric products pay automatically when specific conditions are met.
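As a purely illustrative sketch (the trigger, threshold, and payout figures here are hypothetical, not actual policy terms), a parametric structure can be thought of as a simple rule: once the measured condition crosses a predefined threshold, a predetermined payout is made, with no loss-adjustment step in between.

```python
# Hypothetical parametric trigger: pay a fixed amount once measured downtime breaches the SLA allowance.
SLA_DOWNTIME_THRESHOLD_MINUTES = 5.26   # assumed five-nines annual allowance
PAYOUT_PER_BREACH = 1_000_000           # hypothetical predetermined payout

def parametric_payout(measured_downtime_minutes: float) -> int:
    """Pay automatically when the trigger condition is met; no damage assessment required."""
    if measured_downtime_minutes > SLA_DOWNTIME_THRESHOLD_MINUTES:
        return PAYOUT_PER_BREACH
    return 0

print(parametric_payout(12.0))  # breach -> 1000000
print(parametric_payout(3.0))   # no breach -> 0
```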
The team at Lockton placed parametric insurance at the center of its SLA-focused data center policy. Launched back in May, it delivers financial compensation in the event of an SLA breach.
“To the extent that you’re providing your customer service credits under your service level agreement, our insurance policy will step in and indemnify you on a pound-for-pound basis,” Hayhow says.
The partners told DCD that Lockton has seen substantial interest since launch, suggesting the market has been waiting for such coverage.
Following the policy launch, Norris says the firm has been looking at other exposures not served by existing insurance products, such as cover for drought.
“It’s working out where our exposures are, because the market is quite comfortable with data centers from a construction and operational space,” she says. “It’s more when we’ve got those cat[astrophe] exposed or particular areas of focus we need to look at, but we haven’t seen a massive shift in that yet, but definitely that’s always on the table when we’re having conversations.”
The looming equipment conundrum
While the Lockton team is looking at everything from immersion cooling to drought, there are a handful of risks where it feels the industry isn’t adequately preparing.
“The big thing that isn’t getting on people’s radars in a growing way is customer equipment,” Hayhow says. “Looking at this through the lens of the data center owner or developer, it’s often very difficult.
“It’s a bit of an unspoken conversation that the equipment in the white space belongs to the customer. Often you don’t have custody over it, you don’t have visibility over it, and it’s highly proprietary. But the value of it is growing.”
Per square meter of white space, the Lockton partner suggests, the value of the equipment five years from now will be exponentially larger than it was five years ago, as more data centers invest in expensive GPUs and other equipment for AI use cases.
“Leases have become clearer in terms of placing responsibility for damage to customer equipment more squarely on the shoulders of the owner, developer,” Hayhow says. “We’re having that conversation in the US, where the halls are larger, the value of the equipment is greater, and some of the hyperscale customers are being much more prescriptive in terms of wanting to address the topic of damage to our equipment … if you lose 20 megawatts worth of racks of Nvidia chips, the lead time to get those replaced, unless you’re building elsewhere, is quite significant.”
For Norris, the nightmare scenario is simpler, going back to that Korean incident from earlier this year: What happens when an entire facility goes down?
“In the event of a total loss, there could be environmental risks that need to be considered, and actually how the community supports it,” she says. “If an earthquake hits or a big storm comes in, how can these clients support communities that may have been impacted as well?”
Norris was also mindful of another untested question: lease termination provisions.
Many contracts allow customers to terminate if a facility is unavailable for 12 months or more. The provision has never been tested in practice, though Microsoft’s infamous lease withdrawals earlier this year certainly turned some heads.
But as Hayhow pointed out, if you lose your hyperscale customer three years into a 15-year lease, the knock-on effects are severe.
“How easily can you replace that customer? How fit for purpose is a new lease?” he asks. “There are lots of macro matters there that haven’t come to the market yet, but as we scale, it almost sadly seems inevitable that something will happen at some stage.”
The Korean incident has provided a potentially sobering glimpse of what that future might look like for the insurance industry and data center operators alike, hopefully helping both parties plan for such scenarios now rather than figure them out in the aftermath of a disaster.
“We have to learn from events that have happened globally,” Norris said. “Nobody wants to be the person you learn from. But what we have to make sure is that if something happens, we learn from it, we build products around it, and we make sure that the data center space stays as resilient as possible.”
Read the original article: https://www.datacenterdynamics.com/en/analysis/who-pays-when-a-multi-billion-dollar-data-center-goes-down/