There is a place in South East London where you can stand upon the edge of time.
Greenwich is, in many ways, the birthplace of timekeeping. Home to the Prime Meridian and giving its name to Greenwich Mean Time, one cannot explore the world of timekeeping without visiting the grassy hill, upon which sits the Royal Observatory and a vast collection of timekeeping pieces from throughout the centuries.
DCD visited the museum last year, and while most of the clocks on display are from a different era, the real-life implications of timekeeping continue to echo through its high-ceilinged rooms. Time governs our daily lives on a human level, but it is easy to forget that it also rules modern-day technology, such as data centers, networks, and the Internet as a whole. Without timekeeping, chaos would ensue.
The importance of accurate timekeeping for the Internet was neatly summed up to DCD by data center operator Telehouse’s senior buildings manager, Paul Sharp. “Data is shipped around the world in bytes and packets, sent over the network, and then, at the far end, put back together,” he explains. “I’ll use the analogy of the book. The Internet without timekeeping would be like removing the page numbers. You wouldn’t know if you were putting the pages back together in the right order.”
The concept of time itself is somewhat hard to pin down. We know it exists, but where is it? Where does it come from? Who decided what a “second” was? Who owns it?
The answer is, of course, complicated. But with timekeeping today a vastly different proposition from when we relied solely on the sun rising and setting each day to know the world had moved on, a brief history is necessary.
A (much simpler) brief history of time
You would be hard-pressed to find a person with a greater grasp on the history of timekeeping than David Rooney. Author of About Time: A History of Civilization in Twelve Clocks, Rooney currently works as a curator at the Science Museum in London, having previously headed the timekeeping collection at the aforementioned Royal Observatory in Greenwich.
For Rooney, the history of timekeeping has many pivotal moments – but three he picks out when talking to DCD are the invention of the pendulum and the balance spring in the 1600s, and centuries later, the arrival of the first atomic clock in 1955.
“In the second half of the 17th century, there were two inventions that transformed clocks and watches from effectively inaccurate guides to the time, and transformed them into scientific instruments. That was the invention of the pendulum in 1656 and the invention of the balance spring in watches in 1675,” Rooney says.
“In both cases, the accuracy was transformed and turned both clocks and watches into scientific instruments, which, in the age of the scientific revolution and the enlightenment, transformed humankind. It was profoundly significant in human history.”
It is hard today to imagine a world where there was no strict concept of “time.” People would have looked ahead and to the past, but with very little quantifiable data to shape their perspective. Even with these devices and a greater degree of accuracy, time remained very different for different people.
Towns and cities would operate in their own time zones, rather than what we have today, where standards cover large swathes of countries. Previously, time would also have been communicated by word of mouth, with Rooney writing in his book about the people who would take the time from Greenwich and sell it to businesses across the capital.
Getting through any significant portion of the history of time would require a thesis-length article, so instead, we shall skip ahead to 1955 and the invention of the atomic clock.
Measuring time
The invention of the atomic clock, a clock “more accurate than the rotating Earth itself,” was, according to Rooney, “a profound idea.” By 1967, humanity had officially adopted atomic time, with the second redefined in terms of the cesium atom. The 1955 clock was the formative system behind those we rely on for timekeeping today.
But while it solved some problems, it created others. A clock more accurate than the Earth itself will not track the minuscule fluctuations in the planet’s rotation. The laboratories around the world that calculate the time must take this into account, finding ways to keep our solar timescale and that of the clocks aligned.
Our time is generated by the work of around 700 “high-end” atomic clocks in around 85 national laboratories around the world. Those labs send regular data to the International Bureau of Weights and Measures (BIPM) in France, which then calculates Coordinated Universal Time (UTC).
“UTC is really a paper timescale. It’s a calculation, and each of the labs gets informed of their offset, so what the gap is to UTC, and then there is the option to either slowly reduce that offset, or maintain it and inform the nation,” explains Dr. Leon Lobo, head of the National Timing Centre (NTC) of the National Physical Laboratory (NPL) in the UK – one of the aforementioned labs contributing to the system.
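To make that steering choice concrete, here is a toy sketch (our illustration; the cap and horizon values are invented, and this is not NPL’s actual procedure) of how a lab might slew its local timescale toward UTC once it learns its offset, rather than stepping the clock:

```python
# Toy steering: given the reported offset to UTC, apply a small, capped
# frequency adjustment instead of stepping the clock. Values invented.
MAX_RATE = 2e-9  # hypothetical cap on the frequency adjustment, in s/s

def steering_rate(offset_to_utc: float, horizon: float) -> float:
    """Correction (s/s) that closes `offset_to_utc` seconds over
    `horizon` seconds, clamped to keep the timescale smooth."""
    rate = -offset_to_utc / horizon
    return max(-MAX_RATE, min(MAX_RATE, rate))

# A lab running 20 nanoseconds ahead of UTC, steering over 30 days:
print(steering_rate(20e-9, 30 * 86_400))  # about -7.7e-15 s/s
```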
NPL operates a number of “clocks,” currently based on hydrogen masers, as well as a cesium fountain – a type of atomic clock that uses lasers to cool and “toss” cesium atoms upwards, measuring their rise and fall under gravity – which is used to measure the span of a second in a highly stable way.
These clocks are so sensitive that raising them by even a meter can affect their operation, as the minute change in gravitational potential alters the rate at which they tick. “We manage the temperature, the humidity, and vibration, and isolated plants manage the airflow. The electrical interferences are basically blocked out,” Dr. Lobo says.
While highly accurate, Dr. Lobo notes that all clocks “experience drift” at varying rates. The 1955 atomic clock, for example, drifted by one second over a 300-year period – in other words, after 300 years, the clock would be out by a second. “Our cesium fountains today drift by a second over 158 million years, and the clocks we are developing now – which will be next-generation optical atomic clocks – will be stable for the lifetime of the universe, or around 14 billion years. It’s not that the clock itself will last that long, but the stability is capable of it,” he explains.
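Those drift specifications translate directly into fractional frequency errors. A quick back-of-the-envelope conversion (ours, using the figures quoted above):

```python
# Converting the drift specs quoted above into fractional frequency error.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def fractional_error(drift_seconds: float, years: float) -> float:
    """'Drifts by drift_seconds over years' as a fraction of elapsed time."""
    return drift_seconds / (years * SECONDS_PER_YEAR)

print(f"1955 cesium clock: {fractional_error(1, 300):.1e}")    # ~1.1e-10
print(f"Cesium fountain:   {fractional_error(1, 158e6):.1e}")  # ~2.0e-16
print(f"Optical clock:     {fractional_error(1, 14e9):.1e}")   # ~2.3e-18
```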
One misconception noted by Dr. Lobo is the role of satellites in timekeeping. “The element that is not known is that satellite systems are actually dissemination methods for time, and not sources of it.”
Global navigation satellite systems (GNSS), including GPS, Russia’s GLONASS, China’s BDS, and the EU’s Galileo, are all linked to our concept of time.
These systems carry “time” data traceable to the global laboratories and the BIPM in France, and they are the way that many data centers and related digital infrastructure access time.
The satellites have atomic clocks on board, which are synchronized with those on Earth, and disseminate readings to the world en masse. While our mobile phones will get their “time” from an internal clock and the Internet, GNSS plays a major role in providing the source material.
While GNSS is the typical method used, it has its drawbacks. Elena Parsons, strategic business development manager at NPL, explains that relying on GNSS can be risky.
“GNSS signals are incredibly weak, and it’s quite easy to interfere with them – what we would call jamming, or even spoofing, where someone could change the time being delivered.”
The consequences of this would be significant. Without accurate timekeeping, data center networks would fail to function, and any industry reliant on digital infrastructure – which, in 2025, is almost every industry – would take a hit. According to a report by London Economics, a seven-day GNSS outage would cost the UK an estimated £7.6 billion ($9.5bn), or about £1.4 billion ($1.8bn) for a 24-hour outage.
Because of this, one of NTC’s main goals over the years has been to establish greater reliability and redundancy for the UK’s timing service. It is doing this by diversifying how time is transmitted, using fiber optic cables, communication satellites, terrestrial broadcasts, and radio signaling, as well as GNSS.
A Telehouse data center campus in London has taken advantage of this, and is now home to one of NPL’s service nodes, which delivers “continuous, assured timing signals over dedicated fiber optic cables that are traceable to UTC (NPL) and independent of GNSS” to the data centers and their customers.
NPL’s Parsons notes that the data center sector has been pretty receptive to the Time Service thus far. “One good thing is that the data center sector, in particular, is very familiar with redundancy. They have it in power infrastructure systems, in networks, so talking about it in a timing reference is resonating with them.”
When you arrive at the Telehouse London campus, you are immediately greeted by a large clock delivering time down to the last millisecond, directly sent by NPL. Watching the numbers rapidly ticking by as you sit in the waiting room creates an odd sense of urgency, but also demonstrates how seriously Telehouse takes its relationship with NPL.
Keeping time in the data center
Telehouse’s Sharp explains to DCD how the “Time Service” works. “Essentially, there is a master clock and a backup (or slave) clock,” he explains.
“Those are synchronized to ensure the time is accurate. We then take dual redundant fibers from each and bring them onto the site.

“We haven’t got a ‘clock’ as such on site, but we have a repeater that captures that timing essence. We have one in Telehouse South and Telehouse North, and those are then linked across so customers can choose between them or use both to have a resilient, redundant feed.”
Despite the service being available at the data center, it isn’t necessarily adopted by every customer. “We have customers who take GPS or GNSS antennas and put them on the roof,” Sharp says. “For example, the cloud providers often prefer to take a cookie-cutter approach to their operations around the world, and will just replicate it across all sites, using GPS antennas and converting the signal to a UTC time stamp.”
According to Telehouse Europe’s VP of sales, Will Scott, accurate timing is prioritized by a number of its customers – including financial services companies, with high-frequency traders needing “100 microseconds of accuracy on a time stamp to UTC, with NPL’s service accurate to one microsecond,” as well as streaming platforms and live broadcasters.
While Telehouse offers time as a service, it is up to the customers within its data center to use it wisely.
SiTime provides products specifically designed to help keep time accurate: MEMS (microelectromechanical systems) oscillators.
“We are the heartbeat of electronics. Literally any electrical device that runs today needs timing in there, and because digital electronic signals are all zeros and ones, to make sense of those, you need a baseline reference, which is a clock,” SiTime’s EVP of marketing, Piyush Sevalia, tells DCD.
SiTime’s solutions are based on silicon, rather than traditional quartz, which the company says makes them more resilient and stable.
According to Sevalia, synchronization in the data center is important on many levels. He offers the example of having a multi-data center campus all working on a single task – for example, AI training. “If your AI cluster has, say, ten clusters in one data center, or in multiple data centers, they all need to be synchronized with each other when you’re parallel processing the AI training tasks.”
Computer networks are synchronized using the IEEE 1588 standard – also known as the Precision Time Protocol.
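At its heart, PTP estimates a follower clock’s offset from a master by exchanging four timestamps and assuming the network path is symmetric. A simplified sketch of that core calculation (the real protocol adds message formats, hardware timestamping, and best-master-clock election):

```python
# Simplified sketch of the two-way timestamp exchange behind IEEE 1588.
#   t1: master sends Sync          t2: slave receives Sync
#   t3: slave sends Delay_Req      t4: master receives Delay_Req
def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # estimated one-way path delay
    return offset, delay

# Slave running 1.5 microseconds ahead, over a 10-microsecond path:
print(ptp_offset_and_delay(0.0, 11.5e-6, 20.0e-6, 28.5e-6))
# -> approximately (1.5e-06, 1e-05)
```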
While data centers will use this to remain in sync across clusters or even across buildings, Sevalia notes that they will also have “localized time sources so that the time at the data center is very accurate.”
Data centers haven’t always needed localized clocks, however. “Today’s bandwidth requires that pretty much all data centers need highly accurate local clocks,” Sevalia says. “Ten years ago, that was not the case. Maybe a couple of them may have needed it. Maybe they could have gotten by just synchronizing with the universal time that NIST (the US’ National Institute of Standards and Technology) puts out. Maybe they could have gotten by with that.” But as demand for lower latency has grown, data centers have been forced to adopt higher bandwidth, thus demanding a more accurate timekeeping system.
The importance of this was reiterated to DCD by Yiming Lei, a doctoral researcher at Germany’s Max Planck Institute for Informatics.
“In a traditional data center, there are lots of machines connected together and, for example, for the management of workloads or applications, a synchronized clock is needed,” he says. “If there are distributed databases running in a data center, and different machines processing different user requests, those will need to have a timestamp for each request to order the requests.” To return to our previous analogy, the book pages need to be numbered.
“Because requests are handled by different machines, this process needs to be relatively synchronized to have a global order. For example, if you used it for financial purposes, the accuracy of this content is really important because you need to decide who to sell stock to, for example.
“But, as long as it is a distributed application – which most popular applications running in data centers are – a synchronized clock can be used to some extent, but there are different levels of accuracy,” he explains.
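A minimal sketch (ours, not from the article; the record fields are hypothetical) of the ordering Lei describes: requests stamped on different machines are merged into one global order, which is only meaningful if the machines’ clocks agree to better than the gap between conflicting requests.

```python
from dataclasses import dataclass

@dataclass
class Request:
    timestamp_ns: int  # stamped by whichever machine handled the request
    machine_id: int    # tie-breaker when two timestamps are identical
    payload: str

def global_order(requests: list[Request]) -> list[Request]:
    """Merge per-machine logs into one global order. Only meaningful if
    the machines' clocks agree to better than the request spacing."""
    return sorted(requests, key=lambda r: (r.timestamp_ns, r.machine_id))

reqs = [Request(105, 2, "sell"), Request(103, 1, "buy"), Request(105, 1, "sell")]
print([r.payload for r in global_order(reqs)])  # ['buy', 'sell', 'sell']
```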
At a high level, the traditional way of doing this is by exchanging messages with time stamps.
Lei says that 20 years ago, packets would often be delivered with a timestamp coming from the software or operating system. Today, a timestamping function has been added to the network interface card or hardware, making timestamps far more accurate.
Time stamping and accuracy extend beyond the computing done in a data center, however, to the actual functioning of the facility. Dr. Luke Durcan is the system AI commercial and IP leader at Schneider Electric, a power technology company serving many data centers around the world. In Dr. Durcan’s experience, accurate timekeeping has proved key to figuring out operational problems in data centers on several occasions.
The company’s meters and UPSs have internal clocks, meaning that when “incidents” arise, they can be traced back to a particular moment. An example offered by Dr. Durcan was a repeating failure of cooling equipment at a client’s data centers.
Every day, at the same time, “30 percent of the units would just stop working,” he says. “We looked at the meters, and we could see a major power quality event was occurring at exactly the same time, so we were able to associate the power events with the cooling units, and because of that, we were able to go to the utility and see where the issue was.
“It turns out, there was another huge data center that was doing load testing at the same time and shedding 20MW at a time, and it was causing huge disturbance to the network.”
While in this case, the time stamping was important to be able to compare to our human perception of time, in other cases, the time itself is less important than ensuring different devices are synchronized.
“Most of the communication on a data center network is on Modbus, which is a very common and fairly straightforward protocol,” Dr. Durcan says. “Depending on when the asset is polled, that is the time it communicates – so, say I poll both a UPS and a meter at the same time, when it’s recorded in the system, it’s synchronized, but it’s not relative to NTP or a clock.
“We use block polling or sequence polling. So, say ‘zero time’ is the poll of the meter, then zero plus one is the UPS. Those two readings will be registered as zero and one, and then any other polls continue consecutively,” he explains. “Most of the data is relative to the infrastructure.” Then, there is ‘subcycle data’ or log data, which is interpreted through log files instead of Modbus.
“What’s important from a clock perspective is making sure that those subcycle events are synchronized, so if an event happens on the meter, the UPS log will correspond from a time scale perspective. That’s where it gets interesting. It’s not unusual to get multiple power quality events occurring at the same time, and then you do need a very high accuracy of synchronization to determine what’s a transient, a surge, or a sag. It’s bad regardless, but there are granularities of bad and good.”
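A hypothetical sketch of the sequence polling Dr. Durcan describes (the asset list and reader function are illustrative, not Schneider Electric’s implementation): readings are logged by their position in the poll cycle, relative to the infrastructure rather than to NTP or a wall clock.

```python
# Hypothetical sequence polling: "zero time" is the meter's poll, zero
# plus one is the UPS, and so on. The asset list is illustrative.
POLL_ORDER = ["meter", "ups", "cooling_unit"]

def poll_cycle(read_value, cycle_id: int) -> list[dict]:
    """Read each asset once; readings are ordered by poll position,
    not by NTP or a wall clock. `read_value` is caller-supplied."""
    return [
        {"cycle": cycle_id, "t_rel": pos, "asset": asset,
         "value": read_value(asset)}
        for pos, asset in enumerate(POLL_ORDER)
    ]

# Example with a stubbed-out reader that returns a dummy value:
print(poll_cycle(lambda asset: 0.0, cycle_id=1))
```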
Taking the leap
With so much relying on time and synchronization, the hope is that, provided everything operates accurately and consistently, issues shouldn’t arise.
While “drift” is a known and identified issue, another obstacle can cause a glitch in time’s arrow. Since adopting atomic clocks, one of the ways that scientists have made them match up to our solar day is through the concept of the “leap second.”
“Leap seconds can play havoc with digital systems, because suddenly you are adding an additional second into time, and digital systems have to ensure they implement it properly or everything can fall apart,” Dr. Lobo tells DCD.
The leap second is more or less as it sounds. When the disparity between UTC and the rate at which the Earth is spinning grows too large, the International Earth Rotation and Reference Systems Service (IERS) adds a second – as the last second of either June or December. In total, 27 leap seconds have been added since 1972. In 2022, however, at the General Conference on Weights and Measures (CGPM), governments globally voted in favor of ending the leap second by 2035.
The move to be rid of the leap second coincided with the Earth’s rotation speeding up, rather than slowing down, meaning that instead of adding a second, one may soon need to be deducted. The most recent leap second – added at the end of 2016 – caused a Cloudflare outage.
The 2012 leap second caused a major Facebook outage as the social media company’s Linux servers became overloaded trying to understand why they were transported back into the past.
Despite some failures occurring, the majority of tech companies find a way to handle the untimely shift.
“Some do it in advance of when we implement it, which is the last second of either June or December, depending on when it’s decided. Google, for instance, ‘smears’ it over every second of that day, for example,” says Dr. Lobo. “But nobody operates in isolation, particularly financial markets. They are global and are always interacting with other organizations in order to trade and transfer data and the like, and if they don’t do it in the same way, you would have massive sync issues.”
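A minimal sketch of a linear 24-hour smear in that spirit (the window length and shape here are our assumptions; Google has publicly described a noon-to-noon linear smear):

```python
# Linear smear: spread the leap second evenly across a 24-hour window so
# no timestamp ever repeats or jumps. Window length is our assumption.
SMEAR_WINDOW = 86_400.0  # seconds

def smear_applied(seconds_into_window: float) -> float:
    """Fraction of the leap second absorbed so far (0.0 to 1.0)."""
    return min(max(seconds_into_window / SMEAR_WINDOW, 0.0), 1.0)

# Halfway through the window, clocks read half a second off atomic time:
print(smear_applied(43_200))  # 0.5
```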
Conversations as to how the leap second will be removed are ongoing. One solution is to extend the threshold, says Dr. Lobo, so instead of a leap second, there is a leap minute, for example, that happens every hundred-odd years.
While not approaching time from a technical perspective, DCD asked David Rooney his thoughts on the latest shift. “My view has always been it’s the job of the coders to make it work, because what the leap second does is ensure that the time on the clocks of people around the world, the time in civil life, is connected to the rotation of the Earth and the passage of the sun through the sky, which is how we as humans experience time,” he says. “Even though we’ve invented the atomic clock as humans, we’re still animals, and as animals, we experience time by the rotation of the Earth and then by the seasons.”
Rooney adds: “The engineers are clever, and they made a system that they could make work, and I believe that engineers are still clever, and they could still make it work, and we could retain the system. The argument’s lost, fair enough. It’s not the end of the world.”
For now, while the leap second hasn’t reared its head in nine years, the solution to removing it for good is up for debate.
Optical time and optical data centers
One thing is clear – the next generation of both clocks and data centers is being explored, and both share the same name: optical.
In 2023, Google revealed a project dubbed “Mission Apollo.” The search and cloud giant wanted to replace traditional network switches with optical circuit switches, using light instead of electrons to send information, and created its own switches to do just that.
In traditional network topologies, signals jump between optical and electrical states, but many surmise that keeping these signals in an optical state for as long as possible will lead to efficiencies. After all, light travels at, well, the speed of light.
While Google is embracing the technology, known as photonic networking, this is by no means the standard approach, and one key change that comes with it is the need for even greater time and synchronization accuracy.
Max Planck’s Yiming Lei studied this phenomenon in his paper Nanosecond Precision Time Synchronization for Optical Data Center Networks. Speaking to DCD about his research, Lei explains that an optical data center network, with fewer conversions between electrical and optical signals, can “scale to the end of Moore’s law, and it’s much more energy efficient than traditional electrical switches.”
He continues: “The reason it requires time synchronization is related to how this network works. It doesn’t check the packet; it just forwards based on the current connection. Optical switches change their internal topology over time and forward accordingly, which means the end points of this network need to know the current configuration of the optical switches, and thus they need to have a clock synchronized to what configuration it has at a particular point in time.
“It needs to be highly synchronized. The current, more advanced, designs of optical data center networks tend to reconfigure this optical switch to the nanosecond level, and that’s as far as we can go right now.”
Simply put, better and more accurate clocks are needed, and Lei’s research established that they could reach a 28-nanosecond sync accuracy with their implementation of Nanosecond Optical Synchronization.
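To see why the requirement is so tight, consider a simplified model (our construction, with invented slot and configuration counts, not the design from Lei’s paper): the switch cycles through configurations on a fixed schedule, and each endpoint must derive the live configuration from its own clock, so clock error translates directly into sending into the wrong circuit.

```python
# Invented parameters: a switch cycling through 8 configurations,
# dwelling 2,000 ns in each. Endpoints derive the live configuration
# from their own (synchronized) clocks.
SLOT_NS = 2_000
NUM_CONFIGS = 8

def live_config(now_ns: int, schedule_epoch_ns: int) -> int:
    """Index of the switch configuration active at time `now_ns`."""
    return ((now_ns - schedule_epoch_ns) // SLOT_NS) % NUM_CONFIGS

# A 28 ns clock error (the accuracy Lei's team achieved) is a small
# fraction of a 2,000 ns slot, so endpoints almost always agree on
# which configuration is live.
print(live_config(now_ns=1_000_123_456, schedule_epoch_ns=0))
```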
SiTime, too, has seen some interest from data centers looking to use optical networking solutions.
Sevalia tells DCD that the company currently has “a bunch of customers who are using us in optical modules” because its technology offers “rejection of power supply noise and minimizing its impact on the timing signal that is much better than the quartz devices.”
The world of photonic or optical networks is garnering increasing attention, with the likes of Oriole Networks, Lightmatter, and Xscape Photonics investing heavily in developing and expanding the technology. But for now, it remains something of a niche approach.
With the technology needing more accurate clocks, the pursuit to push beyond cesium devices is also ongoing.
SiTime’s Sevalia tells DCD that timekeeping devices come in various forms that can be ranked by accuracy and stability: rubidium, cesium (the current standard), and optical – the clocks that NPL is currently developing, which use neutral strontium atoms held in an optical lattice.
It is these clocks that Dr. Lobo says have the potential to remain stable for the lifetime of the universe, and are being developed for various applications: quantum sensing, synchronization of high-speed networks, space science, and tests of fundamental physical theories.
It is when talking about this next generation of clocks that a gleam of unrestrained excitement emerges from Dr. Lobo. “The reason why we are developing the next generation of clocks beyond cesium is, firstly, because we can, but also because they are demonstrating stability better than what is the primary standard at the moment,” Dr. Lobo says.
“We are also looking at redefining the second within the next decade, moving from a cesium hyperfine transition to an optical transition in strontium or ytterbium or a combination of different elements, and it’s absolutely crucial because, from the point of view of all our use cases, most are already in the microsecond or nanosecond range, and there are many that are shifting beyond that as well.
“We will always need more stringent time, more precision, and breaking down events into shorter and shorter fractions of a second. In order to be able to do that, the highest-end clocks and the national metrology institutes need to be several orders of magnitude better.”
On a human level, introducing optical clocks and defining a new “second” will not materially change our everyday lives. Most of us can simply ignore it.
But this is something Dr. Lobo wants to change – our collective ignorance of time.
“Unfortunately, it is very much that invisible utility that supports everything, and no one really stops to consider where they get their time from, or what they would do if they lost it,” he says.