While there is perhaps a general perception to the contrary, 5G systems are no longer just the subject of research within major telecoms companies or the topic of conference presentations at industry forums. The reality is that major OEMs will be deploying 5G systems within the next few years, which means that developments are already advancing rapidly. For example, Ericsson, working with NTT DOCOMO, aims to launch 5G services in Japan in time to support the 2020 Tokyo Olympics. In preparation, it will demonstrate the capability of 5G at the 2018 Winter Olympics in South Korea, in partnership with SK Telecom.

This impending availability of data services based on 5G network technology will provide online access to more data, even faster. Such immediacy of information will allow many of today’s advanced technology applications, such as autonomous vehicles and virtual reality or augmented reality systems, to dispense with locally stored data and rely instead on the cloud.

Addressing network latency

For this to be effective, network latency needs to be less than 1ms. This not only requires 5G infrastructure installed in datacentres but also requires datacentres to be located closer to the users and the cellular radio towers that serve them; a datacentre some 400km away simply isn't going to cut it. While data will still need to reside upstream, it will also need to be readily available in more remote locations, at the edge of the network. This change potentially negates the trend for siting datacentres close to power plants that can supply their massive energy demands, or in climates where the cooling requirement, and the consequent additional energy demand, is reduced.
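A back-of-envelope calculation shows why distance alone rules out far-away datacentres for a 1ms budget. The sketch below assumes the commonly cited figure of roughly 200,000km/s for light in optical fibre (about two-thirds of its vacuum speed); switching, queuing and processing delays would only add to this.

```python
# Propagation delay alone for a request/response over optical fibre.
# Assumes light in fibre travels at ~200,000 km/s (about 2/3 of c,
# due to the refractive index of the glass).

FIBRE_SPEED_KM_PER_MS = 200.0  # ~2e5 km/s expressed per millisecond

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds, fibre only."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_MS

# A datacentre ~400 km away exceeds the 1 ms budget on propagation
# alone; an edge site needs to be within a few tens of kilometres.
print(round_trip_ms(400))  # 4.0
print(round_trip_ms(50))   # 0.5
```

Even before any processing time is counted, a 400km hop consumes four times the entire latency budget, which is the underlying case for edge datacentres.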

Part of the solution lies in the recent growth of micro-datacentres, whose lower individual capacity but greater numbers will be sufficient to support this more distributed cloud infrastructure. Even so, utility power supplies may still be constrained, making it even more vital that all the power capacity provisioned within the datacentre can be made available and used effectively. This may not be an issue for micro-datacentres on the periphery of major cities, where grid power is plentiful, but it will be in more rural areas where the infrastructure is less well developed. Such systems cannot afford to lock up capacity just because they have been scaled to meet peak demand or to provide redundancy for mission-critical activity.

The other part of the solution comes from deploying software-defined power (SDP), based on a combination of hardware and software that can intelligently and dynamically allocate power throughout the datacentre. Before delving into the solution, however, it is worth understanding the problem more clearly. Essentially there are three scenarios related to the distribution and management of power in traditional datacentres that result in their capacity requirement being over-specified and underutilized.

The first, in tier 3 or tier 4 datacentres, is the need to provide 100% redundancy for mission-critical tasks. This means that every element in the power supply path, from the external utility supply and back-up generator, through the UPS and power distribution unit (PDU), to the server racks and individual servers, is duplicated. This is typically the case even though some servers run no mission-critical tasks and so are not dual-corded. Consequently, if say half the workload of a datacentre is non-critical, then half the redundant power capacity provisioned for those servers is not required, meaning that a staggering one quarter of the datacentre's total power capacity cannot be accessed: it has been allocated to those servers even though it is theoretically available.
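The arithmetic behind that "one quarter" figure can be made explicit. The sketch below assumes a fully redundant (2N) design, in which every unit of provisioned load has a matching unit of redundant capacity:

```python
# Stranded-capacity arithmetic for a fully redundant (2N) datacentre.
# Total capacity = 2 units (primary + redundant) per unit of load.
# The redundant capacity reserved for non-critical workloads is never
# needed, so it sits idle ("stranded").

def stranded_fraction(non_critical_share: float) -> float:
    """Fraction of *total* 2N capacity locked up unnecessarily."""
    # Of the 2 units provisioned per unit of load, 1 is redundant;
    # the stranded part is the redundant unit times the non-critical share.
    return non_critical_share / 2.0

# Half the workload non-critical -> a quarter of total capacity stranded.
print(stranded_fraction(0.5))  # 0.25
```

The same formula shows that even a modest 20% non-critical share strands a tenth of the entire provisioned capacity.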

Then there are two instances where power provision is scaled to cope with peak loads. One of these is determined by CPU utilization and the type of task being undertaken, where some tasks are inevitably more intensive than others. For example, Google has shown that servers handling web mail have an average-to-peak power ratio of 89.9%, while web search loads have a lower ratio of 72.7%. So, specifying all servers for web mail means that servers destined only to be used for web search will carry at least 17% surplus power capacity.
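The surplus follows directly from the two ratios quoted above; the small sketch below simply takes their difference:

```python
# Surplus headroom when every server is provisioned for the most
# power-intensive profile. Ratios are the Google figures quoted in
# the text (average power as a fraction of peak power).

WEB_MAIL_RATIO = 0.899    # average-to-peak ratio, web mail
WEB_SEARCH_RATIO = 0.727  # average-to-peak ratio, web search

# A server provisioned for web mail but running only web search
# carries the difference between the two ratios as unused capacity.
surplus = WEB_MAIL_RATIO - WEB_SEARCH_RATIO
print(f"{surplus:.1%}")  # 17.2%
```
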

The other utilization case is where the load varies over time. This may follow a daily pattern, but it can also result from highly dynamic changes that depend on the task being undertaken. For example, a server rack's actual power usage might typically be 8-10kW, but if demand can peak at 16kW then power capacity must be provisioned for 16kW.

As mentioned earlier, SDP provides a solution for power management in all datacentres, whether for traditional cloud computing and storage or for the nimbler micro-datacentres needed to serve low-latency 5G applications. SDP supports everything from optimizing the voltages in a server rack's distributed power architecture through to dynamically managing power sources and implementing peak shaving. Peak shaving addresses the problem of dynamic load variations, which can peak well above nominal demand levels. Storing energy in batteries during periods of low utilization allows an instantaneous response to surges in demand, avoiding the need to over-provision supply capacity, as illustrated in the figure below.

Fig. 1: A 16-kW rack running with 8-10kW of utility power.
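The behaviour in Fig. 1 can be sketched in a few lines. The simulation below is illustrative only: the utility feed is capped at 10kW, and a hypothetical battery (sizes are assumptions, not figures from a real ICE deployment) discharges to cover demand above the cap and recharges from spare headroom when demand drops back.

```python
# Minimal peak-shaving sketch for the 8-10 kW rack with 16 kW peaks
# described above. Battery size and sample interval are illustrative
# assumptions, not parameters of any real system.

def peak_shave(demand_kw, utility_cap_kw=10.0,
               battery_kwh=2.0, step_h=1/60):
    """Return the per-step utility draw for a demand profile (kW samples)."""
    stored = battery_kwh              # start with a full battery
    utility = []
    for d in demand_kw:
        if d > utility_cap_kw:        # discharge to shave the peak
            shave = min(d - utility_cap_kw, stored / step_h)
            stored -= shave * step_h
            utility.append(d - shave)
        else:                         # spare headroom recharges the battery
            charge = min(utility_cap_kw - d,
                         (battery_kwh - stored) / step_h)
            stored += charge * step_h
            utility.append(d + charge)
    return utility

# A 16 kW surge is served while the utility feed never exceeds 10 kW.
profile = [8.0] * 30 + [16.0] * 10 + [8.0] * 30  # one-minute samples
draw = peak_shave(profile)
print(max(draw))  # 10.0
```

The design point is the same one the figure makes: provisioned supply capacity tracks the capped utility draw rather than the instantaneous peak, so the rack can be fed from an 8-10kW supply despite its 16kW surges.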

A specific solution, offered by Virtual Power Systems (VPS) in partnership with CUI, is the Intelligent Control of Energy, otherwise known as ICE. This complete power management capability can be deployed in both existing and new datacentre installations and comprises power switching and Li-ion battery storage modules from CUI along with an operating system from VPS. ICE can deliver a reduction of up to 50% in the total cost of ownership of server power installations by releasing redundant capacity from non-mission-critical systems, managing load distribution and maximizing utilization.