The evolution of cellular air conditioning

Article By : William Sitch

If you can't stand the heat, get out of the cabinet. Cellular network topology is determined in part by heat considerations, and in a weird sort of way, that's a good thing.

Mobile networks have an interesting cost driver lurking behind the scenes: air conditioning at remote sites. From the introduction of cellular in the early 1980s through the deployment of 3G service in the 2000s, antennas were mounted on towers or buildings and connected through lossy metal coaxial cables to a small weatherproof cabinet near the base of the tower. The cabinet contained power-hungry radio equipment and amplifiers that consumed a lot of power and generated a lot of heat. It needed air conditioning to avoid equipment damage, and cooling could account for 20 to 30 percent of the annual cost of operating the tower.

The 3G standard brought many technology advances, some of which alleviated the air-conditioning expense. One architectural change was responsible for the largest shift in energy usage and heat production: 3G radio equipment was split into two parts.

Baseband processing (which converts a digital bitstream from the network into a baseband radio signal) was separated from up-conversion and amplification (where the baseband radio signal becomes a higher-power RF radio signal). Up-conversion and amplification components were packaged into a remote radio head (RRH) and mounted on the cell tower near the antenna. The proximity to the antenna meant that far less power was required to overcome cable losses, and thus the amplifiers no longer had to be actively cooled.

Low-loss fiber connectivity to the remote radio head allowed distances of up to 6 miles between the radio head and the baseband unit (BBU). This enabled massive consolidation, moving the bulk of baseband processing into a regional office, often dubbed a “baseband hotel” because it housed multiple BBUs. The co-location ushered in a whole host of additional optimizations, including lower-latency coordination between cell sites, reduced inter-site interference, more reliable user handoffs, and improved coverage via coordinated multi-point (CoMP) transmissions.

History repeats itself
Many technology transitions are cyclic. Consider the shift from centralized mainframe computing to independent PCs with local file storage. That trend has come full circle, with the recent mass centralization of compute resources in public clouds leading many to mourn the demise of the PC.

I expect a similar cycle in cellular, as network function virtualization (NFV) and software-defined networking (SDN) dramatically change the way networks are built. 5G operators can leapfrog some of the tribulations of this cycle by learning from the last decade of public cloud evolution.

Amazon, Microsoft, and Google centralized massive amounts of compute and networking into mega data centers, but customers quickly found that application response time suffered. Cloud providers adapted with a hybrid architecture that pushes latency-sensitive operations to the edge of the network, while keeping many non-latency sensitive functions in the core.

5G operators will need a similar strategy to meet latency requirements for real-time services such as augmented reality and self-driving automobiles.

Evolution of the baseband hotel
In a 5G network, NFV separates software from hardware. Services that once ran on proprietary hardware, such as routing, load balancing, firewalls, video caching, and transcoding, can be deployed on standard servers, and these workloads can be placed anywhere in the network.
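
To make the idea concrete, here is a minimal Python sketch of network functions implemented as ordinary software that can be chained together and run on any commodity server. It is purely illustrative; the function names and the packet model are hypothetical and do not reflect any operator's or vendor's API.

```python
# Illustrative sketch: virtualized network functions as plain software.
# All names and the packet model are hypothetical, for explanation only.

from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Packet:
    src: str
    dst: str
    dscp: int = 0            # QoS marking
    payload: bytes = b""


def firewall(pkt: Packet) -> Optional[Packet]:
    """Drop traffic to a blocked destination; pass everything else."""
    blocked = {"203.0.113.99"}
    return None if pkt.dst in blocked else pkt


def video_cache(pkt: Packet) -> Optional[Packet]:
    """Stand-in for a caching function: tag cacheable video traffic."""
    if pkt.payload.startswith(b"VIDEO"):
        pkt.dscp = 34        # mark for assured forwarding
    return pkt


def run_chain(pkt: Packet,
              chain: List[Callable[[Packet], Optional[Packet]]]) -> Optional[Packet]:
    """Apply each network function in order; a None result means 'dropped'."""
    for vnf in chain:
        pkt = vnf(pkt)
        if pkt is None:
            return None
    return pkt


if __name__ == "__main__":
    chain = [firewall, video_cache]   # deployable on any standard server
    out = run_chain(Packet("198.51.100.7", "192.0.2.10", payload=b"VIDEO..."), chain)
    print(out)
```

Because each function is just software, the same chain can be instantiated in a regional data center, a baseband hotel, or an edge site, which is exactly the placement flexibility that NFV provides.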

5G also introduces network slicing to maximize the utility of these capabilities. With network slicing, an operator can provision specific sub-interfaces at the air-interface level and map them to specific network function chains. This allows providers to deliver highly differentiated services, similar to the way a local area network can offer quality of service for different traffic flows.


Figure: There are multiple deployment scenarios for mobile edge computing (MEC) servers. Those depicted here are among the first supported by ETSI ISG MEC, the body managing MEC standards. Source: ETSI

Network slicing takes this to the next level by pairing network functions with specific services. For example, one slice could be configured for low-latency, low-bandwidth IoT, another for secure private cellular networking, and a third to support standard 5G consumer access, all delivered via the same cellular infrastructure and over the same spectrum.
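
As a rough illustration, the three slices described above might be represented as configuration records that pair an air-interface identifier with a latency and bandwidth profile and a chain of network functions. The sketch below is a hypothetical Python model, not a 3GPP or vendor configuration format; all field names and values are assumptions.

```python
# Hypothetical sketch of the slice-to-function-chain mapping described above.
# Field names and values are illustrative, not a 3GPP or vendor schema.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Slice:
    name: str
    air_interface_id: int          # maps to a sub-interface at the radio
    max_latency_ms: float
    max_bandwidth_mbps: float
    function_chain: List[str]      # ordered network functions for this slice


SLICES: Dict[str, Slice] = {
    "iot": Slice("low-latency IoT", 1, max_latency_ms=10,
                 max_bandwidth_mbps=1, function_chain=["firewall", "iot_gateway"]),
    "private": Slice("secure private cellular", 2, max_latency_ms=20,
                     max_bandwidth_mbps=100, function_chain=["firewall", "vpn", "ids"]),
    "consumer": Slice("standard 5G access", 3, max_latency_ms=50,
                      max_bandwidth_mbps=1000, function_chain=["firewall", "video_cache", "nat"]),
}

# All three slices share the same physical infrastructure and spectrum;
# only the mapping from sub-interface to function chain differs.
for s in SLICES.values():
    print(f"{s.name}: interface {s.air_interface_id} -> {s.function_chain}")
```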

One of the most interesting new deployment models that builds on network slicing is multi-access edge computing (MEC), also known as mobile edge computing. MEC is an architectural approach in which the 5G carrier moves specific services much closer to the edge of the network, similar to the way Amazon provides Lambda processing at the edge of its cloud.

The result is much lower latency, which helps meet the requirements of next-generation applications, such as the 15 ms motion-to-photon response target needed to minimize user discomfort in augmented reality and virtual reality applications. MEC can also reduce core loading by caching data such as video at the edge.
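
To see why proximity matters, consider a back-of-the-envelope latency budget. The figure of roughly 5 µs of one-way propagation delay per kilometer of fiber is a standard rule of thumb; the distances and the split of the remaining budget are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope latency budget against a 15 ms motion-to-photon target.
# The 5 microseconds-per-km fiber figure is a common rule of thumb;
# the distances are illustrative assumptions.

FIBER_US_PER_KM = 5.0        # one-way propagation delay in fiber


def round_trip_ms(distance_km: float) -> float:
    """Round-trip fiber propagation delay in milliseconds."""
    return 2 * distance_km * FIBER_US_PER_KM / 1000.0


BUDGET_MS = 15.0             # motion-to-photon target cited above

for label, km in [("regional core, ~400 km", 400), ("metro edge (MEC), ~20 km", 20)]:
    transport = round_trip_ms(km)
    remaining = BUDGET_MS - transport
    print(f"{label}: transport {transport:.1f} ms, "
          f"{remaining:.1f} ms left for radio, rendering, and display")
```

Even before radio and processing delays are counted, serving the flow from a distant regional core consumes several milliseconds of the budget that an edge site would leave free for rendering.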

The interesting implication is that MEC has the potential to turn baseband hotels back into mini data centers. In dense urban areas, these may grow into sizable data centers, co-located with Amazon, Microsoft, Google, and other enterprise data centers that are undergoing their own migration to the edge of the internet.

While Gartner believes the edge will eat the cloud, I predict carriers will take advantage of the expansion of the cloud edge, using context-aware 5G equipment to vector urgent low-latency traffic to the edge while centrally processing bulk traffic.

This network configuration is referred to as a cloud radio access network (C-RAN). C-RAN offers service providers considerable cost savings and management benefits, which suggests to me that the MEC architecture will be used selectively: to reduce network traffic and for premium applications that require low latency.
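
In very simplified form, that selective use of MEC could look like a steering policy that classifies flows by latency sensitivity and dispatches them either to a nearby edge site or to the centralized C-RAN core. The sketch below is hypothetical; the service names, threshold, and site labels are assumptions made for illustration.

```python
# Simplified, hypothetical traffic-steering policy: latency-critical flows go
# to the nearest MEC site, bulk traffic goes to the centralized core.

from dataclasses import dataclass


@dataclass
class Flow:
    service: str           # e.g. "ar_rendering", "video_stream", "email"
    max_latency_ms: float  # latency requirement claimed by the application


EDGE_CAPABLE = {"ar_rendering", "v2x", "cloud_gaming"}   # assumed premium services
EDGE_LATENCY_THRESHOLD_MS = 20.0


def steer(flow: Flow) -> str:
    """Return the processing location for a flow under this illustrative policy."""
    if flow.service in EDGE_CAPABLE and flow.max_latency_ms <= EDGE_LATENCY_THRESHOLD_MS:
        return "mec-edge-site"
    return "c-ran-core"


if __name__ == "__main__":
    for f in [Flow("ar_rendering", 15), Flow("video_stream", 100), Flow("email", 1000)]:
        print(f.service, "->", steer(f))
```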

Does all this mean that air conditioning is coming back to the cell-site equipment cabinet? Probably not; cooling technologies have made their own improvements. But distributed applications mean that more processing will occur at the edge, and that translates to higher cooling-related OPEX somewhere close to the edge of the network.

Fortunately, there are plenty of opportunities to offset those costs, from creating premium low-latency services to creating blended offers that combine prime edge real estate with dedicated network slices to deliver unique services. Of course, service providers will need to architect and validate these new hybrid network topologies in the lab before deployment and then in the field, to predict and later confirm expected network behavior. Network function testing and network traffic visibility tools will be an integral part of this validation. But all this just might be worth firing up a few more air conditioners.

Will Sitch is director of industry and solutions marketing at Keysight Technologies.
