Several aspects of 5G are conspiring to potentially overwhelm the fronthaul network. The industry is working on a standard to avoid future bottlenecks.
We all know the 5G promise: ultra-fast speed and higher responsiveness (lower latency), but have you ever really thought about what that means to the network behind the cell tower? Several aspects of 5G are conspiring to potentially overwhelm the fronthaul network—one of the links between your phone and the internet.
Consider your experience on the currently deployed 4G LTE network. When many people start using their phones at once, like at halftime in a packed stadium, or when a plane lands and everyone powers their phones back on, connection speeds seem to grind to a halt and network responsiveness drops to zero. What happens when each of those phones is trying to stream 4K video or download data at 100× current 4G speeds?
Can we handle the speed?
First, the radios in phones and on top of the cell tower will need to handle the higher data transfer rates. Using new 5G frequency bands (including higher-frequency millimeter-wave bands), wider bandwidths, and more complex modulation schemes, the wireless link should be able to support higher 5G transfer rates and lower latencies. Qualcomm, which builds the modem chipsets at the heart of smartphone and base station radios, recently demonstrated 4 Gbps peak transfer rates as a convincing data point.
But new 5G base stations don’t just have to deal with higher transfer rates per user, they also must support more users per cell site. A new 5G technology, massive MIMO (multiple input, multiple output), will leverage large antenna arrays on the cell tower to handle more users at the same time, while improving transfer rates and connection reliability.
A third complicating factor is that massive MIMO requires many more RF data streams than the number of users being serviced. A separate signal is needed for each antenna and more than 100 antennas may be used.
Fourth, the centralization of the radio access network (called C-RAN) is moving some radio processing functions back into the mobile core, which means larger uncompressed radio data needs to transit a larger portion of the network.
It’s a perfect storm: higher transfer rates to each user, more users, many more data streams than users, and larger uncompressed data streams. Each one of these changes would stress the connection from the base station to the internet; combined, they pose a much larger problem.
The heart of the problem
The radio on the cell tower connects to the internet through a fronthaul network and a backhaul network. The fronthaul network transports uncompressed radio data to a processing site called the baseband unit (BBU). The backhaul network transports packet data from the BBU to the internet.
The BBU was previously located at the base of the cell tower, but network virtualization and C-RAN enable operators to move BBUs away from individual cell towers and closer to the center of the network. Service providers like C-RAN due to the infrastructure cost reduction that can be realized with centralized processing and management.
4G LTE networks use a 2003 fronthaul standard called the Common Public Radio Interface (CPRI). CPRI supports data rates up to 10 Gbps over link spans of more than 10 km (a little over 6 miles), which works for 4G LTE networks employing C-RAN architectures. New 5G base stations, however, may require fronthaul data rates higher than 600 Gbps, which could demand new fiber installations that service providers find financially unacceptable.
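A back-of-envelope calculation shows where a figure in the hundreds of gigabits per second comes from. The sketch below uses illustrative (not standardized) parameters: a 100 MHz carrier sampled at 122.88 Msps, 16-bit I and Q samples, a 128-antenna massive MIMO array, and 8b/10b line coding as in early CPRI rates.

```python
# Back-of-envelope CPRI-style fronthaul rate for one 5G sector.
# All parameter values are illustrative assumptions, not taken
# from the CPRI or 3GPP specifications.

def fronthaul_rate_gbps(sample_rate_hz, bits_per_iq_sample, num_antennas,
                        line_coding_overhead=10 / 8):
    """Raw I/Q fronthaul bit rate in Gbps, including line-coding overhead."""
    raw_bps = sample_rate_hz * bits_per_iq_sample * num_antennas
    return raw_bps * line_coding_overhead / 1e9

# 100 MHz carrier sampled at 122.88 Msps, 16-bit I + 16-bit Q (32 bits
# per complex sample), 128 antennas, 8b/10b line coding (x10/8).
rate = fronthaul_rate_gbps(122.88e6, 32, 128)
print(f"{rate:.0f} Gbps")  # ~629 Gbps -- far beyond CPRI's 10 Gbps ceiling
```

The key driver is that the I/Q rate scales linearly with antenna count, so a massive MIMO array multiplies the fronthaul load by the number of antennas, not the number of users.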
A solution on the horizon?
A new fronthaul standard, enhanced CPRI (eCPRI), which carries fronthaul traffic over Ethernet, was introduced in 2017. The eCPRI standard proposes moving some of the signal processing performed at the BBU to the radio on the cell tower. It's unclear whether this proposal will be widely adopted, however, because it runs counter to the virtualization of network functionality associated with C-RAN.
Other proposed solutions include different levels of functional split between the radio unit and BBU (which have various tradeoffs in terms of infrastructure cost and ease-of-management), data compression (which may not work due to the lower latency required by 5G), or a reduction in the number of antennas on massive MIMO base stations.
Service providers and network equipment manufacturers are evaluating different solutions to the fronthaul data crunch. System-level simulation is an important tool in this type of evaluation. By parameterizing system models, different architectures can be evaluated and the fronthaul data load can be holistically compared.
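As a toy illustration of that kind of parameterized comparison, the sketch below contrasts the fronthaul load of a CPRI-style low-layer split (raw I/Q per antenna) with an eCPRI-style higher-layer split (only user data crosses the fronthaul). The function names and numbers are hypothetical, chosen only to show how the two architectures scale with antenna count.

```python
# Toy parameterized model comparing fronthaul load under two
# functional splits. All names and values are hypothetical.

def iq_split_gbps(sample_rate_hz, bits_per_iq_sample, num_antennas):
    # Low-layer (CPRI-style) split: raw I/Q samples for every
    # antenna must cross the fronthaul.
    return sample_rate_hz * bits_per_iq_sample * num_antennas / 1e9

def packet_split_gbps(peak_cell_throughput_gbps, overhead=1.1):
    # Higher-layer (eCPRI-style) split: only user-plane data plus
    # control overhead crosses the fronthaul, independent of the
    # number of antennas.
    return peak_cell_throughput_gbps * overhead

# Sweep the antenna count to see how each split scales
# (100 MHz carrier at 122.88 Msps, 32 bits per I/Q sample,
# assumed 20 Gbps peak cell throughput).
for antennas in (64, 128, 256):
    print(antennas,
          round(iq_split_gbps(122.88e6, 32, antennas), 1),
          round(packet_split_gbps(20.0), 1))
```

Even this crude model captures the core trade-off the article describes: the higher-layer split keeps fronthaul load flat as antennas are added, but only by moving processing out to the radio unit, working against C-RAN centralization.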
In the meantime, engineers have an interesting vantage point as the trade-offs between throughput and latency performance, connection reliability, network architecture, and infrastructure deployment cost play out in academic journals and news articles.
Will Sitch is director of industry and solutions marketing at Keysight Technologies.