Building a robust network for millimeter wave communications presents a set of new challenges that will have ramifications for cell design, cell placement, and radio access network architecture.
5G promises 10× faster data rates and a 100× increase in network capacity, a feat made possible only by harnessing the wider bandwidths available above 20 GHz. Vast swaths of spectrum from 24 GHz to 300 GHz, loosely referred to as millimeter-wave (mmWave), are the latest target for bandwidth-hungry applications. 5G NR Release 15 specifies frequency range 2 (FR2) for mmWave operation from 24.25 GHz to 52.6 GHz, with three bands of roughly 3,000 MHz each, starting at 24.25 GHz, 26.5 GHz, and 37 GHz, initially scoped for commercial use, subject to local regulatory control.
First movers have demonstrated commercially viable fixed 5G mmWave solutions, and 802.11ad (originally known as WiGig) served as a proving ground for light mobile communications between 57 and 66 GHz.
We learned some important and pragmatic lessons from that technology: achieving success with mobile mmWave technology will require innovation and investment at every level of the network, including the RAN, the access node, the antenna, and the handset.
And while innovation will push the limits of what is possible, mmWaves cannot defeat the laws of physics: smaller waves are more susceptible to atmospheric and environmental interference. That’s why really small waves present really big challenges.
Honey, I shrunk the internet
Rick Moranis famously reduced the size of his children in the 1989 classic, Honey, I Shrunk the Kids. The premise provides a useful mental reference as we consider mmWave behavior. 4G radio waves, spanning 700 MHz to about 2.5 GHz, measure roughly 43 cm down to 12 cm in length. In contrast, a 5G NR wave at 28 GHz is just over 1 cm long. And that's near the low end of the mmWave frequencies; 60 GHz signals are just 5 mm in length.
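The wavelengths quoted above follow directly from λ = c/f. A minimal sketch (using the common c ≈ 3×10⁸ m/s approximation):

```python
C = 3.0e8  # speed of light in m/s (approximation)

def wavelength_cm(freq_hz: float) -> float:
    """Free-space wavelength in centimeters."""
    return C / freq_hz * 100

print(round(wavelength_cm(700e6), 1))  # low-band 4G: ~42.9 cm
print(round(wavelength_cm(2.5e9), 1))  # mid-band 4G: ~12.0 cm
print(round(wavelength_cm(28e9), 2))   # 5G FR2: ~1.07 cm
print(round(wavelength_cm(60e9), 1))   # WiGig band: ~0.5 cm (5 mm)
```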
Like tiny kids, tiny waves “see” the world from a different perspective: everything that used to seem small is now very big. A signal with a 20 cm wavelength hitting a surface with millimeter-scale irregularities reflects specularly, but aim a signal with a millimeter-scale wavelength at the same surface and it scatters into diffuse bounces. Reflections therefore degrade a 1 cm (30 GHz) signal far more severely, and free-space path loss, which grows with frequency, compounds the problem.
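The frequency dependence of path loss can be made concrete with the standard free-space path loss formula, FSPL(dB) = 20·log10(d_km) + 20·log10(f_GHz) + 92.45. A minimal sketch (the 200 m distance matches the cell sizes discussed later in the article):

```python
import math

def fspl_db(distance_km: float, freq_ghz: float) -> float:
    """Free-space path loss in dB for distance in km and frequency in GHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

# Moving from 2 GHz to 28 GHz at a fixed distance adds 20*log10(28/2) ≈ 23 dB.
print(round(fspl_db(0.2, 2.0), 1))   # ≈ 84.5 dB at 200 m, 2 GHz
print(round(fspl_db(0.2, 28.0), 1))  # ≈ 107.4 dB at 200 m, 28 GHz
```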
Furthermore, small waves are more susceptible to atmospheric interference, especially at 24 GHz and 60 GHz, due to the absorption characteristics of water vapor and oxygen, respectively. They also don’t penetrate well through concrete and low-e glass, commonly used in industrial buildings, skyscrapers, and many European homes. mmWaves are particularly challenged by organic matter: hands, heads, and even tree leaves introduce unacceptable losses that must be avoided.
Understanding the challenges
Designers must deal with four new challenges not commonly seen at lower frequencies:
Atmospheric absorption
Figure 1 shows the atmospheric losses due to absorption of radio signals by water vapor (24 GHz) and oxygen (60 GHz). These losses, typically below 1 dB for first-generation designs, are mitigated primarily by avoiding the highest-loss resonant frequencies. In addition to normal path loss, special consideration must be given to rain fade: at 7 dB/km for rainfall rates of 1 inch/hr, a 200 m cell loses about 1.4 dB of link budget, and heavier rain costs more. While a strong signal will not suffer, a weak or marginal signal may become unusable in heavy rain.
Figure 1 Atmospheric absorption of millimeter waves.
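The rain-fade penalty is a simple linear scaling of per-kilometer attenuation over the path length. A minimal sketch, using the 7 dB/km and 200 m figures quoted above:

```python
def rain_fade_db(atten_db_per_km: float, distance_m: float) -> float:
    """Rain attenuation over a path, given a per-kilometer rate."""
    return atten_db_per_km * distance_m / 1000.0

# 7 dB/km at 1 inch/hr over a 200 m mmWave cell:
print(rain_fade_db(7.0, 200.0))  # 1.4 dB
```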
Line of sight
As noted earlier, the biggest challenge for millimeter waves is not the atmosphere; it is obstructions. Where lower-frequency waves pass through foliage and the human body, millimeter waves suffer severe impairment. ITU CCIR Report 236-2 provides a simple formula to compute foliage loss (L) in dB:
L = 0.2 × f^0.3 × R^0.6 dB
where f is the frequency in MHz and R is the depth of foliage traversed in meters. A few large oak trees with 10 m of foliage represent >17 dB of loss at 28 GHz – definitely a significant concern. Foliage is actually more challenging than common indoor materials such as non-tinted glass (3.6 dB) and drywall (6.8 dB).
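The oak-tree figure above falls straight out of the CCIR formula. A minimal sketch:

```python
def foliage_loss_db(freq_mhz: float, depth_m: float) -> float:
    """CCIR Report 236-2 foliage loss: L = 0.2 * f^0.3 * R^0.6 dB,
    with f in MHz and R in meters."""
    return 0.2 * freq_mhz ** 0.3 * depth_m ** 0.6

# 10 m of foliage at 28 GHz (28,000 MHz):
print(round(foliage_loss_db(28_000, 10), 1))  # ≈ 17.2 dB
```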
Short channel coherence times
Even before considering 5G NR framing, mmWave channel coherence times are drastically shorter than those at sub-6 GHz frequencies. The channel coherence time Tc can be approximated as 1/(2 × Doppler spread), and the Doppler spread scales linearly with carrier frequency. At a vehicle speed of 30 kph (about 19 mph), Tc at 2 GHz is approximately 9 ms; at 28 GHz, it drops to ~643 μs.
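The coherence-time approximation above can be reproduced directly: the Doppler spread is f_D = v·f/c, and Tc ≈ 1/(2·f_D). A minimal sketch (c ≈ 3×10⁸ m/s):

```python
C = 3.0e8  # speed of light in m/s (approximation)

def coherence_time_s(speed_mps: float, freq_hz: float) -> float:
    """Approximate channel coherence time Tc = 1 / (2 * Doppler spread)."""
    doppler_hz = speed_mps * freq_hz / C
    return 1.0 / (2.0 * doppler_hz)

v = 30 / 3.6  # 30 kph in m/s
print(round(coherence_time_s(v, 2e9) * 1e3, 1))  # ≈ 9.0 ms at 2 GHz
print(round(coherence_time_s(v, 28e9) * 1e6))    # ≈ 643 μs at 28 GHz
```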
Practically speaking, the mobile phone and network must assess the channel state far more frequently. Although higher data rates partially offset the cost of more frequent channel state information collection, there is a net increase in overhead, decreasing the total potential throughput of the system. The penalty grows with the velocity of the endpoint.
Beamforming is essential
With high path loss, new techniques are required to reach the design distances of 5G in mmWave. Phased array beamforming is one technique that can be used to overcome path loss.
At millimeter wavelengths, individual antenna elements are only 2.5 mm to 5 mm across. As a result, very dense antenna arrays can be designed for base stations, and directional MIMO arrays can be implemented on the PCB of a phone. With appropriate beam acquisition and beam tracking on both ends of the link, it is possible to achieve design targets of 150 m to 200 m cell sizes while maintaining enough power budget to overcome transient signal degradation due to mobility events.
Figure 2 A beamforming antenna simulation showing the main lobe and sidelobes
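A rough way to see why arrays claw back path loss: under an idealized coherent-combining assumption (a simplification; real arrays lose some gain to element patterns, tapering, and steering loss), receive array gain grows as 10·log10(N) for N elements:

```python
import math

def array_gain_db(n_elements: int) -> float:
    """Idealized coherent-combining gain of an N-element phased array."""
    return 10 * math.log10(n_elements)

# More elements fit in the same aperture at mmWave, recovering link budget:
for n in (4, 16, 64, 256):
    print(n, round(array_gain_db(n), 1))  # 64 elements recover ≈ 18 dB
```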
The road to 5G mobility
It will take a combination of innovations to overcome mmWave 5G design challenges. 5G cell design will change. Smaller intercell distances will be essential for coverage. Most designers are targeting 150 m to 200 m for mmWave cellular operations at 28 GHz. This is an order of magnitude denser than the 1 km to 2 km distances of today’s cells.
Due to beamforming, the very concept of cells and cell coverage areas will evolve in 5G. There are many beamforming scenarios where a distant line-of-sight beam may reach the user equipment (UE) with greater signal strength, while a closer non-line-of-sight beam is blocked. A modern CRAN (centralized radio access network) architecture can exploit this concept together with beam steering and beam tracking to provide dynamic signal coverage as a device moves through the coverage area. It is even theoretically possible for the RAN to implement coordinated multi-cell transmission to boost coverage to an individual device.
Most importantly, 5G is not a single frequency standard. The first mobile mmWave 5G devices will have multiple radios supporting both 5G FR1 and 5G FR2 frequency bands. With dynamic channel management and channel aggregation, the devices can leverage all the bandwidth available across multiple frequencies, while support for lower frequency bands will provide coverage in challenging mmWave environments. Most initial implementations will also be able to fall back to 4G. Fallback will be a primary concern in the early 5G days, before networks are upgraded to full 5G capabilities, regardless of mmWave support.
In the long run, physical infrastructure will play a major role in extending the 5G footprint. Light poles, traffic lights, and urban building mounts are all top contenders for the highly dense, line-of-sight coverage required to fully realize the potential of 5G. While challenging to design and build, we have the technology – and demand and scale deployment will bring it affordably to the world, one tiny wave at a time.
Joel Conover is senior director of Industry Solutions & Digital Marketing at Keysight Technologies.