Audio design has evolved over the years amid semiconductor technology advancements and new use cases, as this profile story shows.
An audio system once consisted of three separately designed components: players, amplifiers, and speakers. Through integration of advanced amplifiers, audio transport, digital signal processors (DSPs), and wireless technology, along with several new use cases, the audio landscape has evolved at breakneck speed.
Here is a sneak peek at some of the more recent audio technology advances and rapidly developing use cases.
Recent technology advances
DSPs, working alongside analog-to-digital converters, bring analog audio into the digital domain. Today's DSPs are far superior to those of even the recent past thanks to major advances in the resolution and sampling rate at which the audio signal is converted from analog to digital. Higher resolution yields detailed, realistic sound and greater accuracy in audio processing. Acoustic echo cancellation (AEC) also benefits from higher resolution as it eliminates the effects of the sound system interacting with room acoustics, resulting in more effective systems that can be used in less-than-perfect environments.
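To put the resolution improvement in numbers, here is a minimal sketch using the standard 6.02N + 1.76 dB rule of thumb for an ideal converter; the snippet itself is illustrative, not from the article:

```python
def ideal_snr_db(bits: int) -> float:
    """Theoretical SNR of an ideal uniform quantizer driven by a
    full-scale sine wave: SNR ~ 6.02*N + 1.76 dB for N-bit resolution."""
    return 6.02 * bits + 1.76

# Each extra bit of converter resolution buys about 6 dB of SNR,
# which is why higher-resolution converters sound markedly cleaner.
for bits in (16, 24, 32):
    print(f"{bits}-bit converter: ~{ideal_snr_db(bits):.1f} dB SNR")
```

Moving from 16-bit to 24-bit resolution, for instance, raises the theoretical ceiling from about 98 dB to about 146 dB.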
Gone is the big, heavy box that was an amplifier; the Class-D audio amplifier is a switching, or pulse-width modulation (PWM), amplifier. The audio signal modulates a PWM carrier that drives the output devices, and a low-pass filter removes the high-frequency PWM carrier. These amplifier designs are more energy efficient, run longer on a battery, and replace huge, heavy transformers with small but efficient chips. The resulting gains in audio quality have, in turn, helped boost consumer spending on audio equipment.
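The modulate-then-filter principle behind Class-D can be sketched in a few lines. This toy model, naturally sampled PWM against a triangular carrier recovered with a crude averaging filter, is an illustration only, not a production amplifier design:

```python
import math

def pwm_encode(signal, carrier_ratio=16):
    """Naturally sampled PWM: compare each audio sample (-1..1) against
    a triangular carrier running carrier_ratio times faster."""
    bits = []
    for s in signal:
        for k in range(carrier_ratio):
            tri = 4 * abs(k / carrier_ratio - 0.5) - 1  # triangle, -1..1
            bits.append(1.0 if s > tri else -1.0)
    return bits

def low_pass(bits, carrier_ratio=16):
    """Crude low-pass filter: averaging each carrier period strips the
    carrier and recovers the audio, as the output filter does in hardware."""
    return [sum(bits[i:i + carrier_ratio]) / carrier_ratio
            for i in range(0, len(bits), carrier_ratio)]

audio = [0.8 * math.sin(2 * math.pi * n / 64) for n in range(64)]
recovered = low_pass(pwm_encode(audio))  # closely tracks `audio`
```

The key point is that the output stage only ever switches fully on or off, which is where the efficiency comes from; all the audio information lives in the pulse widths.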
Figure 1 PWM facilitates better audio signal-to-noise ratio (SNR) at carrier frequencies. Source: Analog Devices Inc.
While not new, wireless microphones are no longer susceptible to interference from two-way radios or a wireless network. Currently, a microphone is paired with a receiver much like a Bluetooth device, and the link is continuously monitored for the best available channel in that frequency band; users never notice the monitoring. There's no static, no dropouts, no hassle, and the link offers embedded security that was not available before.
Digital audio can be carried over a network using audio over Ethernet, audio over IP and over digital audio interfaces such as Dante (for ultra-low latency) from Audinate, AES3 developed by Audio Engineering Society (AES) and European Broadcasting Union (EBU), and Multichannel Audio Digital Interface (MADI). Several interfaces, including HDMI and DisplayPort, carry both digital video and audio.
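As a back-of-the-envelope check on why even uncompressed multichannel audio fits comfortably on Ethernet, here is a simple payload calculation; the channel counts and rates are illustrative examples, and protocol overhead is ignored:

```python
def pcm_bitrate_bps(sample_rate_hz: int, bit_depth: int, channels: int) -> int:
    """Raw payload bit rate of uncompressed PCM audio, excluding
    any network protocol overhead."""
    return sample_rate_hz * bit_depth * channels

stereo = pcm_bitrate_bps(48_000, 24, 2)   # 2,304,000 b/s, ~2.3 Mbit/s
many   = pcm_bitrate_bps(48_000, 24, 64)  # 64 channels still under 100 Mbit/s
```

Even a MADI-class 64-channel payload at 48 kHz / 24-bit needs well under 100 Mbit/s, which is why gigabit Ethernet handles large channel counts with headroom for clocking and control traffic.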
Rapidly developing use cases
Audio for VR/AR has been a major challenge: within a virtual environment, sound either feels authentic to our senses or it doesn't, and the AR/VR experience has often come up short. Here, dynamic and realistic audio works by introducing components that simulate an immersive sound environment while performing noise suppression and human-voice extraction.
Immersive soundscapes leverage several artificial intelligence (AI)-based technologies to suppress background noise, and they are already in use in games, social media, and workplace communications. While noise-cancelling headphones regulate the soundscape on the receiving end, AI solutions eliminate distracting noise at the audio source. Removing negative effects on both ends delivers a robust sound experience.
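Real AI denoisers are trained models, but the simplest source-side idea they generalize can be shown with a toy amplitude gate. This is purely illustrative; an actual ML suppressor operates on learned spectral features, not a raw threshold:

```python
def noise_gate(samples, threshold=0.05):
    """Toy source-side cleanup: mute samples whose level falls below a
    threshold, so low-level background noise never leaves the sender."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

cleaned = noise_gate([0.01, 0.50, -0.02, -0.80])  # -> [0.0, 0.5, 0.0, -0.8]
```

The design point the toy shares with the real systems is *where* the cleanup happens: at the source, before transmission, so every listener benefits regardless of their playback hardware.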
Immersive audio and real-time spatial audio are changing how we experience sound. Spatial audio on Apple's AirPods Pro and AirPods Max headphones is made possible with High Fidelity's real-time Spatial Audio API, which lets developers build Clubhouse-style apps in just a couple of hours. As a result, consumption of sound is moving beyond enjoying music, social audio, and podcasts.
Spatial audio allows listeners to experience immersive audio and hear sounds coming from different locations, matching how our brain already processes sound.
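One cue the brain uses, and spatial audio renderers reproduce, is the interaural time difference (ITD): the tiny arrival-time gap between the two ears. A minimal sketch using Woodworth's classic spherical-head approximation follows; the head radius is a typical assumed value, not a figure from the article:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C
HEAD_RADIUS = 0.0875    # m; assumed average head radius

def itd_seconds(azimuth_deg: float) -> float:
    """Woodworth's approximation of interaural time difference:
    ITD = (r / c) * (theta + sin(theta)), azimuth 0 = straight ahead."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A sound fully to one side (90 degrees) reaches the near ear roughly
# 0.66 ms before the far ear; renderers delay one channel accordingly.
```

Delaying and attenuating one headphone channel by these amounts is the basic trick that makes a sound appear to come from a particular direction.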
Smart speakers have raised the bar for audio quality standards in the home. From coffeemakers to digital thermostats to real-time discussions via video doorbells, we’re seeing a serious uptick in audio use in the home outside of enjoying music.
The challenge is that most of these use cases involve small form factors, so powerful audio must be integrated without large speakers or batteries. Class-D amplifiers with integrated speakers and audio processing are one way to meet the constraints of these tight spaces.
Bluetooth Low Energy (BLE) and mobile processors, alongside more accessible audio interfaces, make smart home products possible. Smart amplifiers, in turn, eliminate the previous trade-offs among loudness, size, and power efficiency.
AI algorithms can be built into audio designs and continuously improved through over-the-air (OTA) updates, enhancing the driving experience. AI creates a vastly different audio design experience, and as audio gains traction in automotive applications, modern processors are incorporating more memory and more diverse functions to support it.
Together, processors and AI enable real-time responsiveness. For example, DSP Concepts and LG Electronics recently announced a collaboration to bring AI-powered sound to vehicles. Here, LG is adopting DSP Concepts' Audio Weaver platform to use AI sound to enhance the in-cabin experience. AI sound is machine learning (ML)-powered and expands stereo or surround sound content into the spatial audio domain.
Figure 2 AI sound has significantly enhanced in-cabin experience in modern vehicles. Source: DSP Concepts
Automotive audio applications also include audible alerts, external alerts, voice notes, immersive sound, and noise minimization.
Next challenge: Audio latency
While we still need a microphone, an amplifier, and a loudspeaker of sorts, each looks different today than in the past, and it all sounds much better. Reductions in size, weight, and cost are combining with new system designs that take minutes to set up.
Audio latency challenges are still on the proverbial drawing board, but it’s clear that the very recent past has been rich with major advances, which will continue. Real-time networks that deliver a broad range of coverage and AI algorithms to monitor the networks in real time will continue to chip away at latency until the challenge is solved.
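A rough way to see where latency accumulates, and where those advances must chip away, is to add up the buffering, network, and processing terms. The figures below are illustrative assumptions, not measurements:

```python
def latency_ms(buffer_frames: int, sample_rate_hz: int,
               network_ms: float = 0.0, dsp_ms: float = 0.0) -> float:
    """One-way latency estimate: capture buffer + playback buffer
    (each buffer_frames long) plus network transit and DSP time."""
    buffer_ms = buffer_frames / sample_rate_hz * 1000.0
    return 2 * buffer_ms + network_ms + dsp_ms

# e.g. 128-frame buffers at 48 kHz with 2 ms network and 1 ms DSP:
total = latency_ms(128, 48_000, network_ms=2.0, dsp_ms=1.0)  # ~8.3 ms
```

Because every term in the sum contributes, shrinking any one of them, smaller buffers, faster networks, leaner DSP, moves the total closer to the imperceptible range.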
This article was originally published on Planet Analog.
Carolyn Mathas has 15 years of journalism experience, the last seven of which have been spent on the EE Times Group’s DesignLines, including PLDesignLine, Network Systems DesignLine, Mobile Handset DesignLine, and her current sites, Industrial Control DesignLine and CommsDesign. Prior to joining UBM in 2005, she was a senior editor at Lightwave Magazine and a correspondent for CleanRooms Magazine. Mathas has a BS in marketing from University of Phoenix and an MBA from New York Institute of Technology. She lives in the Sierra foothills and claims that the pine forest, snow, mountain air, bears, and power outages balance her deadline-packed high-tech career.