Audio chip moves machine learning from digital to analog

Article By : Majeed Ahmad

The machine learning chip processes natively analog data and analyzes it while consuming near-zero power to infer and detect events.

The world’s first analog machine learning chip has arrived, claims Aspinity, the startup from Pittsburgh, Pennsylvania, commercializing an always-on technology that originated from research carried out at West Virginia University. The AML100 processes natively analog data and analyzes it while consuming near-zero power to infer and detect events.

Figure 1 AML100 features analog and digital interfaces for sensors, system wake-up trigger, and other external components. Source: Aspinity

The chip determines data relevance and turns on the digital core only when needed. It’s worth noting that current systems waste power analyzing data after it has already been digitized, even though more than 80% of that digitized data is usually irrelevant. “We believe that the 80% of data that gets digitized and processed in the digital world at the edge is unnecessary and is mostly wasted,” said Tom Doyle, Aspinity’s co-founder and CEO. “What’s wasted here is energy to process the data.”

Aspinity’s new chip performs machine learning and other computations in analog, before digitization, bringing machine learning closer to the source of the data. The AML100 runs machine learning algorithms to determine whether a sound is, for example, a glass break or something else.

Today, as Doyle pointed out, design engineers have to take all of the natural analog sound, digitize 100% of it, and analyze the sound information in the digital domain, only to find that there is no glass break. “That’s a huge waste of power.”

For an always-on design that detects events like a glass break, current systems monitor all of the sound in a room, processing 100% of the data. They wait for an event such as the breaking of a window, which happens extremely rarely but cannot be missed when it does. “It’s a critical dynamic regarding the always-on aspect of the design,” Doyle said.

He added that a digital architecture blindly processes all of the data at full bandwidth, monitoring sound for years with no event happening. In acoustics, where analog-to-digital converters (ADCs) operate at 16-bit resolution, the ADC and digital processor together consume 3 mA to 5 mA. And, according to Doyle, that’s the case for the very good ones.

Aspinity inserts its chip into the mix, running a particular model, in this case for glass-break detection, while the ADC and digital processor stay in sleep mode and consume virtually no power. That’s how the AML100 keeps always-on power below 100 µA, a dramatic improvement in system-level power.
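To put those figures in perspective, here is a back-of-the-envelope comparison, a minimal sketch using only the currents quoted above (a 3 mA to 5 mA digital chain versus a sub-100 µA analog front end); the exact savings depend on where the digital chain actually falls in that range:

```python
# Back-of-the-envelope always-on current comparison, using the
# figures quoted in the article.

DIGITAL_CHAIN_MA = 3.0      # ADC + digital processor, low end of 3-5 mA
ANALOG_FRONT_END_MA = 0.1   # AML100 always-on budget: below 100 uA

savings = 1.0 - ANALOG_FRONT_END_MA / DIGITAL_CHAIN_MA
print(f"Always-on current reduced by {savings:.0%}")  # ~97%
```

Even against the most efficient digital chain quoted, the reduction lands in the mid-90s percent, consistent with the 95% system-level saving Doyle cites below.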

“We are moving machine learning from the digital to the analog world, so we can do inferencing locally to make a determination,” Doyle said. “That will facilitate 95% saving in always-on power at the system level.” He also noted that while digital cores boast about their component-level power usage, they overlook the hidden ADC power and other components that draw system power.

How analog machine learning chip works

The AML100 analog machine learning chip incorporates an array of configurable analog blocks (CABs) whose functionality and parameters are programmed via software. This enables the AML100 to create a signal chain for the application at hand and to parameterize both that chain and the components within it.

This is done by loading the information into the core. The key is the company’s proprietary analog non-volatile memory, which stores parameters such as neural-network weights and filter bandwidths within each CAB. “That’s how we can build signal chains to do signal processing and use non-volatile memory to embed each of the circuits,” Doyle said.

First, we parameterize a block where a circuit such as a filter is biased to set a center frequency, he added. Second, we use non-volatile memory to hold the weights of the analog neural networks. Third, we use it to compensate for some of the chip-to-chip variation. “As a result, we are able to build a signal chain using configurable analog core, which can perform the necessary functions like glass-break detection.”
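The three parameter classes above can be pictured as a configuration record per block. The sketch below is purely illustrative: all names, values, and the structure itself are invented, since the article does not describe Aspinity's actual configuration format or tools.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical model of the three parameter classes described above:
# a bias point (e.g. filter center frequency), analog NN weights held
# in non-volatile memory, and a trim value for chip-to-chip variation.

@dataclass
class CabConfig:
    function: str                           # e.g. "bandpass_filter"
    center_freq_hz: Optional[float] = None  # bias point for a filter block
    weights: List[float] = field(default_factory=list)  # analog NN weights
    trim: float = 0.0                       # per-chip variation compensation

# Illustrative signal chain for a glass-break detector.
glass_break_chain = [
    CabConfig("bandpass_filter", center_freq_hz=5000.0, trim=0.02),
    CabConfig("feature_extractor"),
    CabConfig("neural_net", weights=[0.12, -0.4, 0.33]),
]

for cab in glass_break_chain:
    print(cab.function, cab.center_freq_hz)
```

The point of the sketch is the division of labor: the hardware blocks are fixed analog circuits, while software only writes parameters into non-volatile memory, which is what lets one chip serve different acoustic applications.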

Figure 2 The AML100 allows a single piece of hardware to be programmed for different applications, so it can be as efficient as digital. Source: Aspinity

The AML100 supports up to four analog sensors and, like a sensor hub, can take inputs from multiple microphones or accelerometers. “It’s easy to integrate into an analog sensor system as we have all the necessary interfaces in place,” said Doyle. Likewise, he added, the analog machine learning chip is easy to integrate into current digital architectures.

The chip consumes less than 20 µA when always-sensing and, as a result, is able to reduce analog data by 100x. The AML100, available in a 7 mm x 7 mm 48-pin QFN package, comes with evaluation kits, EVK1 and EVK2, that support voice detection with pre-roll management, glass break, T3/T4 alarm-tone detection, and other acoustic applications.
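A 20 µA always-sensing budget translates directly into battery life. The sketch below is a rough illustration: the 225 mAh coin-cell capacity is an assumption, not a figure from the article, and real battery life also depends on wake-up duty cycle, leakage, and temperature.

```python
# Rough battery-life estimate for the 20 uA always-sensing mode.
# The 225 mAh coin-cell capacity (CR2032-class) is an assumed value
# for illustration only; it does not come from the article.

CAPACITY_MAH = 225.0       # assumed coin-cell capacity
ALWAYS_SENSING_MA = 0.020  # <20 uA, from the article

hours = CAPACITY_MAH / ALWAYS_SENSING_MA
print(f"~{hours / 24 / 365:.1f} years of continuous sensing")  # ~1.3 years
```

Under those assumptions a single coin cell sustains continuous sensing for over a year, which is the kind of budget that makes always-on battery products plausible.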

A new layer in audio-enabled designs

Doyle told Planet Analog that with the AML100, Aspinity has focused on delivering the lowest always-on power, which could enable many new battery-powered products. “Power and data go hand in hand in voice-enabled applications,” he said. “So, if you are not processing wasteful information, you are saving a lot of power while boosting the battery life by up to 20x.”

Figure 3 The new analog layer in voice-first designs could lead to the development of more battery-operated products. Source: Aspinity

At a time when the number of sensors gathering information is growing and connected devices are on the rise, Aspinity has added a new layer to voice-enabled designs with its analog machine learning chip. It consumes very little power while performing inference on the audio data; if the data is relevant, it turns on the next digital stage and communicates to the cloud.

The first analog machine learning chip, which processes natively analog data, aims to address the inefficiency at the edge in audio-centric designs. It will be interesting to see how an analog chip’s inferencing capability finds a place in an otherwise digital-heavy design world. Given the specs it claims, this chip and others like it may encourage more battery-operated voice-first products to reach commercial realization.

This article was originally published on Planet Analog.

Majeed Ahmad, editor-in-chief of EDN and Planet Analog, has covered the electronics design industry for more than two decades.
