Audio standards behind auto, smart-home designs

Article By : Carolyn Mathas

Matter enables smart-home devices to "talk" to each other locally without using the cloud.

Many would say that audio technology advances in smart homes and automotive are the culmination of diverse technologies and a growing demand based on the “wow” factor. That’s true, but one enabler is also pushing the envelope: standards.

Today’s smart devices—Alexa, Siri, and Google Assistant—automate, answer, and protect. They’ll talk to you, or at least, respond to your voice. What’s been missing, however, is their ability to talk to each other. Now those adhering to the new Matter standard can.

What is Matter?

So, what is Matter? It’s a cross-compatibility standard that enables smart-home devices to “talk” to each other locally without using the cloud. With this new smart-home interoperability and connectivity standard, unveiled in 2022, adding support to an already-installed “smart” device or hub may take just a quick firmware upgrade. Matter is not a platform but an IP-based technology that lets devices communicate much the same way we use a variety of devices to connect to the Internet.

If you’re looking to finally put more smarts into your home, Matter certification should be seriously considered before purchasing anything. And if you’re designing these devices, what are you waiting for? If the hype at CES 2023 was any indicator, you’d better have a plan.

At CES, the standard was a key highlight in keynotes and at exhibitor booths, and you couldn’t miss it at Apple, Google, Samsung, and Amazon displays.

Matter works over Bluetooth, Thread, Wi-Fi, and Ethernet. Proprietary protocols will likely use dedicated bridges to access Matter. Currently, devices that support Matter will work with Amazon’s Alexa, Apple Home, Google Home, and Samsung SmartThings; the voice assistants from these four products will be able to control all smart-home platforms supporting Matter. Applications include thermostats, blinds, lightbulbs, switches, door locks, door and window sensors, skylights, smoke alarms, lighting products, refrigerators, TVs, smart plugs and, of course, controllers and hubs.

Most Matter-certified products and supporting documentation will roll out this year, with products likely to be shelf-ready by the third quarter of 2023. Until then, and until adoption is sufficient, Matter’s impact may take some time to materialize. Finally, not all companies will choose to update their legacy products to support the Matter standard. So, while Matter addresses compatibility today and in the future, backward compatibility may not be all that smart.

The most recent Matter-certified product announced is Apple’s second-generation HomePod, which Apple claims delivers incredible audio quality, Siri capabilities, and security.


Figure 1 HomePod is packed with Apple innovations, Siri intelligence, and smart-home capabilities. Source: Apple

HomePod features advanced computational audio for groundbreaking listening and support for immersive Spatial Audio tracks. Plus, it manages everyday tasks and controls the smart home while allowing users to create smart-home automations using Siri. Sound Recognition listens for smoke and carbon monoxide alarms, while temperature sensing can close the blinds or turn on a fan automatically when a room reaches a set temperature.

On the automotive front

The MPEG-H Audio standard was behind many hyped technologies displayed at CES this year. Specified as ISO/IEC 23008-3, MPEG-H Part 3, it’s an open audio coding standard developed by the ISO/IEC Moving Picture Experts Group (MPEG). The standard supports coding audio as audio channels, audio objects, or ambisonics, or any combination of the three. Its claim to fame is that it delivers greater flexibility and audio control: signaled metadata carries all the information needed to enable interactivity. It’s the move from audio channels to object-based systems, however, that is recent. The open ISO standard supports immersive sound and enables users to adjust the audio to meet personal preferences.
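To make the metadata idea concrete, here is a minimal sketch of how per-object metadata could enable the kind of user personalization described above, such as boosting dialogue. The field names and gain limits are illustrative assumptions, not the actual MPEG-H bitstream syntax:

```python
from dataclasses import dataclass, replace

@dataclass
class AudioObject:
    # Illustrative metadata fields; not the real MPEG-H syntax.
    name: str
    azimuth_deg: float    # horizontal position around the listener
    elevation_deg: float  # vertical position
    gain_db: float        # default mix gain set by the producer

def apply_user_preference(obj: AudioObject, offset_db: float,
                          lo: float = -12.0, hi: float = 12.0) -> AudioObject:
    """Personalization: nudge one object's gain (e.g., boost dialogue),
    clamped to a range the content producer allows."""
    new_gain = max(lo, min(hi, obj.gain_db + offset_db))
    return replace(obj, gain_db=new_gain)

dialogue = AudioObject("dialogue", azimuth_deg=0.0, elevation_deg=0.0, gain_db=0.0)
boosted = apply_user_preference(dialogue, +6.0)   # user turns dialogue up
```

Because the gain lives in metadata rather than being baked into a channel mix, the same stream can be rendered differently for each listener.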


Figure 2 Electric vehicle (EV) drivers will be able to ask Alexa to help them find, navigate, and pay at public charging spots through a simple voice request such as, “Alexa, find an EV charging station near me.” Source: Business Wire

There were many design examples on the CES floor, including Amazon’s partnership with Josh.ai, Sony’s 360 Reality Audio format, and Panasonic’s EV Audio. What are drivers gaining? An upgrade to Panasonic’s SkipGen eCockpit infotainment system lets users access voice assistants from both Apple CarPlay and Alexa. An example of what this will soon enable is Amazon and EVgo’s announcement of Alexa-enabled search to find a charging station and pay for charging at certain stations in the United States. MPEG-H Audio also enables advanced warning sounds with spatially rendered signaling. An advantage for manufacturers is that just one warning sound needs to be produced across all their car models.

Typically, automotive audio follows a channel-based approach whereby the spatial information of the audio scene is mixed and saved in channels for a dedicated loudspeaker setup. The problem is that the same loudspeaker arrangement must then be used for both production and playback. When immersive audio arrived on the scene, this became untenable, so object-based audio (OBA) is now overcoming the challenge.

MPEG-H Audio as an object-based standard was spearheaded by the Fraunhofer Institute for Integrated Circuits IIS. Object-based audio covers the storage, transmission, and reproduction of audio content. An audio object is a virtual sound source that is interactively placed in space. The audio signals plus their corresponding metadata are called an audio scene. These scenes are independent of the loudspeaker setup: they are produced only once and, unlike channel-based productions, stored in that setup-agnostic form.

An object is defined by an audio signal and corresponding metadata, and a metadata stream includes the properties of all audio objects, such as position, type, and gain. The integration of object-based audio into today’s car loudspeaker systems is, in comparison, simple. Parallelization increases implementation speed: each rendering unit is configured using the same data, and the results of all units are mixed into the final loudspeaker signals.
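The rendering step described above can be sketched as follows. This is a toy panner, not the MPEG-H renderer: each loudspeaker is weighted by its angular proximity to the object’s position, the weights are power-normalized, and every object is mixed into per-speaker buffers. The key point it illustrates is that the same object scene renders to any loudspeaker layout without a remix:

```python
import math

def speaker_gains(obj_azimuth_deg, speaker_azimuths_deg):
    """Toy panning law: weight each loudspeaker by angular closeness to
    the object's position, then normalize for constant total power."""
    weights = []
    for sp in speaker_azimuths_deg:
        diff = abs((obj_azimuth_deg - sp + 180.0) % 360.0 - 180.0)  # shortest angle
        weights.append(max(0.0, math.cos(math.radians(diff))))
    norm = math.sqrt(sum(w * w for w in weights)) or 1.0
    return [w / norm for w in weights]

def render(objects, speaker_azimuths_deg, n_samples):
    """Mix every (signal, azimuth) object into per-speaker output buffers."""
    out = [[0.0] * n_samples for _ in speaker_azimuths_deg]
    for signal, azimuth in objects:
        gains = speaker_gains(azimuth, speaker_azimuths_deg)
        for s, g in enumerate(gains):
            for i in range(n_samples):
                out[s][i] += g * signal[i]
    return out

# The same scene rendered to two different layouts -- no remix needed:
scene = [([1.0, 1.0], -30.0)]                      # one object at -30 degrees
stereo = render(scene, [-30.0, 30.0], 2)           # two-speaker layout
quad = render(scene, [-45.0, 45.0, -135.0, 135.0], 2)  # four-speaker layout
```

A real in-car renderer would also account for elevation, distance, and cabin acoustics, but the separation of scene from layout is the same.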

No matter the application—automotive, smart homes, or others—there will be a flurry of new audio products on the scene within a relatively short period of time.


This article was originally published on Planet Analog.

Carolyn Mathas has 15 years of journalism experience, the last seven of which have been spent on the EE Times Group’s DesignLines, including PLDesignLine, Network Systems DesignLine, Mobile Handset DesignLine, and her current sites, Industrial Control DesignLine and CommsDesign. Prior to joining UBM in 2005, she was a senior editor at Lightwave Magazine and a correspondent for CleanRooms Magazine. Mathas has a BS in marketing from University of Phoenix and an MBA from New York Institute of Technology. She lives in the Sierra foothills and claims that the pine forest, snow, mountain air, bears, and power outages balance her deadline-packed high-tech career.
