Connecting AI to smart sensors at the edge

Article By : Don Dingee

A well-designed and adequately trained AI system finds insights more quickly while freeing humans from the mundane task of sorting through volumes of data.

Few things get more coverage in computing today than artificial intelligence (AI). Some people see AI as a magic bullet that will solve problems never solved before, and that overstates its capability. But cut through the hype, and it has practical applications. A well-designed and adequately trained AI system finds insights more quickly while freeing humans from the mundane task of sorting through volumes of data.

There are more benefits of connecting AI to smart sensors at the edge. But where do makers start? As usual, we’ll try to simplify this.

Changing how rules get made

Object recognition is a core task for AI. In some applications, such as self-driving vehicles, massive amounts of processing are needed to recognize what's happening in a complex scene. Spotting objects and their movement in a detailed scene with only seconds to decide is a big challenge. Reduce the recognition requirements to a few types of objects with a smaller set of rules, and processing gets easier.

Creating those rules, or algorithms, is a critical step. In a traditional system, the act of programming is defining a set of rules for handling data: data comes in, rules are applied, and answers come out. However, defining the rules for recognizing even a simple object and expressing them in a traditional programming language can get complicated.

Machine learning, at the core of many AI systems today, reverses that paradigm. In the training phase, the machine-learning system is shown data along with a set of answers, and rules are created from them. The AI inference phase then runs those rules against new incoming data. If the training answers are complete enough and the rules are computationally robust, inference accuracy can be excellent.
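A minimal sketch makes the reversal concrete. Instead of hand-coding a rule, the toy trainer below derives one (a decision threshold) from labeled examples, then the inference step applies it to new data. All names and data here are hypothetical, chosen only to illustrate the two phases.

```python
# Training phase: derive a rule (here, a threshold) from answers (labels),
# rather than writing the rule by hand.
def train_threshold(samples, labels):
    pos = [s for s, l in zip(samples, labels) if l == 1]
    neg = [s for s, l in zip(samples, labels) if l == 0]
    # Place the threshold midway between the two class means.
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

# Inference phase: apply the learned rule to new incoming data.
def infer(sample, threshold):
    return 1 if sample > threshold else 0

# Labeled training data: readings near 8 are class 1, near 2 are class 0.
readings = [1.0, 2.0, 3.0, 7.0, 8.0, 9.0]
labels   = [0,   0,   0,   1,   1,   1]

t = train_threshold(readings, labels)
print(infer(6.5, t))  # → 1
```

Real machine learning replaces the single threshold with millions of learned parameters, but the split between a training phase and an inference phase is the same.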

(Image source: Coursera)

The “object” of AI inference

In many systems, the object of interest sits in a camera's two-dimensional image. Vision processing can be intensive, especially if the image is high resolution and an object moves with changes in orientation and lighting. For AI inference, many vision systems resort to a GPU configuration, with software distributing a convolutional neural network (CNN) across multiple cores and threads. Others use system-on-chip (SoC) hardware optimized for vision processing.

What makes up an "object" varies with the type of sensor involved. Many objects arrive as a one-dimensional data stream digitized by an A/D converter: an audio snippet captured from a microphone, vibration sensed by an accelerometer, or a sample of gas levels from an air quality sensor. If some pattern in the observation matches a condition of interest, machine learning can detect it.
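As a sketch of what "a pattern in a 1-D stream" can mean in practice, the hypothetical code below computes short-window RMS energy over a synthetic accelerometer stream and flags windows containing a vibration burst. This is the kind of lightweight feature extraction that could precede a small classifier; the stream, window size, and threshold are all invented for illustration.

```python
import math

def window_rms(samples, size):
    """RMS energy over each non-overlapping window of `size` samples."""
    out = []
    for i in range(0, len(samples) - size + 1, size):
        w = samples[i:i + size]
        out.append(math.sqrt(sum(x * x for x in w) / size))
    return out

# Synthetic A/D stream: quiet readings, then a burst of vibration.
stream = [0.1] * 8 + [2.0, -2.0] * 4

rms = window_rms(stream, 4)
burst = [r > 1.0 for r in rms]  # a simple learned-or-chosen "rule"
print(burst)  # → [False, False, True, True]
```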

An AI inference system only needs enough processing to make decisions while keeping up with the incoming data rate—the challenge of real-time determinism. If data rates are low and CNNs simple, many microcontrollers (MCUs) have enough memory and multiply-accumulate (MAC) capability to handle real-time data.
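That MAC capability matters because the inner loops of CNN inference are dominated by multiply-accumulate operations. A 1-D convolution, sketched below in plain Python for illustration, is representative of the kernels an MCU's MAC hardware accelerates.

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution via explicit multiply-accumulates."""
    n = len(signal) - len(kernel) + 1
    out = []
    for i in range(n):
        acc = 0.0
        for j, k in enumerate(kernel):
            acc += signal[i + j] * k  # one MAC per kernel tap
        out.append(acc)
    return out

# Example: an edge-detecting kernel applied to a short sample stream.
print(conv1d([1, 2, 3, 4], [1, 0, -1]))  # → [-2.0, -2.0]
```

An MCU with hardware MAC instructions (or a DSP extension) executes each inner-loop step in one cycle or less, which is what makes small CNNs feasible in real time.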

Several benefits follow from pushing machine learning to the edge, whether in a sensor paired with an external MCU or in an integrated smart sensor package. Sending less data to a host for AI inference processing saves power and reduces network bandwidth demand. Decisions can be made locally, improving determinism and privacy since raw data stays out of the cloud.

Getting started with machine learning on MCUs

Many engineers are working on embedding machine learning on MCUs, a concept called tinyML: routines explicitly optimized for the limited resources of an MCU. Many open-source code libraries exist, most sharing the same workflow. Using a framework like TensorFlow, an AI model is created and trained on a host system with a GPU, converted to a lighter model that removes GPU dependencies, and finally converted to C++ source for an MCU. The TensorFlow website has a good overview of the development process in a writeup titled TensorFlow Lite for Microcontrollers. A bit of Googling will turn up many other resources.
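One key part of "converting to a lighter model" is quantization: replacing 32-bit float weights with small integers. The sketch below shows the idea with a simplified symmetric int8 scheme; real converters such as TensorFlow Lite's do considerably more (per-channel scales, zero points, calibration), so treat this only as an illustration of the principle.

```python
def quantize(weights):
    """Map float weights to int8-range values with one scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer form."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.02, 1.0]      # hypothetical trained weights
q, s = quantize(w)
print(q)                          # → [50, -127, 2, 100]
print(dequantize(q, s))           # close to the original floats
```

Storing 8-bit integers instead of 32-bit floats cuts model size by roughly 4x and lets the MCU use integer MAC instructions, at the cost of a small accuracy loss.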

Smart sensors are also turning up with the AI integration already done. For example, Bosch Sensortec has released the BME688 air quality sensor, and the sensor supplier has developed its own training studio for creating AI models. Essentially, the part can recognize gas mixtures signaling conditions like spoiled food in a refrigerator.

Another example is an open-source project whose goal was to recognize a handwritten digit in a scanned image, using the MNIST database for training. Often, training data for an application is already available in open-source databases that makers can leverage.
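To illustrate (this is not the cited project's code), even a nearest-centroid classifier can learn digit-like classes from a labeled image set. The toy 2x2 "images" and labels below stand in for a real database such as MNIST; a CNN does the same labeled-examples-to-classifier step far more powerfully.

```python
def centroid(images):
    """Average image (pixel-wise mean) of a class's training examples."""
    n = len(images)
    return [sum(img[i] for img in images) / n for i in range(len(images[0]))]

def classify(img, centroids):
    """Pick the class whose centroid is nearest (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda c: dist(img, centroids[c]))

# Toy training set: class "0" is dark left column, "1" dark right column.
train = {
    "0": [[1, 0, 1, 0], [0.9, 0.1, 0.8, 0.0]],
    "1": [[0, 1, 0, 1], [0.1, 0.9, 0.0, 0.8]],
}
cents = {label: centroid(imgs) for label, imgs in train.items()}
print(classify([0.8, 0.2, 0.9, 0.1], cents))  # → "0"
```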

There are many possibilities for connecting AI to smart sensors at the edge. The field is advancing rapidly, providing tools makers can use to develop applications on MCUs without being experts in CNNs. Opening AI applications to patterns in data acquired via A/D conversion creates exciting new possibilities.

This article was originally published on Planet Analog.

After spending a decade in missile guidance systems at General Dynamics, Don Dingee became an evangelist for VMEbus and single-board computer technology at Motorola. He writes about sensors, ADCs/DACs, and signal processing for Planet Analog.

