A multi-tier approach to machine learning at the edge

Article by Richard Quinnell

A multi-tier approach to machine learning at the edge can help streamline both development and deployment for the AIoT.

Implementing artificial intelligence (AI) at the edge requires trading off processing cost against performance, along with considerable development effort. Throwing in machine learning to make the edge device more adaptive compounds the problem. A multi-tier machine learning approach at the edge can help streamline both development and deployment for the artificial intelligence of things (AIoT).

There are many challenges to implementing AI at the edge, not the least of which is the need to balance the hardware’s cost against the compute performance it offers. One tradeoff often considered in this balancing act is to include little or no machine learning (ML) in the edge device, counting on vendor-developed algorithms to handle the AI task.

Experience is showing, however, that an AI algorithm’s field performance can vary considerably due to local variations in the environment. Developing a robust AI algorithm can therefore require considerable training effort during development to ensure that the results tolerate a wide range of conditions. Even then, field experience may still reveal unexpected variations that compromise performance in some installations, requiring additional training as systems proliferate. These factors considerably complicate both the development effort and the cost of broad deployment.

Implementing a degree of ML in the edge device can help mitigate the problem. If the device is capable of learning to handle its specific environmental variations, then the initial AI training effort need not be quite so comprehensive. But implementing a robust ML algorithm at the edge aggravates the hardware cost/performance tradeoff problem. The alternative, having the device report data back to a cloud-based ML effort, introduces communications and bandwidth complications.

Striking the right balance among hardware cost, processing power, communications capacity, AI training effort, and AI field performance in a design targeting wide deployment is thus currently a major impediment to implementing the AIoT. At the recent Embedded Vision Summit, I ran across an interesting presentation on an approach that may offer a solution. The presentation, by Tim Hartley, vice president of products and marketing at machine vision developer SeeChange Technologies, was on automated neural network model training. In it, Hartley described a multi-tier approach to ML for vision AI systems that seems applicable to any sensor-based AI system, as shown in Figure 1.

Figure 1 Setting multiple thresholds can help identify data that provide opportunities for refining AI algorithms in a second ML program. Source: Rich Quinnell

In a basic AI system, the edge algorithm will typically have a single threshold for making decisions based on the sensor data. Passing the threshold results in one action while not passing the threshold results in another, and the device’s ML program can use the results to further refine its algorithm. This approach is limited, however, by the local ML program’s performance. It also requires careful threshold setting to avoid false triggers while also avoiding failing to trigger when appropriate.
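
As a rough illustration, here is what a single-threshold decision loop might look like in Python. The threshold value, function names, and scores are hypothetical, not taken from any particular device or from Hartley’s presentation.

```python
# Minimal sketch of a single-threshold edge decision (illustrative only).
# A confidence score above the threshold triggers an action; anything else
# is ignored. The local ML program could log (score, action, outcome)
# tuples to refine its model over time.

RESPONSE_THRESHOLD = 0.8  # assumed value; tuning depends on the application

def handle_sample(score: float) -> str:
    """Classify one sensor-derived confidence score against a single threshold."""
    if score >= RESPONSE_THRESHOLD:
        return "trigger"   # e.g., raise an alert or actuate
    return "ignore"        # below threshold: take no action

for s in (0.95, 0.40, 0.79):
    print(s, handle_sample(s))
```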

One way to bypass this limitation is to have a second, more powerful ML program examine the data. The trick is to limit the amount of data sent on to this second program. If the installation uses multiple devices, for instance, an edge server can compare results from several devices to see if there are any events that passed threshold on one device but not on another. The server can then process such “events of interest” with an enhanced ML program to further refine the algorithms for the attached devices.
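
A hypothetical sketch of that cross-device check might look like the following; the device names, scores, and threshold are illustrative only.

```python
# Sketch of an edge server cross-checking results from several devices
# observing the same event: if it passed threshold on one device but not
# on a peer, it is flagged as an "event of interest" for the server's
# more powerful ML program.

RESPONSE_THRESHOLD = 0.8

def is_event_of_interest(scores_by_device: dict[str, float]) -> bool:
    """Return True when devices disagree about whether the event passed threshold."""
    passed = [s >= RESPONSE_THRESHOLD for s in scores_by_device.values()]
    return any(passed) and not all(passed)  # disagreement between devices

same_event = {"camera_a": 0.91, "camera_b": 0.62}
if is_event_of_interest(same_event):
    print("forward to enhanced ML program for review")
```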

Alternatively, designers can provide the edge device AI with a second threshold to create an “area of interest” for results—events that don’t reach a response threshold but are above a rejection threshold. The devices can then send any events falling within this area of interest to a cloud-based ML program—one more powerful than the device’s program—that can use them to refine the device’s AI algorithm.
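
The dual-threshold routing could be sketched along these lines, with the response and rejection thresholds chosen purely for illustration.

```python
# Sketch of dual-threshold routing on the edge device: confident detections
# are handled locally, confident rejections are discarded, and the band in
# between (the "area of interest") is queued for a more powerful
# cloud-based ML program. Threshold values are assumptions.

RESPONSE_THRESHOLD = 0.8
REJECTION_THRESHOLD = 0.3

def route(score: float) -> str:
    if score >= RESPONSE_THRESHOLD:
        return "act_locally"    # confident detection: handle on the device
    if score >= REJECTION_THRESHOLD:
        return "send_to_cloud"  # ambiguous: area of interest for cloud ML
    return "discard"            # confident rejection: no further processing

for s in (0.9, 0.5, 0.1):
    print(s, route(s))
```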

Hartley pointed out in his presentation that there is an opportunity for an additional tier in processing data. If the cloud-based ML program also uses two thresholds, then it can identify events of interest to be referred to a human operator for analysis. This approach allows the human’s analysis to help refine the cloud-based ML, while the cloud-based ML provides analysis to refine the device’s ML.
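
Extending the same pattern to the cloud tier might look roughly like this; again, the threshold values and function names are assumptions rather than anything specified in the presentation.

```python
# Illustrative sketch of the cloud tier applying its own pair of thresholds:
# confident results feed back to refine the edge device's model, while
# ambiguous results are referred to a human operator whose analysis in turn
# refines the cloud-based ML.

CLOUD_RESPONSE_THRESHOLD = 0.85
CLOUD_REJECTION_THRESHOLD = 0.4

def cloud_route(cloud_score: float) -> str:
    if cloud_score >= CLOUD_RESPONSE_THRESHOLD:
        return "refine_device_model"  # confident result refines the edge AI
    if cloud_score >= CLOUD_REJECTION_THRESHOLD:
        return "refer_to_human"       # ambiguous: human analysis refines cloud ML
    return "discard"

print(cloud_route(0.6))  # -> "refer_to_human"
```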

This multi-tier approach holds promise for allowing AIoT systems to leverage ML to refine their performance while balancing the cost of local processing against the need for communications bandwidth. It also promises to allow the creation of systems in which AI algorithm refinement becomes an automated process, so that widespread deployment does not result in a burdensome development effort. The team can train the AI for initial deployment, then allow the AIoT system to refine the AI on its own.

This article was originally published on EDN.

Rich Quinnell is a retired engineer and writer, and former Editor-in-Chief at EDN.
