AI and on-edge processing for smart sensors

Article By : Roberto Mura

As the data produced by smart sensors increases, on-edge processing can be used to improve the intelligence and capabilities of many applications

In 2018, the smart sensor market was valued at $30.82 billion and is expected to reach $85.93 billion by the end of 2024, registering an increase of 18.82% per year during the forecast period 2019-2024. With the growing roles that IoT applications, vehicle automation, and smart wearable systems play in the world’s economies and infrastructures, MEMS sensors are now perceived as fundamental components for various applications, responding to the growing demand for performance and efficiency.

Connected MEMS devices have found applications in nearly every part of our modern economy, including in our cities, vehicles, homes, and a wide range of other “intelligent” systems. As the volume of data produced by smart sensors rapidly increases, it threatens to outstrip the capabilities of cloud-based artificial intelligence (AI) applications, as well as the networks that connect the edge and the cloud. In this article, we will explore how on-edge processing resources can be used to offload cloud applications by filtering, analyzing, and providing insights that improve the intelligence and capabilities of many applications.

An AI approach to machine learning

What is machine learning (ML), and how can it be used to solve complex problems in various applications? ML is a subfield of AI that deals with a system’s ability to learn and improve its decision-making based on experience instead of relying on explicit programming. The learning process starts with some combination of observations, data, or instructions. The data is processed to derive patterns that can be used to improve future decisions based on the observations and feedback the system receives. This technique allows systems to learn automatically with minimal human intervention.

The concept is extremely powerful: ML allows systems to program themselves and improve their performance through a process of continuous refinement. In conventional systems, data and programs are simply run together to produce the desired output, leaving any problems to be caught or improvements to be made by the programmer. This approach, which has been used since the dawn of computing, has sometimes been characterized by the humorous maxim “garbage in, garbage out.”

In contrast, ML systems use the data and the resulting output to create a program (Figure 1). This program can then be used in conjunction with traditional programming. This is one of many reasons that STMicroelectronics began investing in ML in 2016, as part of a strategy to create high-performance system architectures capable of addressing the increased complexity of these next-generation systems and applications.

Figure 1 ML systems use the data and the resulting output to create a program.

Thanks to the growing availability of affordable ML platforms and improved development tools, the technology is becoming widely used in applications that include activity recognition, facial recognition, and voice recognition.

There are several different approaches to ML, each best suited to a particular class of applications:

  • Supervised learning is based on feeding the system with “known” rules and features that represent the relationship of an input (an attribute of an object, for example the color red) with an output (the object itself, such as an apple).
  • In contrast, unsupervised learning only provides the inputs to the system, leaving the ML algorithm(s) to analyze them and discover separable classes and patterns.
  • Reinforcement learning-based systems constantly improve their performance using self-training techniques. They do this by taking continuous inputs (such as an autonomous car receiving input about road conditions) and feeding them to algorithms that compare current performance against the provided success criteria.

Supervised learning is the most mature, studied, and widely used approach for most ML applications, in part because it is considerably simpler than learning without supervision (Reference 4).

Whichever ML approach you choose, the key to achieving good accuracy is to fully train the algorithm by providing it with enough data containing relevant examples of the conditions it will encounter (i.e., training data). The training data must also be managed properly, a process that begins with cleaning the data of artifacts and errors and then labeling it in a consistent format so it can be used for classification. The classification process involves identifying the characteristics of an object, labeling them, and including them in the system with a set of rules that can be used to produce a prediction. For example, red and round are features in the system that lead to the apple output. Similarly, a learning algorithm can also autonomously create its own rules that apply to a large set of objects – in this case, a group of apples, which the machine discovers all share certain common properties, such as being round and red.
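To make the apple example concrete, here is a minimal sketch, assuming scikit-learn is available; the binary feature encoding and the tiny labeled dataset are invented purely for illustration. Given labeled examples, the classifier derives the “red and round leads to apple” rule on its own instead of having it hand-coded.

    from sklearn.tree import DecisionTreeClassifier

    # Toy labeled dataset (invented for illustration): each object is described
    # by two binary features, and the label is the object itself.
    #            [is_red, is_round]
    X_train = [[1, 1],   # apple
               [1, 1],   # apple
               [0, 1],   # orange
               [1, 0],   # strawberry
               [0, 0]]   # banana
    y_train = ["apple", "apple", "orange", "strawberry", "banana"]

    # The classifier learns the "red and round -> apple" rule from the examples.
    model = DecisionTreeClassifier().fit(X_train, y_train)
    print(model.predict([[1, 1]]))   # -> ['apple']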

High-quality datasets are the key to developing accurate and efficient ML algorithms. To achieve this, the data must first be collected and cleaned of bad values, outliers, and noise. Once cleaned, the data must be prepared by removing duplicates, correcting errors, and dealing with missing values. In certain cases, data normalization or data-type conversions must also be performed. If these practices are skipped or poorly executed, it will be difficult for the system to detect the true underlying patterns.
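A minimal sketch of those preparation steps, assuming pandas is available; the column names and sample values are invented for illustration. Duplicates are dropped, missing values are filled, out-of-range readings are discarded, and the remaining values are normalized.

    import pandas as pd

    # Hypothetical raw sensor log containing the kinds of defects described above.
    raw = pd.DataFrame({
        "temp_c":   [21.4, 21.4, None, 400.0, 22.1, 21.9],
        "humidity": [40.0, 40.0, 41.5,  39.0, None, 42.0],
    })

    clean = raw.drop_duplicates()                          # remove duplicate rows
    clean = clean.fillna(clean.median(numeric_only=True))  # fill missing values (median resists outliers)
    clean = clean[clean["temp_c"].between(-40, 85)]        # discard physically impossible readings
    clean = (clean - clean.min()) / (clean.max() - clean.min())  # min-max normalization
    print(clean)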

But what is the underlying importance of a well-composed ML training dataset? The simple answer is that these datasets contain the information needed to train the ML algorithm in the most accurate way. This is done by extracting the individual measurable properties or characteristics of the phenomenon being observed. These properties, which correlate the phenomena we wish to identify with the recorded data, are called features. Choosing informative, discriminating, and independent features is a crucial step in building effective ML algorithms. Feature selection and data cleaning should be the first and most important steps of model design. Good model design also implies proper model generalization, i.e., how well the concepts learned by an ML model recognize and react to features in specific test examples, as well as in data the model has not previously encountered.
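A minimal sketch of automated feature selection, assuming scikit-learn and using a synthetic dataset in which only a few of the candidate features actually carry information about the label:

    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif

    # Synthetic dataset: 10 candidate features, only 3 of which are informative.
    X, y = make_classification(n_samples=500, n_features=10, n_informative=3,
                               n_redundant=0, random_state=0)

    # Keep the 3 features that correlate most strongly with the label.
    selector = SelectKBest(score_func=f_classif, k=3).fit(X, y)
    print("selected feature indices:", selector.get_support(indices=True))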

Another important element of a successful ML implementation is “data fit,” i.e., how closely a model’s predicted values match the observed (true) values. In fact, overfitting or underfitting is a very common cause of poor ML model performance. A model that has treated the noise (meaningless data) in the training dataset as some or all of the actual signal (meaningful data) is considered “overfit” because it fits the training dataset but produces a poor fit with new datasets. One way to detect such a situation is to use the bias-variance approach, which can be represented as shown in Figure 2. Think of a basic ML algorithm like regression: the trade-off between an overly simple (high-bias) model and an overly complex (high-variance) model is evident in Figure 2. Typically, we can reduce the error from bias, but doing so may increase the error from variance, and vice versa.

Figure 2 Use the bias-variance approach to detect underfit and overfit.

Overfitting can be challenging to identify at first because it produces very good performance on the training data; the problem only becomes evident when the model is exposed to new data that was not used for training. To address this, we can split our initial dataset into separate training and test subsets. In this way, we can define overfitting as too much focus on the training set, leading to complex relations that may not hold in general for new data (the test set). Conversely, we can define underfitting as too little focus on the training set, producing an algorithm that performs poorly on both the training and the test dataset.
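A minimal sketch of that split-and-compare check, assuming scikit-learn is available and using a synthetic, deliberately noisy dataset: an unconstrained decision tree memorizes the training set, and the gap between its training and test scores is what exposes the overfit, while a depth-limited tree trades some training accuracy for better generalization.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Noisy synthetic classification problem, split into training and test subsets.
    X, y = make_classification(n_samples=1000, n_features=20, flip_y=0.2, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    for depth in (None, 3):   # unconstrained (high variance) vs. simple (higher bias)
        tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
        print(f"max_depth={depth}: train={tree.score(X_train, y_train):.2f}, "
              f"test={tree.score(X_test, y_test):.2f}")
    # A large train/test gap for the unconstrained tree signals overfitting.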

Detecting overfitting is useful, but it doesn’t solve the problem. Fortunately, we have several options to try; the most commonly used are listed here, each illustrated with a short sketch below:

  • Cross-validation is a powerful preventative measure against overfitting; it uses the training data to generate multiple mini train-test splits, which are then used to tune and validate the ML model, as shown in Figure 3.
  • Early stopping is a rule that provides guidance on how many training iterations can be run before the learner begins to overfit (see the sketch after this list).
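A minimal sketch of early stopping, assuming scikit-learn; the patience value and epoch count are arbitrary choices for illustration. The model is trained one epoch at a time, validation accuracy is monitored on a held-out split, and training stops once it has failed to improve for several epochs.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

    clf = SGDClassifier(random_state=0)
    best_score, patience, stall = 0.0, 5, 0
    for epoch in range(200):
        clf.partial_fit(X_train, y_train, classes=np.unique(y))  # one more training pass
        score = clf.score(X_val, y_val)                          # accuracy on held-out data
        if score > best_score:
            best_score, stall = score, 0                         # validation still improving
        else:
            stall += 1                                           # no improvement this epoch
        if stall >= patience:                                    # stop before the model overfits
            print(f"early stop at epoch {epoch}, validation accuracy {best_score:.3f}")
            break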

In standard k-fold cross-validation, we partition the data into k subsets, called folds. Then, we iteratively train the algorithm on k-1 folds while using the remaining fold as the test set (called the “holdout fold”).

Figure 3 In standard k-fold cross-validation, we partition the data into k subsets, called folds.
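A minimal sketch of the k-fold scheme in Figure 3, assuming scikit-learn; the model and dataset are placeholders. The data is split into k = 5 folds, the model is trained on the other four, and each fold takes one turn as the holdout set.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold, cross_val_score

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)

    # 5-fold cross-validation: each fold is used exactly once as the holdout set.
    folds = KFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=folds)
    print("per-fold accuracy:", scores.round(2), "mean:", round(scores.mean(), 2))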

The MEMS sensors framework

The past few years have featured a steadily growing number of use cases built around the IoT. Devices connected to applications residing in the cloud or on remote servers are being used in homes to manage security, energy consumption, and appliances. Factories are optimizing operations and costs through predictive maintenance. Similar types of IoT devices are helping cities implement more effective traffic control and public safety. Logistics companies are tracking shipments, managing fleets, and optimizing routes. Restaurants are ensuring food safety in refrigerators and deep fryers, retailers are deploying smart digital signage and implementing advanced payment systems, and the list goes on and on as ML systems, and the sensors they rely on (MEMS sensors in particular), can be found everywhere. A widespread usage model assumes the raw data from sensors is sent to a powerful central remote intelligence (the cloud) for analysis and interpretation, an approach that requires significant data bandwidth and computational capacity. That model also suffers in responsiveness when you consider the processing of audio, video, or image files from millions of end devices.

At its present rate of growth, demand for AI/ML services is expected to soon outstrip the ability of central cloud-based implementations to deliver them at the necessary scale and speed. To avoid “cloud-lock,” tomorrow’s designs will need to support on-edge solutions that satisfy responsiveness demands and deliver a better user experience. The on-edge paradigm relies on local microcontrollers (MCUs) that provide computing resources that can be scaled to a particular application’s requirements (Reference 5). But what advantage can we obtain from edge computing over cloud solutions? Are the cloud and application processors really not enough to create an effective IoT system?

Over the past few decades, we have observed a steady, exponential increase in the computational power of MCUs and, in many cases, a corresponding reduction in power consumption. However, in most IoT applications, the cloud-computing paradigm requires them to do little more than collect data from their sensors and pass it on to the cloud infrastructure for processing. So, although modern MCUs are typically clocked at hundreds of MHz and equipped with relatively large memories, they are idle most of the time. The areas of the graph in Figure 4 show the overall computational budget of the MCU: the pink areas indicate when the MCU is active for tasks such as networking, sensor data acquisition, updating displays, timers, and other interrupts, while the blue areas represent when the MCU is idle. With hundreds of millions of such devices deployed in the real world (and more every day), the amount of unutilized computational power is remarkably large.

Figure 4 This graph shows the overall computational budget of the MCU.

Using a distributed approach, modern on-edge computing capabilities can significantly reduce both the bandwidth required for data transfer and the processing capacity of cloud servers. It also has the potential to give users sovereignty over their data by pre-analyzing personal source data so it can be delivered to service providers with a higher level of interpretation/abstraction or anonymization.
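As a rough illustration of that idea (the window length, feature set, threshold, and payload format here are all invented, not a real protocol): instead of streaming raw accelerometer samples to the cloud, an edge node can reduce each window of data to a few summary features, or even a single classification label, before transmission.

    import json
    import statistics

    def summarize_window(samples):
        """Reduce a window of raw accelerometer magnitudes to a compact summary
        that can be sent to the cloud instead of the raw sample stream."""
        mean = statistics.fmean(samples)
        var = statistics.pvariance(samples)
        state = "moving" if var > 0.05 else "still"   # illustrative threshold
        return {"mean": round(mean, 3), "variance": round(var, 3), "state": state}

    # One window of (invented) samples: a few dozen bytes of JSON replace the
    # raw data that would otherwise be uploaded.
    window = [1.02, 0.98, 1.05, 0.97, 1.01] * 5
    print(json.dumps(summarize_window(window)))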

Figure 5 This illustration shows the on-edge vs. cloud paradigm.

At ST, we have been helping our customers meet the challenges of the IoT with a broad and growing portfolio of on-edge computing solutions for ML and deep learning (DL) implementations. ST’s latest set of AI solutions allows developers to map and run pre-trained ML algorithms on most members of the STM32 microcontroller portfolio. In addition, ST has integrated ML technology into its advanced inertial sensors to enable a variety of applications, such as adaptive activity tracking for mobiles and wearables, that deliver very high levels of performance and improved battery life.

Advanced sensors such as the LSM6DSOX inertial measurement unit (IMU) contain an ML core and advanced digital functions that give the attached STM32, or the application’s central system, the ability to transition from an ultra-low-power state to high-performance, high-accuracy AI processing for battery-operated IoT, gaming, wearable technology, and consumer electronics.
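The ML core in such sensors runs small decision trees on features (for example, mean and variance) computed over windows of sensor data inside the device (Reference 8). The sketch below is a conceptual, offline illustration of that style of classifier in Python, with invented data and feature choices; it is not the actual device workflow, which relies on ST’s configuration tools.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)

    def window_features(accel_window):
        """Features of the kind an in-sensor ML core computes over a window of
        accelerometer-norm samples: mean and variance (illustrative choice)."""
        return [np.mean(accel_window), np.var(accel_window)]

    # Synthetic training windows: low-variance "stationary" vs. high-variance "walking".
    stationary = [window_features(1.0 + 0.02 * rng.standard_normal(52)) for _ in range(50)]
    walking    = [window_features(1.0 + 0.30 * rng.standard_normal(52)) for _ in range(50)]
    X = stationary + walking
    y = ["stationary"] * 50 + ["walking"] * 50

    # A very small tree, similar in spirit to what fits inside the sensor's ML core.
    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(export_text(tree, feature_names=["mean", "variance"]))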

[Continue reading on EDN US: Enhanced on-edge paradigm]

Roberto Mura works on Machine Learning and Deep Learning applied to embedded devices at STMicroelectronics.

 

References

  1. Smart Sensors Market – Growth, Trends, and Forecast (2020 – 2025), Mordor Intelligence
  2. Machine Learning with STM MEMS Sensor, Hackster.io
  3. What is Machine Learning? A definition, Expert System
  4. Basic Concepts in Machine Learning, Machine Learning Mastery
  5. Why Machine Learning on The Edge?, Towards Data Science
  6. Orlando: An SoC With a Neural Network and the Future of IoT?, The ST blog
  7. Embedded Algorithms for Motion Detection and Processing, M. Castellano et al., Embedded World 2018
  8. STMems Machine Learning Core, GitHub
  9. eXplore Machine Learning in sensors, STMicroelectronics
  10. FP-AI-SENSING1, STMicroelectronics
  11. STM32CubeMX, STMicroelectronics
