Ron Martino, senior vice president and general manager of the edge processing business of NXP Semiconductors, talks about enabling edge intelligence.
A key topic today, and one clearly covered in many talks at embedded world 2021, is the widespread adoption of edge computing to enable edge intelligence. Some projections suggest that 90% of all edge devices will use some form of machine learning or artificial intelligence by 2025.
What are the issues around enabling this edge intelligence, and how do you make it happen? This is the topic of conversation in a recent podcast with Ron Martino, senior vice president and general manager of the edge processing business at NXP Semiconductors. While you can listen to the full podcast here, we present some extracts from the discussion in this article.
Defining edge computing
At its core, edge computing is efficient processing closer to the user, bringing quicker insights from data. Can you define edge computing in the context of how NXP addresses it?
Martino: Edge computing, put simply, is distributed local computation and sensory capability. It effectively interprets, analyzes and acts on sensor data to perform a set of meaningful functions. It doesn’t try to be a replacement or an alternative to the cloud; it is complementary. Classically, in voice assistants for example, a lot of data is sent to the cloud, where the higher compute capacity is used to enhance the experience. Edge computing is evolving to become smarter, and then more intelligent. Smart edge computing balances the use of local computing with central or cloud computing. As this evolves even further, there’s more intelligence, where we want the end devices to have more capability to do the interpretation and the analysis, and then make decisions locally.
Can you give some examples of how edge computing is being used to enable better productivity and safety?
Martino: In the case of productivity, a great example is enhanced workforce: leveraging edge processing or wearable devices enabled with vision and machine learning, where a worker can diagnose a problem, and more rapidly repair it, whether it’s in a home or in a factory.
Intelligent edge devices enhance safety by recognizing various danger signals: an alarm, a person who has fallen, or glass breaking, and then determining the issue using additional sensor information and computation, whether that comes from the radar sensing devices NXP is developing, from vision capability, or just from interpretation of audio input into the device.
If we move toward greener, more energy-conscious devices, one concept to address is vampire power, where devices are plugged in doing nothing, yet still consuming power.
We are also moving to a concept of ‘aware edge’, where devices respond in a more human-like way. They start to understand their environment, they aggregate inputs and interact with other devices to collect information and understand the context of the situation, and then make decisions accordingly. A simple practical example is traffic management: local capability that can interpret crowds and different congestion points and optimize the situation locally, by observing and sensing the number of cars and the conditions, to make driving more efficient so you don’t waste time.
The technology pieces enabling edge intelligence
From a technology perspective, what are the pieces that make up that intelligent edge and aware edge?
Martino: Let’s start with the foundation. You need to have compute platforms, and they need to scale. They need to be energy efficient. Unlike in the past, now it’s really about multiple independent heterogeneous compute subsystems. That’s basically having a GPU, a CPU, a neural net processing unit, a video processing unit and a DSP.
How do you optimize these different hardware accelerators and compute devices for a given end application? That’s where NXP really excels, in having a scalable range of compute with all these other elements. Then there’s the integration of optimized hardware accelerators targeted at voice applications and human-machine interaction that involves both vision and voice together, done in a way that truly achieves ultra-low leakage, with operational modes that can adjust to optimize energy usage, even with the large on-chip memories some of these workloads require.
This goes on to optimization of machine learning capability, security integration with the highest levels of coverage across many different attack surfaces, efficient connectivity, efficient energy usage, and open standards. It can also take advantage of technology NXP is offering, such as high-accuracy distance measurement using our UWB technology to pinpoint the physical location of a given person or tracking device.
The final thing is wrapping this all in a seamless user experience because if it’s not easy to use, and it’s not natural to use, then it won’t be used. So, getting a seamless, comfortable experience in place is absolutely critical.
How does a user get to build such solutions?
Martino: We offer everything from a basic processor or microcontroller all the way up to a reference platform pre-optimized for local voice, vision, detection and inference capability, or a combination of those. We put together reference platforms that a customer can purchase, such as our RT family of devices. We have a face recognition offering that can be purchased; that’s a fully enabled and designed system that a customer can take as a starting point and modify to their needs, whether they want to specialize it or brand it.
Technology differences in consumer and industrial use cases
Most in industry will agree that intelligent devices and systems in our homes and work are gaining traction. What are some of the technology differences between the plain old IoT, as you would say, and the industrial markets?
Martino: The connectivity standards, environmental requirements, longevity needs (which can be 15-plus years), and safety requirements are much more extensive and demanding in the industrial space than in the [consumer] IoT world. One area NXP is investing in is time-sensitive networking (TSN): the integration of both the MAC and the switch into a whole set of devices that can support daisy-chained setups of multiple machines and support endpoint functions, leveraging this more deterministic TSN backbone, which also supports the much higher data rates and throughput that many of the legacy standards are converging on.
Compare this to the [consumer] IoT markets. There’s a much broader need for extreme energy efficiency, greater use of voice HMI, wireless connectivity, and shorter life cycles for applications like smart homes and wearables. On the wearable front, you want a great user experience, but you want the longest battery life. Optimizing these edge devices to perform their functions and then shut down to preserve battery life is very important, and that really rich user experience has to be delivered in the most efficient way, because that’s when it’s burning the battery.
The interoperability challenge
In the smart home area, it’s often difficult to take a product from one company and make it work with devices from others. How is NXP trying to change that smart home wireless interoperability challenge?
Martino: Looking at smart home devices as an example, the standards and interoperability landscape is very fragmented. We have a standards project named CHIP, or “Connected Home over IP”. It has NXP, as well as a number of other industry leaders, working together to consolidate, not on a proprietary standard, but on an open standard common across all of industry, one that allows people to build on it.
The focus of this project is to build on the many years of work that NXP and others have done around ZigBee, Thread and the ZigBee Alliance, and then add an upper-layer capability, leveraging the technologies that the Amazons, the Apples and the Googles have rolled out, to build this open standard which we reference as CHIP, and establish this common linkage between devices. When you plug something in, it will be very simple to connect.
NXP’s plan is to have actual products in the market later this year, with the first versions of the CHIP standard.
Addressing the complexity and cost of adding edge intelligence
Turning to machine learning and AI on the edge: it all sounds rather complex and costly, right?
Martino: For many, when you talk about AI and ML, it’s a very complex abstract concept. There are projections that 90% of all edge devices will use some form of machine learning or artificial intelligence by 2025. We truly believe that’s the case, and we are rolling out products that are optimized around this. It’s a combination of what we do to optimize the hardware that’s used, the processors and the microcontrollers to run this capability. For the end user, it’s more about how complex is it to deploy a practical ML that’s meaningful to the end use case.
There are many companies that want to collect data and create their own models. What NXP is focused on is, how do we enable a cloud agnostic capability that allows flexibility in a simple user interface or development environment?
That’s what we recently announced with our investment in Au-Zone; in 2021 we will be rolling out an enhanced development environment where you can choose the type of content that you bring in: your own data, models that you have, or models that you have chosen to acquire through your favorite source or cloud provider. You bring that in, optimize it, and deploy it onto the end device, because it’s that optimization step that matters.
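One common form of the optimization Martino describes is quantizing a trained model so it fits constrained edge silicon. As a minimal sketch, assuming simple per-tensor affine (int8-style) quantization and made-up weight values, here is how float32 weights get mapped to 8-bit integers, shrinking the model roughly 4x and letting integer-only MCU hardware run the math:

```python
# Minimal sketch of post-training affine quantization: float32 weights
# are mapped onto the integer range 0..255 with a scale and zero point,
# then restored (approximately) at inference time. Values are illustrative.

def quantize(weights):
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0           # float step per integer step
    zero_point = round(-lo / scale)            # integer that represents 0.0
    q = [max(0, min(255, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.2, -0.4, 0.0, 0.7, 2.3]          # hypothetical layer weights
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)

# Each restored weight is within half a quantization step of the original.
assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(weights, restored))
```

Real toolchains add per-channel scales and calibration data, but the cost/accuracy trade-off is the same: coarser integers mean cheaper silicon, at the price of a bounded rounding error.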
How does machine learning increase the cost of the end solution?
Martino: If you have a very complex, heavy machine learning model or capability, that’s going to require much higher computational capability, and the higher the computational capability, the more expensive it’s going to be. You can choose to do that on an edge processor, or you can choose to deploy that to a cloud. When we try to tune these use cases or these models for a specific use case, you can then become very efficient, and then you can leverage traditional technology scaling and Moore’s law to really add hardware acceleration specific for ML, that doesn’t take up much silicon area.
It becomes a small cost adder, but an optimal way to perform the given work that you want, whether it’s detecting people and identifying who they are locally, as one example. You can do that very efficiently on a microcontroller now, optimized with a very efficient silicon implementation. Then you can make it scalable as well with some of our processors, where you can scale to an external, higher-performance neural net processor, or work in a complementary fashion with the cloud. Again, all of these have a cost, and it depends on the complexity of the task, but the capability you roll out can range from very efficient to very complex.
There’s increasing concern about biases in ML models and AI. What is the role of industry in helping ensure ethical AI?
Martino: It needs clear transparency of operation, whether that’s simple concepts like “I want to know when it’s listening to me or watching me,” or, just as important, how it reaches its conclusions before taking an action. It also needs security standards to make sure that systems are secure and don’t have backdoor access or other vulnerabilities in their attack surface that would let somebody access an AI system and influence it to do certain things, or make certain decisions, favorable to the person attacking the system.
How do you implement AI systems that don’t have a preset bias that is wrong on principle? At NXP, we’ve rolled out an AI ethics initiative, which underscores our commitment to ethical development. Within that, we talk about doing good, about preserving human-centric AI, which is really about avoiding subordination to, or coercion by, an AI system, as well as transparency, high standards of scientific excellence, and trust in AI systems.
What challenges do you see remaining in implementing edge technology?
Martino: This is an ongoing activity and there are many areas for continued optimization. Energy efficiency, and driving and leveraging energy harvesting concepts, and near threshold operation of devices is a continued investment by many in the industry. Security and the need for protecting data and continuing to advance this is an ongoing activity.
This includes investment in silicon-specific signatures, in different types of cryptography, and in ways of performing computation in a protected way, such as homomorphic encryption, which performs calculations on encrypted data without ever decrypting it. Then there is connectivity, with its throughput, latency and power-consumption requirements; we’ll continue to optimize connectivity and bring it into these edge devices in more and more efficient ways.
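As a concrete illustration of performing calculations in an encrypted environment without ever decrypting, here is a toy sketch of the Paillier scheme, a classic additively homomorphic cryptosystem: multiplying two ciphertexts yields a ciphertext of the *sum* of the plaintexts. The tiny hard-coded primes are purely illustrative and insecure; real deployments use keys of 2048+ bits.

```python
import math
import random

# Toy Paillier cryptosystem: adding plaintexts by multiplying ciphertexts,
# with no decryption in between. Demo-sized primes only -- NOT secure.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                                  # standard simple choice of generator
lam = math.lcm(p - 1, q - 1)               # private key component

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)        # precomputed decryption constant

def encrypt(m):
    r = random.randrange(2, n)             # fresh randomness per ciphertext
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(12), encrypt(30)
c_sum = (c1 * c2) % n2                     # homomorphic addition on ciphertexts
assert decrypt(c_sum) == 42                # 12 + 30, computed while encrypted
```

Fully homomorphic schemes extend this idea to arbitrary computation, which is why they remain an active (and computationally expensive) research area for edge security.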
Finally, there’s this whole concept of aware edge intelligence. We’re on our third generation of developing and implementing neural network processors, or subsystems that go into our processors. That is driving improvements in efficiency and scaling, but there is continued research in this area in terms of driving higher levels of efficiency with accelerators, and with different technologies around spiking neural nets as well as quantum AI. In the near term, clearly, we’ll see continued evolution around more traditional accelerators, and their integration into the scalable processors that NXP is bringing to the market.
You can listen to the full 27 minute podcast, “Empowering the edge everywhere”, here.
This article was originally published on Embedded.
Nitin Dahad is a correspondent for EE Times, EE Times Europe and also Editor-in-Chief of embedded.com. With 35 years in the electronics industry, he’s had many different roles: from engineer to journalist, and from entrepreneur to startup mentor and government advisor. He was part of the startup team that launched 32-bit microprocessor company ARC International in the US in the late 1990s and took it public, and co-founder of The Chilli, which influenced much of the tech startup scene in the early 2000s. He’s also worked with many of the big names – including National Semiconductor, GEC Plessey Semiconductors, Dialog Semiconductor and Marconi Instruments.