Why the future of embedded software lies in containers

By Till Adam

Containers are becoming increasingly important in IoT, industrial, and automation applications.

In a recent article, my colleague Andrew Hayzen puts it perfectly: “To understand containers is to understand the future of software development.” We couldn’t agree more, especially for embedded systems.

But what are containers? What advantages do they offer? Are they right for your software shop and your embedded products? Will they improve development, testing, and deployment? Are there disadvantages? In this article, we’ll look at containers from several angles to help you answer these questions for yourself.

Containers defined

Containers have their origins in the Linux chroot and namespace capabilities. Though they are available in some non-Linux environments, containers are most common in Linux systems, so that’s what we’ll focus on here.

Containers are sometimes called “packaging abstractions”. This is because they can wrap up a program along with all its dependencies into a single, isolated executable environment. Containers have also been described as lightweight virtual machines.

These descriptions are accurate, but insufficient. Containers are indeed used to wrap up applications and/or services with all their dependencies, and they do isolate multiple software components running on the same hardware. But these descriptions don’t make it sufficiently clear that containers do much more:

  • Containers bundle an application with all the filesystem pieces it needs to operate: all the binary executables, libraries, utilities, data, and configuration files the application needs during runtime.
  • They give the bundle dedicated namespace, memory, and networking views, insulating it from the rest of the system. This insulation has benefits for development, testing, deployment, and runtime security.
  • Containers don’t need high-end hardware with virtualization support, nor the full OS stacks, bootable disk images, and virtualized devices common to hypervisor systems.
  • They’re lightweight, making them ideal for embedded systems. You simply add them to your Linux OS image, with little impact on system space and power requirements. 

Containers or hypervisors?

Hypervisors can run multiple, completely independent operating systems in parallel virtual machines, entirely isolated from each other. To do this, they need hardware with support for special instruction set extensions. The approach also uses more resources, since a full OS’s overhead is added to each virtual machine. Efficient access to peripherals is a further challenge, and again depends heavily on the features of the specific hardware platform.

In contrast, containers allow independent applications to run on the same Linux host OS and share its kernel. Kernel namespace features give these applications a restricted view of the system, creating the appearance of independent systems. The namespace system is flexible enough to give an application an isolated file system while still allowing it direct access to peripherals or networks. Since the kernel is shared, all containers see the same kernel version. However, because applications within a container use the underlying Linux OS directly, with no emulation or virtualization, a container cannot host a non-Linux operating system. On the positive side, this also means that containers do not require special hardware support: if a modern Linux kernel runs on the target hardware, containers can be used.
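
To make the namespace idea concrete, here’s a minimal sketch, assuming a Linux host with Docker installed and the public alpine image available: the container reports the host’s kernel version because the kernel is shared, yet its process list shows only its own processes.

```python
import subprocess

def run_in_container(cmd):
    """Run a command in a throwaway Alpine container and return its output."""
    result = subprocess.run(
        ["docker", "run", "--rm", "alpine"] + cmd,
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# Same kernel version as the host, because the kernel is shared.
print("Kernel inside the container:", run_in_container(["uname", "-r"]))

# A separate PID namespace: only the container's own processes are visible.
print(run_in_container(["ps"]))
```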

With either a hypervisor or container, applications must be built for the target platform and run efficiently on the CPU. Virtual machine solutions that can emulate a hardware platform to run software compiled for other architectures do exist. However, due to their large performance impact, they are not used in embedded systems.

Development

Even if you never use containers on your embedded targets, you can use them to great effect in an embedded development environment. Here are several ways:

Guaranteed toolkits

You can place your development tools into a container, letting you use the same tools on multiple platforms without installing them on each machine. This helps ensure that your team uses exactly the same tools and build environment. Spinning up a development environment for new developers or new machines becomes trivial, and you can be confident that the build server always uses the same tools as the development team.
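
As a minimal sketch of what that can look like in practice, assuming Docker is installed, a Makefile-based project, and an illustrative toolchain image (here the public gcc:12 image stands in for whatever image your team pins):

```python
import os
import subprocess

# Everyone - developers and the build server - uses this exact image tag.
TOOLCHAIN_IMAGE = "gcc:12"

def containerized_build(src_dir):
    """Compile the project with the pinned toolchain; nothing installed on the host."""
    subprocess.run(
        ["docker", "run", "--rm",
         "-v", f"{os.path.abspath(src_dir)}:/src",  # mount the source tree into the container
         "-w", "/src",                              # run the build from there
         TOOLCHAIN_IMAGE, "make", "all"],
        check=True,
    )

containerized_build(".")
```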

Different tool chains

Different variants of a tool chain can be placed in separate containers. This enables you to test applications against multiple tool chains without worrying about how these tool chains co-exist, or even whether they’re installed correctly.

You can experiment with new software in a controlled way without worrying that your tinkering might pollute your development environment. And, with tool chains neatly in containers, you can make software patches against older versions of tools, even after mainline development has moved on to another release.
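
A sketch of what that might look like, again assuming Docker and illustrative toolchain image tags:

```python
import os
import subprocess

# Illustrative tags; substitute the toolchain containers your project supports.
TOOLCHAINS = ["gcc:10", "gcc:11", "gcc:12"]

for image in TOOLCHAINS:
    print(f"--- building with {image} ---")
    subprocess.run(
        ["docker", "run", "--rm",
         "-v", f"{os.getcwd()}:/src", "-w", "/src",
         image, "make", "clean", "all"],
        check=True,
    )
```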

Multistage containers

Multistage containers can be used to create a container structure that layers the development environment on top of the production environment. This helps you ensure that when you’re ready to deploy, all your tools are stripped out safely without impacting the production build—no overlooked wrenches or clamps left in the works.
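
With Docker, this is typically done with a multi-stage build. The sketch below is illustrative (the image names, paths, and the myapp binary are placeholders): the release build stops at the lean production stage, so no tools ship to the target.

```python
import subprocess

# Note: this writes a Dockerfile into the current directory.
dockerfile = """
FROM gcc:12 AS build
COPY . /src
WORKDIR /src
RUN make all

FROM debian:bookworm-slim AS production
COPY --from=build /src/myapp /usr/local/bin/myapp
CMD ["/usr/local/bin/myapp"]
"""

with open("Dockerfile", "w") as f:
    f.write(dockerfile)

# Building only the "production" stage leaves the toolchain behind.
subprocess.run(
    ["docker", "build", "--target", "production", "-t", "myapp:release", "."],
    check=True,
)
```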

Complete runtime snapshots

Containers are easily versioned, so it’s simple to snapshot the entire runtime environment for branches, forks, or releases. This tight control also helps when certifying software; for example, to IEC 61508 or ISO 26262 safety integrity levels (SIL and ASIL). External auditors can easily identify exactly what software and libraries are in use on your target.
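
For example, a release can be tagged and its immutable image ID recorded alongside the release notes, so an auditor can later verify exactly which image shipped (a sketch; the image names and version are illustrative):

```python
import subprocess

# Give the current image a release tag...
subprocess.run(["docker", "tag", "myapp:latest", "myapp:1.4.2"], check=True)

# ...and record its content-addressed ID for the audit trail.
image_id = subprocess.run(
    ["docker", "inspect", "--format", "{{.Id}}", "myapp:1.4.2"],
    capture_output=True, text=True, check=True,
).stdout.strip()

print("Release 1.4.2 corresponds to image ID", image_id)
```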

Distributed deployment

Containers simplify the building of complex distributed systems, thanks to their support for modular building and deployment. This is becoming increasingly important in IoT, industrial, and automation applications, which often require regular refreshes of many different components over long time periods.

Embedded targets

Containers offer many benefits for systems running on embedded targets too. For starters, they make it easier to develop software that’s independent of the host system, minimizing the pain of switching between hardware platforms or hardware vendors. If the OS will run on the hardware, so will the software in the container.

Thanks to this flexibility with hardware, development teams can develop a workflow and configuration management system that can easily deliver multiple product variants or scalable product families from a single code base.

Additionally, placing the target environment into a container helps with provisioning. Spinning up a fresh containerized target environment is quicker than provisioning a raw target and more reliable than cleaning off a target that has already been used.

Help with hardware shortages

In the embedded space, we’re often developing systems for hardware that’s in short supply. Sometimes it hasn’t even reached production. With proper configuration, containers can help minimize behavioral differences between your development and target systems for those components that do not directly interact with the hardware. Confident that what you develop will be what ultimately runs on the target, you can proceed with your project, even in the absence of target hardware.

Testing

Container startup is fast enough that a new container can be spun up for each test. With everything reinitialized, tests become more reproducible and consistent than tests run in an environment that may have been altered by previously executed software. In addition, containers are lightweight enough to allow parallel test sessions to run in separate containers.
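
A sketch of that pattern, with an illustrative test image and test runner: each test gets a brand-new container, and several run in parallel.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

TEST_IMAGE = "myapp-tests:latest"            # illustrative image name
TEST_CASES = ["test_io", "test_protocol", "test_update"]

def run_test(name):
    # A fresh container per test: nothing left behind by earlier runs.
    result = subprocess.run(
        ["docker", "run", "--rm", TEST_IMAGE, "./run_tests", name],
        capture_output=True, text=True,
    )
    return name, result.returncode

with ThreadPoolExecutor(max_workers=3) as pool:
    for name, rc in pool.map(run_test, TEST_CASES):
        print(f"{name}: {'PASS' if rc == 0 else 'FAIL'}")
```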

Since containers can be easily moved between development machines and embedded targets, you can run compute-intensive tests on more powerful development hosts instead of on slower embedded targets. And since containers support creating multiple nodes within a virtual network on a single machine, they can drastically simplify the test environment for applications that need to talk among peers, clients, and servers.
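
For instance, a throwaway virtual network with a server and a client container might be set up like this (a sketch; the image names and test commands are illustrative):

```python
import subprocess

subprocess.run(["docker", "network", "create", "testnet"], check=True)
try:
    # Start the server in the background on the test network.
    subprocess.run(["docker", "run", "-d", "--rm", "--name", "server",
                    "--network", "testnet", "myapp-server:latest"], check=True)
    # The client resolves the server by container name via Docker's built-in DNS.
    subprocess.run(["docker", "run", "--rm", "--network", "testnet",
                    "myapp-client:latest", "./run_client_tests", "--host", "server"],
                   check=True)
finally:
    subprocess.run(["docker", "rm", "-f", "server"])
    subprocess.run(["docker", "network", "rm", "testnet"])
```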

Deployment

With containers you can decouple application development from the hardware vendor’s Linux stack. That is, your development team can implement a container-based environment on top of the hardware vendor’s Yocto base system.

With this approach, even when release cycles are mismatched, applications aren’t disrupted by the vendor’s patches and updates. Issues caused by swapping out the underlying OS and drivers are eliminated, helping you keep your products current with the latest security updates, bug fixes, and performance improvements.

Finally, containers give OTA (over-the-air) update solutions a modular architecture to leverage, enabling individual components to be updated with minimal disruption to the application as a whole.
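
On the target, such an update can be as simple as pulling the new application image and restarting its container, while the vendor’s base Linux stack underneath stays untouched (a sketch; the registry URL, image, and container names are illustrative):

```python
import subprocess

APP_IMAGE = "registry.example.com/myapp:2.0.1"

subprocess.run(["docker", "pull", APP_IMAGE], check=True)   # fetch the update
subprocess.run(["docker", "rm", "-f", "myapp"])              # stop and remove the old version
subprocess.run(["docker", "run", "-d", "--restart", "unless-stopped",
                "--name", "myapp", APP_IMAGE], check=True)   # start the new one
```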

Security

Containers provide an additional layer of protection for applications and services, making it even harder for hackers to misuse the underlying system or other applications. For one, containers can be signed, and unsigned containers (which are untrusted) can be prevented from running.
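
With Docker, for example, enabling Docker Content Trust makes the client refuse to pull or run images that aren’t signed, assuming a registry that supports signing (a sketch; the image name is illustrative):

```python
import os
import subprocess

# With content trust enabled, pulling an unsigned image fails.
env = dict(os.environ, DOCKER_CONTENT_TRUST="1")
result = subprocess.run(
    ["docker", "pull", "registry.example.com/myapp:2.0.1"], env=env,
)
if result.returncode != 0:
    print("Image rejected: no valid signature found")
```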

Second, because they enable modular updates, containers make it easier to keep security-critical software up to date without interfering with the application. Finally, containers decrease the attack surface exposed by the application, which reduces the risk of vulnerabilities being exploited.

Opening up the talent pool

Shortage of trained talent is a recurring issue for embedded software development. Containers won’t solve this problem, but they will open up the talent pool and make it easier to get new talent up to speed.

A pre-containerized target does away with much of the complex configuration typically needed to bring up an embedded target, so almost any developer can start using an embedded system right away. You could argue that this is true of any board, so long as the vendor provides a complete and well-documented BSP. Unfortunately, that is not always the case, and most board documentation is written assuming a strong baseline of knowledge.

Fortunately, containers can provide a consistent interface across different vendor hardware that’s familiar to developers outside the embedded space. This helps cloud and web developers who have experience with containers move into the embedded space.

Architectures

There are three common approaches to architecting an embedded system with containers. The solution you choose will depend on your requirements and on whether you’re working from an existing code base or starting from scratch.


Figure 1: Container usage in embedded systems generally falls into one of three categories. (Source: KDAB)

Monolithic

The structure of monolithic containers is very basic, making them an excellent starting point for working with existing applications. A monolithic container encapsulates the entire application into a single container for use on a target. This is often the approach used for devices that have a display and need to run a graphical framework such as Qt.

Headless

Headless and IoT edge devices are in many ways similar to cloud or web applications. They don’t need a display and are easy to containerize. In fact, many of these devices use web services to communicate, making them a natural fit for containers.

However, a containerized embedded application probably needs to access sensors, peripherals, or other hardware on the device. This is something you’re unlikely to find in web or cloud containers, so you’ll need an embedded-savvy container environment.
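
With Docker, for example, device nodes can be passed through explicitly so the containerized application can talk to the hardware directly (a sketch; the device paths and image name are illustrative):

```python
import subprocess

subprocess.run(
    ["docker", "run", "--rm",
     "--device", "/dev/ttyUSB0",   # serial-attached sensor
     "--device", "/dev/i2c-1",     # I2C bus
     "myapp-headless:latest"],
    check=True,
)
```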

Microservices

The microservices architecture decomposes the software into many distinct services, with each independent service in its own container. This approach allows changes to individual services without impacting the rest of the application.

Decomposing your embedded application into small, containerized services can help insulate the application from its internal dependencies. For example, by placing a third-party binary into its own container, you insulate that binary from the rest of the system, which can then be updated without fear of breaking the application. Similarly, you can containerize any component – for example, partner applications, open-source libraries, and protocol stacks – whose update cadence differs from your application’s schedule. Each container carries the exact versions of the libraries, frameworks, and languages the component was tested against.
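
A sketch of this structure, with each service pinned to its own image version and all services sharing a virtual network (the service and image names are illustrative):

```python
import subprocess

SERVICES = {
    "sensor-gateway": "registry.example.com/sensor-gateway:1.2.0",
    "protocol-stack": "registry.example.com/protocol-stack:3.0.4",
    "ui-frontend":    "registry.example.com/ui-frontend:2.1.1",
}

subprocess.run(["docker", "network", "create", "appnet"])
for name, image in SERVICES.items():
    # Each service can be stopped, updated, and restarted independently.
    subprocess.run(["docker", "run", "-d", "--restart", "unless-stopped",
                    "--name", name, "--network", "appnet", image], check=True)
```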

Should you use containers?

Containers offer many benefits, but they aren’t for every development environment or every embedded system.

Containers for the development environment

If you’re considering only your development environment, there’s not much to pay and a great deal to gain from containers. Once you’re past the relatively easy learning curve and you’ve implemented containers on developer, build, and QA machines, you’ll get a lot of bang for the buck in provisioning, versioning, testing, and building.

We suggest gradually incorporating containers into your development workflow, so you’ll be ready when the time comes to containerize your targets.

Containers on embedded targets

If you’re developing deeply embedded applications for targets with 8- or 16-bit processors, less than 1 MB of RAM, or no ability to run Linux, then containers are clearly a non-starter. If you’re developing for 32- or 64-bit embedded platforms, the question is more nuanced.

Since the benefits of containers for development environments are clear, we’d recommend starting by incorporating containers into your normal development workflow. Once you’ve gained some expertise with them, you’ll be better able to evaluate if they’re suitable for your project. When you’re working through your decision, questions you should ask include:

  • Does your hardware vendor provide images that are pre-loaded with container software?
  • Will you need to update different portions of your application independently due to diverging dependencies, multiple suppliers, different update frequencies, and so on?
  • Can your product be cleanly divided into distinct components that are updated independently?
  • Do you have OTA requirements where a container-based solution could help you roll out new changes?
  • Does your product have large-scale, complex services (such as industrial or building automation) where modular updates are essential?

Conclusion

Containers offer a straightforward approach to managing software development, testing, builds, and deployment. They reduce the pain of redeploying embedded software on new boards and are ideal for redeploying services and applications to multiple Linux distros.

Containers don’t require specialized hardware. You simply add them to the Linux OS image, and they’ll run on any hardware that supports that distro. Containers add a layer of insulation between the software inside the container and the rest of the system, reinforcing runtime security. Not least, they can help ease the learning process for new talent you’ve recruited into the embedded space.

This article was originally published on Embedded.

Till Adam is a technologist with a liberal arts background. As Chief Commercial Officer at KDAB, he solves deeply technical as well as business, tactical, and strategic challenges. Till also works with companies at the leading edge of development, making him an expert on the latest trends and technologies.

