Software testing matters in embedded systems

By Mark Pitchford

High-level requirements, such as state-machine functionality, and compliance with standards for cybersecurity and functional safety are crucial.

In the world of embedded systems, it isn’t just the technology that continues to develop and evolve. The tools and the methods used to develop that technology are maturing and improving in tandem.

In the early 1980s, I developed software for a small metrology company, applying engineering math to coordinate measuring machines (CMMs). I’d like to think that I was pretty good at it. But our development lifecycle essentially regarded production software as a sandbox. We’d start with production code, add functionality, perform some fairly rudimentary functional testing, and ship it.

In such a small company, our engineering team naturally included both software and hardware specialists. In hindsight, it is striking that while the need for extensive customer support was a given for the software we developed, there was nowhere near the same firefighting culture for the hardware it ran on.

Software development is an engineering discipline

Part of the difference between the software and hardware support burden was a result of that crude development process. But the sheer malleability of software, and its resulting capacity for ever-increasing functionality, also play a major part. Simply put, there are far more ways to get software wrong than to get it right, and that characteristic demands that it be treated as an engineering discipline.

There’s nothing new about any of this. Leading aviation, automotive, and industrial functional safety standards such as DO-178, ISO 26262, and IEC 61508 have demanded such an approach for years. But having an engineering discipline mindset is essential if you are to reap the benefits of today’s cutting-edge development and test tools, which are designed to serve such an approach.

More recently, the importance of software testing has been shown by the development of ISO/IEC/IEEE 29119, an international set of standards for software testing that can be used within any software development lifecycle or organization.

Requirements matter

Electrical system design often starts with a state machine and an understanding of the different operating modes for a particular product. Engineers can typically map that state-machine functionality to logic very quickly and easily. When the state machine becomes more complicated, however, it is often translated into software instead.
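As a minimal sketch of that translation, the operating modes might be encoded in C as a single transition function. The modes, events, and names here are hypothetical, but the structure is typical:

/* Hypothetical operating modes and events for a product */
typedef enum { MODE_IDLE, MODE_RUNNING, MODE_FAULT } op_mode_t;
typedef enum { EV_START, EV_STOP, EV_ERROR, EV_RESET } op_event_t;

/* Pure transition function: easy to trace to requirements and to test */
static op_mode_t next_mode(op_mode_t current, op_event_t ev)
{
    switch (current) {
    case MODE_IDLE:
        return (ev == EV_START) ? MODE_RUNNING : MODE_IDLE;
    case MODE_RUNNING:
        if (ev == EV_STOP)  { return MODE_IDLE;  }
        if (ev == EV_ERROR) { return MODE_FAULT; }
        return MODE_RUNNING;
    case MODE_FAULT:
        return (ev == EV_RESET) ? MODE_IDLE : MODE_FAULT;
    default:
        return MODE_FAULT; /* defensive: treat an unknown state as a fault */
    }
}

Keeping the transition logic in one pure function like this makes each requirement visible as a specific case, which pays off later when tracing and testing.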

High-level requirements are essential to making sure the system functions correctly. Such requirements characterize the business logic and intended functionality, and they enable engineers to evaluate whether the system does what it’s supposed to do. Best practices follow the flow from high-level requirements through analysis into coverage, and requirements traceability tools are naturally designed to support this.

In the state-machine model, requirements that characterize each state are examples of high-level requirements. Tracing the execution path through code to ensure that each requirement is interpreted correctly is a very good way to check correct implementation.
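Continuing the hypothetical sketch above, one lightweight convention is to annotate each function with the requirements it implements and the tests that exercise them; the requirement and test-case IDs here are invented, and commercial traceability tools have their own tagging schemes:

/*
 * Implements: HLR-012 -- "A stop request received in RUNNING mode
 *                         shall return the system to IDLE mode."
 * Tested by:  TC-012-01, TC-012-02
 */
static op_mode_t handle_stop(op_mode_t current)
{
    /* Only the RUNNING mode responds to a stop request (HLR-012) */
    return (current == MODE_RUNNING) ? MODE_IDLE : current;
}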

Functional safety standards extend this into the concept of requirements traceability. They often mandate that teams exercise all of their code from high-level requirements, and explain and test any uncovered cases with low-level testing. Lately, the “shift left” paradigm in cybersecurity has echoed this message, as the V-model in Figure 1 illustrates.

Figure 1 As the name implies, the V-model embodies a product development process that shows the link between the test specifications at each phase of development. Source: LDRA

Test components, then test system

In any engineering discipline, it’s important to make sure that components work correctly on their own before being integrated into a system. To apply that thinking to software, engineers need to define lower-level requirements and ensure that each function and set of functions play their part. Engineers also need to ensure they present appropriate interfaces to the rest of the system.

Unit testing involves parameterizing inputs and outputs at the function and module levels, then reviewing the results to ensure that the connection between inputs and outputs is correct and that the underlying logic is exercised with adequate coverage. Unit test tools can provide proven test harnesses and graphical representations that connect individual inputs and outputs to execution paths, enabling their correctness to be verified.
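Commercial unit test tools generate and manage such harnesses automatically; purely as a sketch of the idea, a hand-rolled harness for the hypothetical next_mode() function above might pair inputs with expected outputs like this:

#include <assert.h>
#include <stdio.h>

/* Hand-rolled harness for next_mode(): each assertion pairs an
 * (input state, event) with the output the requirements demand. */
static void test_next_mode(void)
{
    assert(next_mode(MODE_IDLE,    EV_START) == MODE_RUNNING);
    assert(next_mode(MODE_RUNNING, EV_STOP)  == MODE_IDLE);
    assert(next_mode(MODE_RUNNING, EV_ERROR) == MODE_FAULT);
    assert(next_mode(MODE_FAULT,   EV_RESET) == MODE_IDLE);
    assert(next_mode(MODE_FAULT,   EV_START) == MODE_FAULT); /* no escape without reset */
    puts("next_mode: all unit tests passed");
}

int main(void)
{
    test_next_mode();
    return 0;
}

A tool-generated equivalent would also record which execution paths each case exercises, feeding the coverage analysis described below.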

It’s also important to understand interfaces, both at the functional and module levels. Static analysis tools can show these interfaces and connect the logic at different levels.

Find problems as early as possible

Engineers from any discipline will tell you that the earlier problems are discovered, the less it will cost to fix them.

Static analysis performs source code analysis to model the execution of a system without actually running it. Available as soon as code is written, static analysis can help developers to maximize clarity, maintainability, and testability of their code. Key features of static analysis tools include:

  1. Code-complexity analysis: Understanding where code is unnecessarily complicated so that engineers can perform appropriate mitigation activities.
  2. Program-flow analysis: Drawing design-review flow graphs of program execution to make sure that the program executes in the expected flow.
  3. Predictive runtime error detection: Modelling code execution through as many executable paths as possible and looking for potential errors such as array bounds overflows and division by zero (illustrated in the sketch after this list).
  4. Coding standards adherence: Coding standards are often chosen to ensure a focus on cybersecurity, functional safety, or, in the case of the MISRA standards, either one or both. Coding standards help to ensure that code adheres to best programming practices, which is surely a good idea irrespective of the application.
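To illustrate the kind of defect that predictive runtime error detection can report before the code ever runs, consider this deliberately flawed, hypothetical function:

#define NUM_SENSORS 8

/* Deliberately flawed: a static analyzer can flag both defects
 * below without executing the code. */
int average_reading(const int readings[NUM_SENSORS], int count)
{
    int sum = 0;

    /* Off-by-one: when count == NUM_SENSORS, the final iteration
     * reads readings[NUM_SENSORS], past the end of the array. */
    for (int i = 0; i <= count; i++) {
        sum += readings[i];
    }

    /* Potential divide-by-zero if count can ever be zero. */
    return sum / count;
}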

Figure 2 Activities like static analysis are an overhead in the early part of the development lifecycle, but they pay dividends in the long run. Source: LDRA

Developing code of adequate quality

It’s no surprise that higher-quality engineering products are more expensive. Adhering to any development process comes at a cost, and it may not always be commercially viable to develop the finest product possible.

Where safety is important, functional safety standards will often require an analysis of the cost and the probability of failure. This risk assessment is required for every system, subsystem, and component to make sure that proportionate mitigation activities are performed. That same principle makes sense whether systems are safety-critical or security-critical. If you test every part of the system with the same level of rigor, you will over-invest in parts of your system where the risk is low and will fail to adequately mitigate failure where the risk is higher.

Software safety practice starts with understanding what will happen if a component or system fails, and then tracks that potential failure into appropriate activities that mitigate the risk of it doing so. Consider, for example, a system that controls an airplane’s guidance, where failure is potentially catastrophic. Here, rigorous mitigation activities must be performed down to the sub-condition coverage level to ensure the code is implemented correctly.

Contrast that with an inflight entertainment system. If this system fails, the aircraft will not crash, so testing it is less demanding than testing a system where there is the potential for immediate loss of life.
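To make that difference in rigor concrete, consider a hypothetical guidance decision with two conditions. A single test executes every statement, but demonstrating that each individual condition independently affects the outcome, as sub-condition criteria such as modified condition/decision coverage (MC/DC) require, demands a larger set of test vectors:

#include <stdbool.h>

/* Hypothetical guidance check: apply a correction only when the
 * deviation is out of tolerance AND the autopilot is engaged. */
bool should_correct(bool out_of_tolerance, bool autopilot_on)
{
    return out_of_tolerance && autopilot_on;
}

/*
 * Statement coverage: (true, true) alone executes every line.
 * Sub-condition (MC/DC-style) coverage needs three vectors:
 *   (true,  true ) -> true    baseline
 *   (false, true ) -> false   out_of_tolerance alone flips the result
 *   (true,  false) -> false   autopilot_on alone flips the result
 */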

The malleability of software is both a blessing and a curse. It makes it very easy to make a system do practically anything within reason. But that same flexibility can also be an Achilles’ heel when it comes to ensuring that software doesn’t fail.

Even in the commercial world, while not all software failures are catastrophic, they are never desirable. Many developers work in the safety- and security-critical industries and have no choice but to adhere to the most exacting standards. But the principles those standards promote are there because they have been proven to make the resulting product function better. It therefore makes complete sense to adopt those principles in a proportionate manner, regardless of how critical the application is.

Despite the confusing plethora of functional safety and security standards applicable to software development, there is far more similarity than difference between them. All of them are based on the fact that software development is an engineering discipline, demanding that we establish requirements, design and develop to fulfill them, and test early against requirements.

Adopting that mindset will open the door to an entire industry’s worth of supporting tools, enabling more efficient development of higher-quality software.

This article was originally published on EDN.

Mark Pitchford, technical specialist with LDRA Software Technology, has worked with development teams looking to achieve compliant software development in safety- and security-critical environments.

