Development practices and tools can prevent coding errors

By Mark Richardson and Andrew Banks

Software complexity in embedded designs is driving the need for better development practices and tools to avoid coding errors

Application size and complexity have grown dramatically over the last decade. Take the automotive sector as an example. According to The New York Times, 20 years ago the average car had a million lines of code, but 10 years later the General Motors 2010 Chevrolet Volt had about 10 million lines of code, more than an F-35 fighter jet. Today, an average car has more than 100 million lines of code.

The shift to 32-bit and higher processors with lots of memory and power has enabled companies to build far more value-added features and capabilities into designs; that’s the upside. The downside is that the amount of code and its complexity often result in failures that impact application security and safety.

It’s about time for a better approach. Two key types of errors can be found in software and addressed using tools that prevent errors from being introduced:

  • Coding errors: An example is code that tries to access outside the bounds of an array. These kinds of problems can be detected by performing static analysis.
  • Application errors: These can only be detected by knowing exactly what the application is supposed to do, which means testing against the requirements.

Address these errors, and design engineers will be well down the path toward safer, more secure code.

An ounce of prevention via code checking

Errors creep into code just as easily as they creep into emails and instant messages: simple mistakes made because engineers are in a hurry and don't proofread. With complexity, however, comes a whole gamut of design errors that create far bigger challenges, because complexity demands a new level of understanding of how the system works, how data is passed, and how values are defined. Whether an error stems from complexity or from simple human oversight, the result can be code that tries to access a value outside the bounds of an array, and a coding standard, enforced through static analysis, catches that.
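
As a concrete illustration, consider this minimal C sketch (the array name and size are hypothetical); the off-by-one loop bound is precisely the kind of out-of-bounds access a standards checker flags:

    #include <stdint.h>

    #define NUM_SAMPLES 8u

    static int32_t samples[NUM_SAMPLES];

    void clear_samples(void)
    {
        /* Off-by-one bound: when i equals NUM_SAMPLES, the write falls
           outside the array, which is undefined behavior. Static analysis
           reports this before the code is ever run. */
        for (uint32_t i = 0u; i <= NUM_SAMPLES; i++)
        {
            samples[i] = 0;
        }
    }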

How to avoid such errors? Don’t put them there in the first place. While this sounds obvious—and near impossible—this is exactly the value that a coding standard brings to the table.

In the C and C++ world, 80% of software defects are caused by the incorrect or ill-advised use of about 20% of the language. The coding standard creates restrictions on the parts of the language known to be problematic. The result: defects are avoided, and software quality greatly increases. Let’s take a look at a couple of examples.

Most C and C++ programming errors are caused by undefined, implementation-defined, and unspecified behaviors inherent in the languages, and these behaviors lead to software bugs and security issues. Figure 1 shows an example of implementation-defined behavior: whether the high-order bit is propagated when a signed integer is shifted right. Depending on the compiler engineers use, the result could be 0x40000000 or 0xC0000000.

Figure 1 The behavior of some C and C++ constructs depends on the compiler used. Source: LDRA
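
Figure 1 is reproduced only as an image, but a minimal sketch consistent with the behavior it describes might look like this (the variable names are illustrative):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t value = INT32_MIN;    /* high-order (sign) bit is set */

        /* Right-shifting a negative signed integer is implementation-defined:
           an arithmetic shift yields 0xC0000000, a logical shift 0x40000000. */
        int32_t shifted = value >> 1;

        printf("0x%08" PRIX32 "\n", (uint32_t)shifted);
        return 0;
    }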

Figure 2 shows an example of unspecified behavior. The rollDice() function simply reads the next value from a circular buffer holding the values 1, 2, 3, and 4, so the expected returned value would be 1234. But C does not specify the order in which the arguments to a function are evaluated, so there is no guarantee of that; at least one compiler generates code that returns the value 3412.

Figure 2 The behavior of some C and C++ constructs is unspecified by the languages. Source: LDRA
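
The source behind Figure 2 is not reproduced here, but a sketch of such a function and its use might look like the following; because the evaluation order of the printf() arguments is unspecified, the printed digits may be 1234, 3412, or another permutation:

    #include <stdio.h>

    /* Illustrative rollDice(): reads the next value from a circular
       buffer holding the values 1, 2, 3, and 4. */
    static int rollDice(void)
    {
        static const int faces[4] = { 1, 2, 3, 4 };
        static unsigned int index = 0u;
        int value = faces[index];
        index = (index + 1u) % 4u;
        return value;
    }

    int main(void)
    {
        /* The order in which these four calls are evaluated is unspecified,
           so the printed result is not guaranteed to be 1234. */
        printf("%d%d%d%d\n", rollDice(), rollDice(), rollDice(), rollDice());
        return 0;
    }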

There are lots of other pitfalls in the C and C++ languages: the use of constructs such as goto or malloc; mixes of signed and unsigned values; or "clever" code that may be highly efficient and compact but is so cryptic and complex that others struggle to understand it. Any of these issues can lead to defects, such as values that overflow and suddenly become negative, or simply make code impossible to maintain.
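
The signed/unsigned mix is easy to picture with a short, hypothetical sketch; the comparison below silently converts the signed value to unsigned, so the test does not behave as its author intended:

    #include <stdio.h>

    int main(void)
    {
        unsigned int limit = 10u;
        int offset = -1;

        /* The usual arithmetic conversions turn -1 into a very large
           unsigned value, so this condition is unexpectedly false. */
        if (offset < limit)
        {
            printf("offset is below the limit\n");
        }
        else
        {
            printf("surprise: -1 is treated as a large unsigned value\n");
        }
        return 0;
    }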

Coding standards provide the ounce of prevention for these ills. They prohibit the problematic constructs, stop developers from writing undocumented, overly complex code, and check consistency of style. Even details such as verifying that the tab character is not used, or that parentheses are placed in a particular position, can be monitored. While this seems trivial, a consistent style greatly helps manual code review and prevents mix-ups caused by a different tab size when code is viewed in another editor, all of which are distractions that prevent a reviewer from concentrating on the code.

MISRA to the rescue

The best-known programming standards are the MISRA guidelines, first published in 1998 for the automotive industry and now so widely embraced that many embedded compilers offer some level of MISRA checking. MISRA focuses on the problematic constructs and practices within the C and C++ languages, and it recommends the use of consistent stylistic conventions while stopping short of prescribing any particular style.

The MISRA guidelines provide useful explanations about why each rule exists, along with details of the various exceptions to that rule, and examples of the undefined, unspecified, and implementation-defined behavior. Figure 3 illustrates the level of guidance.

Figure 3 These MISRA C references pertain to undefined, unspecified, and implementation-defined behavior. Source: MISRA

The majority of MISRA guidelines are “Decidable,” meaning the tool can identify whether there is a violation; but some are “Undecidable,” implying that it’s not always possible for the tool to deduce whether there is a violation.

For example, an uninitialized variable passed to a system function that is expected to initialize it might not register as an error if the static analysis tool does not have access to the source code of that system function; there is potential for either a false negative or a false positive.
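
A hypothetical sketch of that situation: the analysis tool sees only the declaration of get_reading(), not its definition, so it cannot decide whether the variable is initialized before it is used.

    #include <stdint.h>

    /* Defined in another module; its source may not be visible
       to the static analysis tool. */
    extern void get_reading(int32_t *out);

    int32_t read_sensor(void)
    {
        int32_t value;         /* not initialized here */

        get_reading(&value);   /* may or may not write to value */

        return value;          /* possible use of an uninitialized
                                  variable: undecidable for the tool */
    }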

In 2016, 14 guidelines were added to MISRA C to extend its coverage from safety-critical to security-critical code. Figure 4 shows how one of the new guidelines, Directive 4.14, addresses the validation of values received from external sources and so helps prevent the pitfalls caused by undefined behavior.

Figure 4 MISRA Directive 4.14 helps prevent the pitfalls caused by undefined behavior. Source: MISRA
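
By way of illustration only (the function and limits below are hypothetical), code written in the spirit of Directive 4.14 validates an externally supplied value before using it, so the undefined behavior of an out-of-range array access can never be reached:

    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_CHANNEL 7u

    static uint16_t channel_gain[MAX_CHANNEL + 1u];

    /* The channel number arrives from an external interface, so it is
       checked before being used as an array index. */
    bool set_gain(uint32_t channel, uint16_t gain)
    {
        bool ok = false;

        if (channel <= MAX_CHANNEL)
        {
            channel_gain[channel] = gain;
            ok = true;
        }

        return ok;   /* out-of-range requests are rejected */
    }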

The rigors of coding standards were traditionally reserved for functionally safe software in critical applications such as cars, aeroplanes, and medical devices. However, the complexity of today's code, the criticality of security, and the business importance of creating high-quality, robust code that is easy to maintain and upgrade make coding standards crucial in every development operation.

By ensuring errors aren't introduced into the code in the first place, development teams can:

  • reduce the need for extensive debugging,
  • gain better control of schedule, and
  • improve ROI by reducing overall costs.

Code checking offers a toolbox with huge potential benefits.

A pound of cure with test tools

While code checking solves many problems, application bugs can only be found by testing that the product does what it is supposed to do, and that means having requirements. Avoiding application bugs requires both designing the right product and designing the product right.

Designing the right product means establishing requirements up front and ensuring bidirectional traceability between the requirements and the source code, so that every requirement is implemented, and every software function traces back to a requirement. Any missing or unnecessary functionality—that doesn’t meet a requirement—is an application bug. Designing the product right is the process of confirming that the developed system code fulfills the project requirements. You achieve that by performing requirements-based testing.

Figure 5 shows an example of bidirectional traceability. The single function selected traces upstream from the function to a low-level requirement, then to a high-level requirement, and finally to a system-level requirement.

Figure 5 This is an example of bidirectional traceability with a single function selected. Source: LDRA

Figure 6 shows how selection of a high-level requirement displays both upstream traceability to a system-level requirement and downstream to low-level requirements and on to source code functions.

Figure 6 This is an example of bidirectional traceability with requirements selected. Source: LDRA

This ability to visualize traceability can lead to the detection of application bugs early in the lifecycle.

Testing code functionality demands an awareness of what it is supposed to do, and that means having low-level requirements that state what each function does. Figure 7 shows an example of a low-level requirement, which, in this case, fully describes a single function.

Figure 7 This is an example of a low-level requirement that describes a single function. Source: LDRA

Test cases are derived from low-level requirements as illustrated in Figure 8.

Figure 8 Test cases are derived from low-level requirements. Source: LDRA

Using a unit test tool, these test cases can then be executed on the host or the target to ensure that the code behaves the way the requirement says it should. Figure 9 shows that all the test cases have been regressed and passed.
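
The mechanics can be pictured with a minimal, host-based harness (the function and requirement below are invented for illustration; a unit test tool automates the generation, execution, and regression of such cases):

    #include <assert.h>
    #include <stdint.h>

    /* Function under test, described by a low-level requirement:
       clamp a raw reading to the range 0..1000. */
    static int32_t clamp_reading(int32_t raw)
    {
        int32_t result = raw;

        if (raw < 0)          { result = 0; }
        else if (raw > 1000)  { result = 1000; }

        return result;
    }

    int main(void)
    {
        assert(clamp_reading(-5)   == 0);      /* below the range  */
        assert(clamp_reading(500)  == 500);    /* within the range */
        assert(clamp_reading(2000) == 1000);   /* above the range  */
        return 0;
    }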

Figure 9 This is how a tool performs unit tests. Source: LDRA

Once test cases have run, the structural coverage should be measured to ensure that all the code has been exercised. If the coverage is not 100 percent, it’s possible that either more test cases are required or superfluous code should be removed.

New habits in coding

No question about it, software complexity—and its errors—have mushroomed with connectivity, faster memory, rich hardware platforms, and specific customer demands. Adopting a state-of-the-art coding standard, measuring metrics on the code, tracing requirements, and implementing requirements-based testing provide development teams the opportunity to create high-quality code and reduce liability.

The extent to which a team adopts these new habits when no standard mandates compliance hinges on corporate recognition of the game-changing benefits they bring. Adopting these practices, whether a product is safety- or security-critical or not, can make a night-and-day difference in the maintainability and robustness of code. Clean code simplifies the addition of new features, eases product maintenance, and keeps costs and schedules to a minimum, all characteristics that improve your company's ROI.

Whether a product is safety-critical or not, this is surely an outcome that can only be beneficial to the development team.

This article was originally published on EDN.

Mark Richardson is lead field application engineer at LDRA.

Andrew Banks is a technical specialist at LDRA and chairman of the MISRA C Working Group.
