Here is how engineers can identify the threats at each step of the IC production lifecycle and subsequently mitigate them.
Today, supply chain security is a hot topic. Chip vendors are particularly concerned because they supply the components that are most vulnerable to information extraction and manipulation. At the same time, original equipment manufacturers (OEMs) have a vested interest in understanding the risks in their chip vendors’ supply chains and how those risks affect their end-products. So, while security threats exist on both sides of product development, OEMs also face risks specific to their own lifecycle stages.
This two-part series will look at the interrelationship between silicon vendors and OEMs, and how they must work together to protect vulnerabilities in all manufacturing stages. The first article identifies the threats at each step of the IC production lifecycle and describes how to mitigate them. The second article focuses on OEM-specific security risks and describes how the burden of responsibility lies with end-product manufacturers as well as silicon vendors. These articles will show that OEMs and chip vendors can prevent the bulk of security attacks by taking ownership of the risks in each of their production stages.
Security threats at every stage of IC lifecycle
Whether due to an intentional act on the part of the foundry or infiltration by a bad actor, each stage of the IC lifecycle presents multiple threats that can put the final product at risk.
The manufacturing lifecycle consists of six steps, shown in Figure 1: fabrication, probe test, package assembly, package test, board assembly, and board test.
Figure 1 The six key stages of the IC development lifecycle. Source: Silicon Labs
IC development begins with the fabrication stage, in which the device is physically manufactured in a foundry. The IC’s ROM is programmed at this time, but all other memories—OTP, flash and RAM—are unprogrammed. The next step is the probe test, in which the IC is initially tested for functionality before it’s cut from the silicon wafer. At this stage, no permanent configuration is installed, and any data inserted during manufacturing will be erased as part of the testing procedure.
In the package assembly stage, the individual die is put into a package. This step is a completely mechanical process, and no programming or testing is performed. Once in a package, the ICs undergo a package test, often called the “final test” by silicon vendors as it’s the final test they run.
This test has several purposes: it screens for defects introduced during assembly, checks for parametric issues such as excessive power consumption or out-of-specification characteristics, and initializes the device with any silicon vendor-provided data. Devices are then sold to OEMs, who perform board assembly, where the devices are installed into a system. Next, in a board test, the ICs receive their final testing, configuration, and programming.
With a basic understanding of the IC lifecycle steps, we can now dive into the threats present in each one. As we discuss these steps, we’ll use Silicon Labs’ manufacturing lifecycle as the model for IC development. Other semiconductor vendors’ flows are substantially similar, so this overview should apply to most vendors.
Fabrication

Attacks targeting the fabrication step are extremely unlikely because they are high cost and require generating at least one new mask set, an in-depth analysis of the device, and a high degree of expertise. In addition, any attack at this step can’t easily target a specific end-product. Because foundries make multiple wafers, each with thousands of devices on them, attackers have no way of knowing which devices will end up in which end-products. Also, any modification made to the design at this stage of development will affect every copy of that manufactured device, making it much easier to detect. However, vulnerabilities at this stage do pose security risks, as described in the following subsections.
One possible attack vector for foundries is to access confidential information that puts the end-product at risk. For example, if the IC has a symmetric or private key in ROM—or hard-coded into other hardware like a register—the foundry can easily extract that value. It’s also important to remember that any data easily extractable from the design is also obtainable from a physical copy of the device through reverse engineering. For example, some laboratories will de-process and extract ROM data for less than $10k, which may be more cost-effective than getting access to layouts from the foundry. To avoid this issue, well-designed products should never include secret information.
A more plausible threat is the foundry modifying the device to introduce an exploitable defect. This modification could include modifying the contents of ROM or logic to change the device’s behavior or introduce additional functionality. While these sorts of changes are difficult and costly, basic manipulations are well within the capability of an attacker that can compromise a foundry.
A good countermeasure to this attack is performing sampling tests to verify the function of random devices off the production line. For example, Silicon Labs can pull a few samples at random every year and run tests to verify the contents of ROM, verify the logic, and test other functions. If the foundry introduced a change, these tests would fail. While it may be possible to introduce a logic change that would not be caught by this testing, the effect of such a change would be severely constrained, possibly to the point of uselessness. Performing sample testing at a trusted site—corporate HQ, for example—greatly minimizes the risk of a foundry introducing a change to the hardware and successfully compromising the testing intended to detect it.
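The sampling test described above can be reduced to a simple integrity check: hash the ROM contents dumped from a randomly sampled device and compare the result against the digest recorded at tape-out. The sketch below models that check; the image bytes and golden digest are hypothetical placeholders.

```python
import hashlib

# Hypothetical golden reference: SHA-256 digest of the ROM image as taped out,
# recorded at a trusted site before wafers ever reach the foundry.
REFERENCE_ROM = b"reference ROM image bytes"
GOLDEN_ROM_SHA256 = hashlib.sha256(REFERENCE_ROM).hexdigest()

def verify_rom(dumped_rom: bytes, golden_digest: str) -> bool:
    """Compare a ROM dump from a sampled device against the known-good digest."""
    return hashlib.sha256(dumped_rom).hexdigest() == golden_digest

# A sampled device whose ROM matches the tape-out passes...
assert verify_rom(REFERENCE_ROM, GOLDEN_ROM_SHA256)
# ...while any modification, however small, fails the check.
assert not verify_rom(b"reference ROM image bytez", GOLDEN_ROM_SHA256)
```

Because the digest is computed and stored at a trusted site, a compromised foundry cannot alter both the ROM and the reference it is checked against.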
Future improvements in tampering detection, such as machine learning-based image analysis, may further protect against unauthorized modifications to the die, but these options are not available today.
In this type of attack, the foundry overproduces devices that are then sold as legitimate. For example, an attacker could overproduce parts with modified ROM and sell them to an OEM as legitimate, bypassing the testing intended to catch such changes. This approach allows the attacker to target OEMs directly since the devices no longer flow through the vendor’s supply chain.
The best way to prevent overproduction is to provision cryptographic credentials at package test. OEMs can then check these credentials to ensure they received a genuine device from that vendor. Although the foundry could produce a physically identical device, they would be unable to generate valid credentials for it, and any fakes would be detected by the OEM.
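The credential check above can be illustrated with a simplified challenge-response model. Real devices use provisioned certificates and asymmetric attestation keys; this sketch substitutes a per-device symmetric key and HMAC purely to show the flow, and all names are illustrative.

```python
import hashlib
import hmac
import secrets

# At package test, the vendor provisions a per-device secret key.
# (A real flow provisions a certificate and key pair; a symmetric
# key stands in here to keep the sketch self-contained.)
device_key = secrets.token_bytes(32)   # stored inside the locked device
vendor_record = device_key             # vendor's provisioning-database entry

def device_respond(challenge: bytes) -> bytes:
    """Runs inside the device; the key never leaves the part."""
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def oem_verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    """OEM-side check that the part holds a vendor-provisioned key."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)
assert oem_verify(challenge, device_respond(challenge), vendor_record)
# An overproduced clone without the provisioned key cannot answer correctly.
fake = hmac.new(b"wrong key", challenge, hashlib.sha256).digest()
assert not oem_verify(challenge, fake, vendor_record)
```

The essential property is the same in either construction: a physically identical but unprovisioned device has no way to produce a valid response.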
This mitigation requires standard, unlocked devices to hold a secret key securely. Silicon Labs offers Vault-High EFR products with this feature. OEMs that use devices with a lower degree of security, such as Vault-Mid devices instead of Vault-High devices, can achieve a similar effect—albeit at a lower level of protection—by using a custom programming service like Silicon Labs’ custom part manufacturing service (CPMS). In this case, access to device contents is locked prior to shipping so that a secret key can be stored confidentially in non-volatile memory (NVM).
Device analysis is the most realistic threat present in the foundry. Devices are manufactured in an insecure, unlocked state that provides access to logic and systems not available in the finished products. While access to blank open parts doesn’t present a direct security threat, attackers can take advantage of this opportunity to analyze the devices and look for weaknesses that could be exploited on a configured and locked part.
While this threat is present during fabrication, it is more likely to occur at the assembly step and is discussed in more detail below.
Probe test

An exploit targeting probe test costs less than making modifications in the foundry, as it only requires compromising the test program or tester. However, as with attacks targeting fabrication, probe test attacks are systemic and can’t easily target a specific end-product or OEM.
Malicious code injection
An attacker may attempt to inject malicious code onto the device during probe test. However, any content injected at this step will either be erased at package test or cause the package test to fail when the vendor can’t program the correct content. In addition, secure boot will prevent any attempt to install unauthorized code once enabled in package test. This threat is not realistic for well-implemented products and production lifecycles.
In theory, an attacker with control over a tester at probe test could perform device analysis similar to what can be achieved by overproduction in the foundry. However, it is more likely that an attacker would attempt to gain this access in the assembly step.
Package assembly

Since this is the step in which the vendor places devices into their final form factor, package assembly is the most likely place for an attacker to steal blank, open devices for replication and analysis purposes.
Figure 2 Chipmakers must ensure to mitigate the risks associated with open samples. Source: Silicon Labs
The goal in stealing blank devices is to obtain open samples, configure them in a way advantageous to the attacker, and then deliver them to a target OEM as legitimate devices. This strategy allows a specific OEM or product to be targeted and bypasses the vendor’s final test, which would otherwise overwrite or detect the modifications.
Unlike the fabrication step, where there is no fixed input quantity of devices, the assembly site receives and produces a known number of ICs, so any significant theft can be detected by reconciling those counts.
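That reconciliation is simple arithmetic: the dies received must equal the packaged parts plus documented scrap, within an agreed tolerance. The sketch below assumes hypothetical record fields for an assembly lot.

```python
def reconcile(received_dies: int, packaged: int, scrapped: int,
              tolerance: int = 0) -> bool:
    """Flag possible theft when a lot's inputs and outputs don't balance.
    Field names are hypothetical; tolerance covers legitimate handling loss."""
    return abs(received_dies - (packaged + scrapped)) <= tolerance

# A lot where every die is accounted for passes...
assert reconcile(10_000, 9_950, 50)
# ...while 50 unaccounted-for dies trigger an investigation.
assert not reconcile(10_000, 9_900, 50)
```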
And, as with fabrication, we can prevent stolen open devices from being passed off as genuine by programming cryptographic credentials at package test. For example, all EFR Vault-High products are provisioned with cryptographic credentials for this purpose, and CPMS is offered to provision credentials on devices without the Vault-High feature set.
The most obvious example of device analysis at this stage would be an attacker stealing a device before the secure engine (SE) was programmed and locked and using that access to understand how the mailbox mechanism and SE hardware function. In theory, that access could allow the attacker to identify a weakness that could then be turned into an exploit against a locked SE.
Device analysis requires obtaining only a small number of devices from the assembly site. While limiting access to open samples is a desirable layer of defense, it should be assumed that an attacker will gain access to open samples at some point. ICs should be designed in such a way that this does not create unacceptable risk or undermine the security of the system.
Silicon Labs takes several steps to mitigate the risk associated with open samples, including audits of assembly contractors’ processes and procedures, tracking any open devices used internally for development, and destruction of open samples when no longer needed. Additionally, products are designed to remain secure even against attackers with open samples and full access to the design. Finally, both internal and third-party penetration testing is performed by individuals with open samples, full knowledge of the design, and a high degree of expertise.
Today, the risk presented by modifying the package with altered or additional components is fairly small. There are several reasons why this is not a high-risk attack vector. First, assembly is far enough removed from end systems that targeting a particular piece of end equipment, such as a door lock, is difficult. In addition, space constraints make it difficult to hide any additions to an IC. Finally, a sampling test can detect any large-scale attack by x-raying a few units to identify unexpected components.
Package test

There are many general approaches to making exploits of the package test stage more difficult, including restricting access to test sites and logging that access. For a start, normal security practices for networks and PCs should be observed. For example, test systems should not have direct connections to the Internet and should not use shared login credentials. Vendors should conduct periodic reviews of these processes and systems to ensure they are not changed or exploited. These simple actions can make it much more difficult for an attacker to gain access to a test system.
Figure 3 At package test sites, it’s critical to have trusted machines that operate outside the influence of other vendors. Source: Silicon Labs
Silicon Labs deploys several techniques to provide enhanced security at its testing sites. For instance, the company consigns security-hardened testers that are not shared with other vendors. In addition, Silicon Labs provides test sites with a trusted machine that operates outside their influence and can be used to monitor and support the testers. This machine resides in the server room, has no local interfaces, and contains hardware designed to withstand physical attack.
Malicious code injection
The most obvious method of attack at this stage is to inject malicious code that alters the device’s behavior. Code provided by the silicon vendor at this step, such as SE firmware, will not be overwritten by the board test; so if it can be altered, compromised devices can be shipped.
This risk can be eliminated by using secure boot with a public key in ROM, which ensures that only correctly signed code runs. If an attacker attempts to modify the programmed code, it will no longer be properly signed, and the device will cease to function. Since the key used for signing is stored in a hardware security module (HSM), tightly controlled, and not available in any production environment, it is extremely unlikely that an attacker will be able to generate a proper signature for a manipulated image.
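The secure boot check can be sketched as follows. Real secure boot verifies an asymmetric signature (for example, ECDSA) against a public key in ROM, so new images can be signed without changing the ROM; this simplified stand-in pins the digest of the authorized image instead, which captures the same accept/reject behavior in a self-contained example.

```python
import hashlib

# Simplified stand-in for secure boot: the SHA-256 digest of the authorized
# image is "burned into ROM". A real implementation stores a public key in
# ROM and verifies a signature over the image instead.
authorized_image = b"\x00firmware v1.0 image bytes"
ROM_PINNED_DIGEST = hashlib.sha256(authorized_image).digest()

def secure_boot(flash_image: bytes) -> bool:
    """Boot ROM refuses to run any image that fails verification."""
    return hashlib.sha256(flash_image).digest() == ROM_PINNED_DIGEST

# The correctly provisioned image boots...
assert secure_boot(authorized_image)
# ...while a manipulated image is rejected and the device will not run it.
assert not secure_boot(authorized_image.replace(b"v1.0", b"v1.1"))
```

The signature-based version has the same structure; only the verification primitive changes, and the attacker's problem in both cases is producing an image that passes a check they cannot forge.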
Due to testing requirements, secure boot is enabled after firmware is programmed. This production flow makes it possible, though complex, for an attacker to compromise a package test such that it programs a malicious image and leaves secure boot disabled.
To eliminate this threat entirely, ICs should allow OEMs to verify the device’s state outside the influence of any programmed code. For example, Silicon Labs devices have a hardware register that indicates if the SE subsystem is locked and allows an OEM application or test program to verify that the device is configured correctly. With this verification in place, the OEM can detect any package test alterations or malicious code, and discard these altered parts.
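A board-test check of that kind reduces to reading the hardware status register and requiring the expected configuration bits, rather than trusting anything the programmed firmware reports. The register layout and bit names below are hypothetical, purely to illustrate the pattern.

```python
# Hypothetical status-register layout: bit 0 = SE locked,
# bit 1 = secure boot enabled. Real registers are device-specific.
SE_LOCKED      = 1 << 0
SECURE_BOOT_EN = 1 << 1

def device_state_ok(status_reg: int) -> bool:
    """OEM board-test check: require both the SE-lock and secure-boot bits,
    as read from hardware, not as reported by (possibly malicious) firmware."""
    required = SE_LOCKED | SECURE_BOOT_EN
    return (status_reg & required) == required

# A correctly configured part passes...
assert device_state_ok(0b11)
# ...while a part a compromised package test left with secure boot
# disabled, or with the SE unlocked, is rejected and discarded.
assert not device_state_ok(0b01)
assert not device_state_ok(0b10)
```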
Extraction of confidential information
If any confidential information is programmed during package test, an attacker may seek to gain access to it by compromising the test system. For standard embedded products, keys associated with credentials are the only confidential information present. For devices that generate those keys onboard, extraction is not possible; but if the test system injects keys, then an attacker who compromises the tester can observe those values as they are programmed.
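The distinction between injected and on-device-generated keys can be made concrete with a toy model: a part that creates its secret internally and only ever exposes a derived public identifier, so nothing secret crosses the tester interface. All names here are illustrative.

```python
import hashlib
import secrets

class Device:
    """Toy model of a part that generates its key on-chip. The secret is
    created inside the device and never exposed; the test system records
    only a derived public identifier."""
    def __init__(self) -> None:
        self._secret = secrets.token_bytes(32)   # never leaves the device

    def public_id(self) -> str:
        # Derived identifier the tester may log; it does not reveal the secret.
        return hashlib.sha256(b"device-id" + self._secret).hexdigest()

d = Device()
record = d.public_id()          # what a compromised tester could observe
# The identifier is stable and usable for later authentication records...
assert record == d.public_id()
# ...and two devices never share one, since each generates its own secret.
assert record != Device().public_id()
```

With key injection, by contrast, the tester necessarily handles the raw key material, which is exactly the exposure this design avoids.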
Devices customized with confidential, OEM-specific information are particularly at risk for this type of attack. However, chip vendors that follow the recommendations in this article series can provide a level of security that meets or exceeds those found in most board test facilities. It’s always recommended that OEMs collaborate with silicon vendors to determine the best solution for device programming.
Supply chain security requires a layered approach
Security is a system-level problem that requires vendors and OEMs to work together as trusted partners when developing connected products. As we move forward in building the Internet of Things (IoT), it’s critically important for OEMs to evaluate how successfully their chip vendors handle security in their manufacturing lifecycle. When deciding which vendors to use, OEMs should consider the level of transparency and thoroughness a vendor provides, along with the cost and performance of their ICs.
Talk to your vendors today about their manufacturing security and how they can help ensure the security of your end-products.
This article was originally published on EDN.
Joshua Norem is a senior systems engineer at Silicon Labs.