Here are the common cybersecurity oversights that could compromise your critical applications and data.
It’s an unfortunate truth of embedded systems: once deployed and out in the wild, they are never 100% safe, especially as the world becomes more connected. This reality is compounded by the historically lax approach to security engineering applied to these systems. Most development efforts focus on the device-specific software and often ignore the OS and lower-level components.
With billions of embedded systems in use globally, and more of them connected every day, attackers have a huge incentive to devise new and insidious ways to extract sensitive data or repurpose fielded devices for personal gain.
Additionally, software both for the device itself and specific functionality will typically need updates over time to combat new security threats. Criminal hackers are constantly developing new recipes for devastating attacks, such as the attack on a water treatment plant in Florida earlier this year.
Let’s examine 10 fatal security mistakes that could be compromising embedded systems within our business, financial, and critical infrastructure.
1. Leaving Your Sensitive Data and Applications in the Clear
According to the MITRE Common Weakness Enumeration list, criminals can read, extract, and exploit data and applications you leave in cleartext. Encrypting the data and applications is not sufficient; you also have to worry about where and how encryption keys are stored and what algorithms are used. It’s also an error to assume that SSL/TLS alone is enough: it protects data in transit, but the same data is often stored locally on the edge device and in the cloud platform.
It’s easy for cybercriminals to grab data in the clear and sell it on the dark web or post it on public text storage sites.
Furthermore, criminals can readily reverse engineer and maliciously modify cleartext applications. Criminal hackers do it to expose secret, sensitive algorithms or cause your code to execute in unintended ways.
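One part of the fix is making tampering with locally stored data detectable. The sketch below uses an HMAC-SHA256 tag for integrity; it is illustrative only, and the `seal`/`unseal` names are hypothetical. Note the hedges in the comments: HMAC provides integrity, not confidentiality (data still needs encryption), and in a real device the key should come from a hardware-backed key store, not from application code.

```python
import hashlib
import hmac

TAG_LEN = hashlib.sha256().digest_size  # 32 bytes

def seal(data: bytes, key: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so tampering with stored data is detectable.

    Integrity only: pair this with encryption for confidentiality, and load
    the key from a hardware-backed store (TPM, secure element), not source code.
    """
    return data + hmac.new(key, data, hashlib.sha256).digest()

def unseal(blob: bytes, key: bytes) -> bytes:
    """Verify the tag before trusting stored data; raise on any mismatch."""
    data, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
    expected = hmac.new(key, data, hashlib.sha256).digest()
    # compare_digest avoids leaking match position through timing
    if not hmac.compare_digest(tag, expected):
        raise ValueError("stored data failed integrity check")
    return data
```

The same pattern applies to configuration files, cached credentials, and application binaries verified at load time.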
2. Starting Your System Without Secure, Authenticated Boot
As covered by ZDNet, a software engineer uncovered a bootloader vulnerability in LG Android smartphones leaving the devices susceptible to a cold boot attack.
Cybercriminals can root a device that is not booted with a secure boot process. In addition, they can alter your bootloader, OS, UEFI BIOS, and hardware/software configurations or substitute them with malicious versions.
What’s worse, some of those malicious modifications can persist even through a complete re-install of your host operating system.
Without an authenticated, secure boot implementation rooted in hardware, you can’t protect your boot sequence against attacks that tamper with bootloaders and later compromise the cyber resiliency of your system. A lack of secure boot also enables a wide variety of attacks and device repurposing.
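Conceptually, each boot stage measures the next stage before handing over control. The sketch below stands in for that check with a digest comparison; in a real chain of trust the trusted value is a public key fused into hardware and the check is a signature verification, not a bare hash. All names here are hypothetical.

```python
import hashlib
import hmac

def verify_next_stage(image: bytes, trusted_digest: bytes) -> bool:
    """Refuse to hand control to an image whose digest doesn't match the
    trusted value.

    Sketch only: real secure boot verifies a cryptographic signature against
    a key anchored in hardware (e.g. OTP fuses); a raw SHA-256 comparison
    stands in for that here.
    """
    measured = hashlib.sha256(image).digest()
    return hmac.compare_digest(measured, trusted_digest)

def boot_or_halt(image: bytes, trusted_digest: bytes) -> str:
    """Each stage validates the next; any mismatch halts the boot."""
    if not verify_next_stage(image, trusted_digest):
        return "halt"   # never execute unverified code
    return "boot"
```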
3. Letting Unauthorized Software Access Unauthorized Components
Criminal hackers can exploit vulnerabilities or implicit trust in one component to leak critical information such as memory contents and addresses, thereby enabling a wide variety of second order attacks.
Without restricting access to only those components necessary to get the job done, you leave unintended openings that allow the attacker to pivot from one component to another. For example, two pieces of software that share a hard drive or memory component could communicate through that hardware (possibly by leveraging a variety of side channel attacks), leveraging a vulnerability in one to access another.
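A small, concrete instance of restricting component access is making sure shared files aren’t readable by anything but their owner. This sketch (names are illustrative) creates a file with owner-only permissions and refuses to reuse a pre-existing file, which blocks another component from pre-creating or symlinking the path.

```python
import os

def create_private_file(path: str) -> int:
    """Create a file only the owning component can read or write.

    0o600 denies group/other access; O_EXCL fails if the file already
    exists, defeating symlink and pre-creation tricks by other software
    sharing the same filesystem.
    """
    return os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o600)
```

The same least-access principle extends to shared memory, IPC endpoints, and device nodes: grant each component only the handles it needs.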
4. Ignoring or Misconfiguring Containerization or Isolation Mechanisms
According to NIST CVE-2021-21284, a vulnerability in the Docker Engine “--userns-remap” feature lets attackers escalate privileges and write arbitrary files as the root user. In this case, the flawed feature broke container isolation. It’s also not uncommon for containers to run as privileged containers, providing unfettered access to the host system, its files, and other containers.
Misconfiguring software containers or ignoring software isolation could allow a cybercriminal to escalate privileges and gain unauthorized root-level access (complete control) over the system. So called container and/or VM breakouts enable an attacker to introspect and modify the contents of other containers or guests on the system and potentially interact with cloud-based services in unintended ways.
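Many of these misconfigurations can be caught by auditing container settings before deployment. The sketch below checks a configuration dictionary whose field names are modeled on the Docker Engine API’s HostConfig object (an assumption worth verifying against your engine version); the function name and findings text are hypothetical.

```python
def audit_host_config(host_config: dict) -> list:
    """Flag container settings that weaken or break isolation."""
    findings = []
    if host_config.get("Privileged"):
        # a privileged container has nearly unrestricted access to the host
        findings.append("container runs privileged")
    if "SYS_ADMIN" in (host_config.get("CapAdd") or []):
        # CAP_SYS_ADMIN is close to root and enables many breakout paths
        findings.append("CAP_SYS_ADMIN granted")
    if host_config.get("PidMode") == "host":
        # sharing the host PID namespace exposes host processes
        findings.append("shares host PID namespace")
    return findings
```

Running a check like this in the CI pipeline turns an isolation review into an automated gate rather than a manual afterthought.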
5. Leaving Too Much Attack Surface
The more you bloat software with excessive interfaces and features, the more you increase your attack surface: the bugs, vulnerabilities, and holes an attacker can use to exploit your system. Similarly, the more libraries you include in your OS image or application packages, the larger your attack surface grows and the greater the burden of updating, patching, and addressing security vulnerabilities. Criminal hackers need only scan your systems to know what to attack.
By taking a minimalist approach to software development, adding only the functionality required to achieve the software’s mission, you make it more difficult for cyber attackers to exploit your code for their own advantage.
6. Handing Out Unrestricted Privilege
When you give an application more access (discretionary access controls, system-level privileges, namespaces, etc.) than required, cybercriminals can exploit that access to unlock privileges and manipulate your software. Once attackers leverage excess privileges to gain administrative rights, they can move laterally across the network, gain access to the cloud infrastructure, and from there reach every connected device, enabling denial-of-service attacks, performance degradation, malware injection, and more.
There are simple yet relatively underused mechanisms for restricting access to various privileges. In the simplest of cases in a Linux environment, we can start with standard user/group access controls, add the ability to drop capabilities after use (e.g., so a service can bind a privileged port at startup and then relinquish that capability), and then move into the various other security options Linux provides.
While we’re focused specifically on unrestricted privileges on the edge device, the same concept applies to your entire DevOps pipeline, cloud infrastructure, and enterprise network.
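The classic privilege-dropping pattern looks like the sketch below: do the one thing that needs root (say, binding a port below 1024), then permanently give up root before handling any untrusted input. The setter functions are injectable parameters purely so the ordering can be exercised without root; in production the `os` defaults are used.

```python
import os

def drop_privileges(uid: int, gid: int,
                    setgroups=os.setgroups,
                    setgid=os.setgid,
                    setuid=os.setuid):
    """Relinquish root after privileged setup (e.g. binding a low port).

    Order matters: supplementary groups and the gid must be dropped before
    the uid, because once the process is no longer root it lacks permission
    to change its groups.
    """
    setgroups([])   # clear supplementary groups inherited from root
    setgid(gid)     # drop the primary group first...
    setuid(uid)     # ...then the user id; this step is irreversible
```

A service would call this immediately after its privileged setup, before touching network data or spawning workers.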
7. Assuming Trust and Allowing Unauthenticated Communications
In this ZDNet story, researchers shared a successful theoretical attack called Raccoon on the TLS v1.2 protocol, which compromises sensitive and otherwise protected authenticated communications. Of course, the industry still considers TLS v1.2 relatively secure. But the success of the attack makes a valid point: You must be proactive in checking and confirming secure protocols, only extending trust to authenticated users and systems, and only communicating with those users and systems using encrypted channels. There is, of course, another implicit assumption that our device itself can be trusted. If the device we’re communicating and exchanging data with is not trusted, then we have to address a variety of other concerns such as local data encryption, hardware-based key management, and secure boot.
Using default settings or vulnerable protocols welcomes unauthorized access and invites malicious traffic into your systems.
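In Python, for example, a few lines of `ssl` configuration separate an authenticated channel from an open door. This sketch builds a client context that verifies certificates and hostnames and refuses legacy protocol versions; it assumes Python 3.7+ for `ssl.TLSVersion`.

```python
import ssl

def make_client_context() -> ssl.SSLContext:
    """A TLS context that refuses unauthenticated or legacy connections."""
    ctx = ssl.create_default_context()  # cert + hostname checks on by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject SSLv3, TLS 1.0/1.1
    # restate the defaults explicitly so a refactor can't silently weaken them
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx
```

The mistakes this guards against are exactly the “default settings” failures above: contexts created with verification disabled, or protocol floors left at whatever the platform ships.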
8. Failure to Check Inputs
When a developer doesn’t check the inputs, an attacker can introduce malformed data into the system, causing downstream components to malfunction. Common attacks using malformed data include SQL Injections and Buffer Overflows.
Programmers should check inputs across all types of data, from web form submissions to RF capture, to make sure trusted users send expected data to their software. Expected data means both the format and the contents. Input validation treats all data that doesn’t originate from the application itself as untrusted, and it keeps attackers from slipping unexpected, malicious data into your software.
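The safest form of input validation is an allowlist: define exactly what a valid value looks like and reject everything else, rather than trying to enumerate bad inputs. A minimal sketch, with a hypothetical device-ID format chosen for illustration:

```python
import re

# Allowlist: 1-32 characters, alphanumerics plus '-' and '_' only.
# (The format itself is an example; define yours from the actual protocol.)
DEVICE_ID = re.compile(r"[A-Za-z0-9_-]{1,32}")

def parse_device_id(raw: str) -> str:
    """Reject anything that doesn't match the expected format before use."""
    if not DEVICE_ID.fullmatch(raw):
        raise ValueError("malformed device id")
    return raw
```

Because the whole string must match the allowlist, injection payloads, oversized buffers, and stray control characters are all rejected by the same check.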
9. Missed Opportunities in Secure Coding
Vulnerable coding practices let software flaws progress through the development process undetected and, even worse, many of those flaws become exploitable vulnerabilities in the field. If you don’t use secure development tools and techniques, cybercriminals will find these holes in your software and potentially leverage them to gain access to other networks and organizations.
But with secure coding workflows and automated testing, you can find and fix vulnerabilities early and often in the development pipeline. Just remember: Security and secure development processes are not point solutions and need to be practiced together.
10. Using Hope as a Security Strategy
It takes too long for limited security staff to look through event logs manually, searching for signs of attacks and breaches. Instead, you need to take advantage of automated tools that watch your systems continuously and check your records for historical evidence of incursions. Of course, this all assumes we’re logging the right things with sufficient detail to make a determination.
Modern technologies can ingest and audit entire system log collections. Behavioral tools can recognize suspicious activity inside your networks and software. You can even benefit from malware scans that use current threat signatures to identify attacks. By combining these efforts, you can paint a complete picture of your security status and update it around the clock.
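Even a simple automated sweep beats manual log review. The sketch below counts repeated authentication failures per source address; the log line format is modeled on OpenSSH’s “Failed password” messages (an assumption — match it to whatever your devices actually emit), and the threshold is arbitrary.

```python
import re
from collections import Counter

# Pattern modeled on OpenSSH sshd log lines; adjust to your log format.
FAILED = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")

def flag_brute_force(lines, threshold=5):
    """Return source IPs with `threshold` or more authentication failures."""
    hits = Counter()
    for line in lines:
        m = FAILED.search(line)
        if m:
            hits[m.group(1)] += 1
    return sorted(ip for ip, n in hits.items() if n >= threshold)
```

Run continuously against live logs, a check like this turns historical evidence of incursions into an alert instead of something a human might stumble on weeks later.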
This article was originally published on Embedded.
Michael Mehlberg is the Senior Security Director at Star Lab, Wind River’s Technology Protection and Cybersecurity Group. He has worked in the field of anti-tamper for nearly 20 years and has extensive experience in reverse-engineering, threat modeling, risk assessment, and system security technologies. Michael’s role at Star Lab focuses on sales, marketing, and business development for aerospace and defense in the Washington DC area. Michael holds a bachelor’s degree in computer science from Purdue University.