With You all the Way: From Code Quality to Total Security

 

Security is still a very hot topic; it seems that not a week goes by without news that yet another company has been compromised and data on millions of users leaked.

Part of the reason we see so many security issues is the way we treat security: it is typically regarded as an afterthought, something you bolt onto the device at the end of development. However, complex systems, and especially embedded systems, present a large attack surface in which an attacker can find a hole in the armor. If you study how most hackers try to break into systems, you quickly discover that the favorite tool in their arsenal is finding and exploiting flaws in the device’s software.

If software defects are the doors that hackers use, then we need to up our code quality game to address the issue. But how big a problem is it, and what can we do to fix it?

Code vulnerabilities are an easy target for hackers

Poor code quality is a widespread problem, and there is plenty of evidence that bad coding practices lead directly to vulnerabilities. While software engineering experts have been preaching this for years, perhaps the first time the public really became aware of it was in 2001, when the Code Red worm exploited a buffer overflow in Microsoft’s Internet Information Services (IIS). [1] Although the first documented buffer overflow attack dates back to 1988 and the Unix finger daemon, that attack was limited in its ability to affect the general population and therefore wasn’t headline news.

As Code Red caused massive Internet slowdowns and was all over the news, buffer overflow attacks suddenly proliferated; security researchers and hackers alike seemed to be finding these bugs in all kinds of systems, including embedded systems. The attack lets a hacker run any code they want on an affected system by targeting code that reads text or data into a fixed-length buffer. The attacker fills the buffer to its limit and then keeps writing, placing executable code beyond the end of the legitimate buffer space and overwriting adjacent memory, such as the function’s return address, so that the system ends up executing the attacker’s code, which in many cases lets them do anything they want. [2]
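
To make this concrete, here is a minimal sketch in C (the function and buffer names are hypothetical) of the kind of code that invites this attack:

    #include <stdio.h>
    #include <string.h>

    /* A classic stack-based buffer overflow: a fixed-size buffer is
     * filled from attacker-controlled input with no length check.
     * Input of 16 or more characters overwrites adjacent stack
     * memory, including the function's return address, which is how
     * an attacker redirects execution into injected code. */
    void handle_request(const char *input)
    {
        char buffer[16];
        strcpy(buffer, input);   /* no bounds check on input length */
        printf("Handling: %s\n", buffer);
    }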

This type of attack became so widespread because checking and enforcing buffer limits was not common coding practice; many coding standards, such as the Common Weakness Enumeration (CWE) from mitre.org, now recommend checking buffers for this type of vulnerability. [3] Sadly, it still isn’t common practice for developers to look for this problem while writing their code, and it often takes code analysis tools to surface these issues so that a developer realizes there is a problem and fixes it. A simple code quality improvement like this takes away one of the most common hacker approaches and thus greatly improves the security of the code. A good coding practice, therefore, is to check and enforce the length of every buffer in your code, as in the sketch below.
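
A hardened version of the hypothetical routine above might look like this; the key point is that the buffer limit is checked and enforced before any data is copied:

    #include <stdio.h>
    #include <string.h>

    int handle_request_checked(const char *input)
    {
        char buffer[16];

        /* Turn the buffer limit into an explicit, enforced check. */
        if (input == NULL || strlen(input) >= sizeof(buffer))
        {
            return -1;   /* reject missing or oversized input */
        }
        /* Bounded copy; buffer is always null-terminated. */
        snprintf(buffer, sizeof(buffer), "%s", input);
        printf("Handling: %s\n", buffer);
        return 0;
    }

With the check in place, oversized input is rejected instead of silently overwriting memory.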

Not just buffer overflows

However, the problem isn’t just buffer overflows: sloppy coding practices in general lead to countless security holes that hackers can use to compromise a system. A paper published by the Software Engineering Institute (SEI) puts it in very clear words:

“…Quality performance metrics establish a context for determining very high quality products and predicting safety and security outcomes. Many of the Common Weakness Enumerations (CWEs), such as the improper use of programming language constructs, buffer overflows, and failures to validate input values, can be associated with poor quality coding and development practices. Improving quality is a necessary condition for addressing some software security issues.” [4]

The paper goes on to show that security issues, since many of them are caused by buggy software, can be treated like ordinary coding defects, which means you can apply traditional quality assurance techniques to address at least some of them.

Since normal software quality assurance processes allow you to estimate the number of defects remaining in a system, can you do the same with security vulnerabilities? While the SEI stops short of confirming a mathematical relationship between code quality and security, they do state that one to five percent of software defects are security vulnerabilities, and their evidence shows that when security vulnerabilities were tracked, they could accurately estimate the level of code quality in the system. [4] This makes a strong case that code quality is a necessary (but not sufficient) condition for security, and it puts the lie to the notion that security can be treated as a bolt-on at the end of development. Rather, security has to be threaded through the DNA of your project: from design, to code, all the way to production.
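
To put rough numbers on that: under the SEI’s one-to-five-percent estimate, a system shipping with, say, 1,000 residual defects (an illustrative figure, not one from the paper) would be expected to contain somewhere between 10 and 50 security vulnerabilities.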

Coding standards are a great help

Coding standards like the Common Weakness Enumeration (CWE) from mitre.org address many of the most common security holes and point out additional areas of concern such as divide-by-zero, data injection, loop irregularities, null pointer exploits, and string parsing errors. MISRA C and MISRA C++ likewise promote safe and reliable coding practices that keep security vulnerabilities from creeping into your code. While these standards can catch many of the commonly exploited weaknesses, a developer has to think bigger while writing code: how can a hacker exploit what I just wrote? Where are the holes? Am I making assumptions about what the inputs will look like and how the outputs will be used? A good rule of thumb is that if you are making assumptions, those assumptions should be turned into code that verifies that what you’re expecting is actually what you’re getting, as in the sketch below. If you don’t do it, then a hacker will do it for you.
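
As a sketch of what turning assumptions into code can look like in C (the function name and payload limit here are hypothetical):

    #include <stddef.h>
    #include <stdint.h>

    #define MAX_PAYLOAD 256u   /* hypothetical protocol limit */

    /* Each check below encodes an assumption the code would
     * otherwise be making silently: that the pointer is valid and
     * that the length is within the expected range. */
    int process_payload(const uint8_t *data, size_t len)
    {
        if (data == NULL)
        {
            return -1;   /* assumption: caller passes valid memory */
        }
        if ((len == 0u) || (len > MAX_PAYLOAD))
        {
            return -1;   /* assumption: length is in range */
        }
        /* ... the payload is now safe to parse ... */
        return 0;
    }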

But what about open source software? The typical argument for using open source components in a design relies on “proven in use”: so many people use it, it must be good. The same SEI paper has something to say about this too:

“One of the benefits touted of Open Source, besides being free, has been the assumption that ‘having many sets of eyes on the source code means security problems can be spotted quickly and anyone can fix bugs; you're not reliant on a vendor’. However, the reality is that without a disciplined and consistent focus on defect removal, security bugs and other bugs will be in the code.” [4]

In other words, the SEI says that the “proven in use” argument means nothing, and it calls to mind the story about Anybody, Somebody, Nobody, and Everybody when it comes to applying quality assurance to open source code. Moreover, your own testing isn’t enough to prove out the code: the SEI notes that code quality standards like the CWE find issues that typically never get detected in standard testing and usually surface only when hackers exploit the vulnerability. [4] Proving that point, in May 2020 researchers from Purdue University demonstrated 26 vulnerabilities in open source USB stacks used in Linux, macOS, Windows, and FreeBSD. [5] When it comes to security, code quality is key, and all code matters.

Code analysis tools help you comply with standards

What can we do about code quality so that we can improve our application’s security? The easy answer is to use code analysis tools, which come in two basic flavors: static analysis, which examines only the source code of the application, and runtime (or dynamic) analysis, which instruments the running code to catch weaknesses such as null pointer dereferences and data injection.
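
As a contrived illustration (not output from any particular tool), here is the sort of defect a static analyzer can flag without ever running the program:

    #include <stdlib.h>

    int average(const int *values, int count)
    {
        int *copy = malloc((size_t)count * sizeof *copy);
        int sum = 0;

        for (int i = 0; i < count; i++)
        {
            copy[i] = values[i];  /* possible NULL dereference:
                                     malloc's result is unchecked */
            sum += copy[i];
        }
        free(copy);
        return sum / count;       /* divide-by-zero when count == 0 */
    }

A runtime analysis tool, by contrast, would catch these same defects only when the program is actually exercised with a failing allocation or a zero count.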

High-quality code analysis tools include checks for CWE, MISRA, and CERT C, the last being another coding standard designed to promote secure coding practices. Together, these three rulesets form a great combination of coding practices that promote security: they overlap in places, but each also brings unique rules that help ensure your code has a high degree of security. Using these standards also helps ensure the best possible code quality and may even uncover latent defects in your code.

High-quality code is secure code

You can’t have security unless you have code quality, and you can’t pass the code quality buck to someone else, because their bugs are likely to become your security nightmare. But there is hope: code analysis tools can help you quickly identify issues before they bite you. The road to security always passes through the gateway of code quality.

Article written by Shawn Prestridge, US FAE Team Manager at IAR Systems

References

[1] https://www.caida.org/research/security/code-red/

[2] https://malware.wikia.org/wiki/Buffer_overflow

[3] https://cwe.mitre.org/data/definitions/121.html

[4] https://resources.sei.cmu.edu/asset_files/TechnicalNote/2014_004_001_428597.pdf

[5] https://www.techradar.com/news/usb-systems-may-have-some-serious-security-flaws-especially-on-linux