Beyond SAST: A New Approach to Identifying and Testing Undiagnosed Cybersecurity Vulnerabilities

By: 
Andrew V. Jones

October 27, 2016

Static application security testing (SAST) is designed to analyze application source code, byte code and binaries for common weaknesses, including coding and design conditions that might lead to security vulnerabilities. SAST tools do not execute the code; instead, they try to understand what the code is doing "behind the scenes" to identify where errors lie. Unfortunately, this type of static analysis has long been plagued by false positives (cases where the tool reports a possible vulnerability that is not an actual vulnerability). There are two underlying problems. The first is inaccurate modeling of what might be happening, or of how the program performs a certain operation, such as calling an external library. The second is scalability: SAST tools may make assumptions or simplifications to keep the analysis tractable, and those decisions can lead to inaccuracies.
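
As a hypothetical illustration of the first problem, consider the following C sketch, where an external library routine (here called ext_normalize_id, an invented name) is documented to return a string of at most 15 characters. Because the analyzer cannot see that implementation, it must assume the worst and flags the copy as a potential buffer overflow, even though the call site is, by contract, safe:

    /*
     * Hypothetical SAST false positive: ext_normalize_id() is an external
     * library routine whose documentation guarantees a result of at most
     * 15 characters, but whose source is not available to the analyzer.
     */
    #include <string.h>

    /* Declared in a third-party header; implementation unavailable to the tool. */
    extern const char *ext_normalize_id(const char *raw);

    void store_id(const char *raw, char out[16])
    {
        const char *id = ext_normalize_id(raw);   /* bounded to 15 chars + NUL */
        strcpy(out, id);  /* flagged as a potential buffer overflow, because the
                             tool must assume id can be arbitrarily long */
    }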

When a SAST tool analyzes an application that interacts with external systems (that is, systems where the source code is not available), the data that flows through the application, from input to output, is impossible to trace fully. Consequently, it also becomes impossible to guarantee the integrity and security of that data. Tool vendors also tend to minimize false negatives at the expense of false positives: a real error might "drown" in a sea of false positives, but that is considered better than a genuine error being errantly suppressed. Both of these factors contribute to false positives in the results.

As a result, many SAST tools only help developers zero in on at-risk portions of the code so that flaws can be found more efficiently, rather than finding the actual security issues automatically. This can lead to time-consuming processes as well as incomplete analysis, both of which are detrimental in software development.

Cybersecurity vulnerabilities are problematic in any situation, but with the growing intersection of technology and the physical world, such as in Internet of Things (IoT) applications, safety becomes an issue when security is compromised. IoT and other network-connected devices do not use the traditional web stack (e.g., PHP, Apache, etc.), where security mitigations are commonly focused. As a result, serious issues may only be exploitable when the code runs on the physical device. This makes it hard to apply existing "web-style" penetration tools (such as tools targeting HTTP interfaces or SQL injection attacks) to such embedded devices, given that their firmware is typically developed in C or C++.
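
To make this concrete, the following hypothetical firmware-style C fragment (the function name and frame layout are invented for illustration) contains a stack-based buffer overflow (CWE-121) that is only reachable through device traffic such as a radio or serial frame, not through an HTTP interface, so web-oriented penetration tools never exercise it:

    /*
     * Hypothetical firmware-style parser: a length byte taken from an
     * incoming frame controls a copy into a fixed-size stack buffer.
     */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    void handle_frame(const uint8_t *frame, size_t frame_len)
    {
        uint8_t payload[32];

        if (frame_len < 2)
            return;

        uint8_t declared_len = frame[1];          /* attacker-controlled length  */
        memcpy(payload, frame + 2, declared_len); /* overflows when declared_len
                                                     exceeds sizeof(payload)     */

        /* ... process payload ... */
    }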

To address this, new dynamic unit testing methods are emerging that actually expose defects in software by generating a test case and confirming exploitability. Using MITRE's Common Weakness Enumeration (CWE) classification, the approach applies automated software testing methods to interrogate an application's source code and identify possible weaknesses. Once a potential CWE is found, a test exploiting the identified issue is generated and executed. After execution, the test tools analyze the execution trace and decide whether the potential CWE is a genuine threat. That issue can then be classified as a Common Vulnerabilities and Exposures (CVE) entry.
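
Continuing the hypothetical parser example above, a synthesized test for that candidate weakness might look like the following sketch: a concrete frame whose declared length exceeds the 32-byte buffer is constructed and replayed against the function under test. Run under a memory-error detector such as AddressSanitizer, the overflow is reported at the memcpy, confirming that the candidate CWE is a genuine issue:

    /*
     * Sketch of an automatically generated test for the parser above:
     * link against the translation unit containing handle_frame() and
     * run with a memory-error detector enabled.
     */
    #include <stdint.h>
    #include <string.h>

    void handle_frame(const uint8_t *frame, size_t frame_len);

    int main(void)
    {
        uint8_t frame[2 + 64];

        frame[0] = 0x01;   /* hypothetical message type                    */
        frame[1] = 64;     /* declared length > sizeof(payload), i.e. > 32 */
        memset(frame + 2, 'A', 64);

        handle_frame(frame, sizeof(frame));  /* triggers the stack overflow */
        return 0;
    }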

The approach is based on the "synthesis" of executions leading to specific software issues, i.e., the automatic construction of a dynamic test exploiting a given vulnerability, allowing for the identification and automatic testing of undiagnosed cybersecurity vulnerabilities. The construction of this exploit is then paired with its dynamic execution to determine whether the vulnerability actually manifests. Like a static analyzer, this type of dynamic testing method performs an up-front analysis of the code to detect potential issues, and that candidate list may itself contain false positives. However, once a potential issue has been identified, the method then attempts "automatic exploit construction" for it.
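
A minimal sketch of what that dynamic confirmation step could look like, assuming a POSIX environment and a generated test binary (the file name below is hypothetical) that crashes or exits non-zero when the weakness is triggered, is shown below; candidates whose tests run cleanly are simply discarded rather than reported:

    /*
     * Minimal confirmation harness (POSIX): run a generated test in a child
     * process and treat the candidate weakness as confirmed only if the
     * execution actually fails (killed by a signal, or a non-zero exit such
     * as a sanitizer or assertion failure).
     */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Returns 1 if the generated test binary demonstrates the issue. */
    int confirm_exploit(const char *test_binary)
    {
        pid_t pid = fork();
        if (pid < 0)
            return 0;

        if (pid == 0) {                       /* child: run the generated test */
            execl(test_binary, test_binary, (char *)NULL);
            _exit(127);                       /* exec failed */
        }

        int status = 0;
        waitpid(pid, &status, 0);

        if (WIFSIGNALED(status))              /* crashed: memory error observed */
            return 1;
        if (WIFEXITED(status) && WEXITSTATUS(status) != 0
                              && WEXITSTATUS(status) != 127)
            return 1;                         /* sanitizer/assertion failure */
        return 0;                             /* ran cleanly: not confirmed */
    }

    int main(void)
    {
        /* Hypothetical generated test produced for a candidate CWE. */
        const char *candidate = "./generated_test_cwe121";
        printf("%s: %s\n", candidate,
               confirm_exploit(candidate) ? "exploitable" : "not confirmed");
        return 0;
    }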

Unlike static analysis-based approaches, this type of software security testing only flags an issue if it is genuinely exploitable, mitigating the problem of false positives. The generated test artifacts can also be re-executed in the future to demonstrate that a reported issue has been resolved after the software is redesigned.

Tags: 
Security