Apr 5, 2021 5 min read

Dealing with False Positives

“It’s better to miss some findings than to destroy trust in our tooling by flooding our engineers with useless noise.”

I saw this recently in a blog post outlining how a company approached their application security. To me, this was slightly concerning to read. The purpose of this blog post is to share my thoughts on the above statement, and why I personally view it as a dangerous tactic.

Disclaimer: This is my personal opinion based on past experience. It doesn't mean that I'm correct.

False Positives

False positives are the bane of any team dealing with scan or tooling results. They take time and effort to process, and the more false positives there are, the greater the frustration. In an application security setting it is often the development teams who are left to deal with the results of these scans. And this is my first point: if development teams are left to triage these issues, then I personally feel the process is not right to begin with. Reviewing security results requires some form of security knowledge, and expecting someone with limited or no security knowledge to do it is certainly going to lead to frustration.

Ideally you should have a team with the appropriate knowledge reviewing and triaging these results before they reach the development teams. This helps to ensure that development teams are dealing with known issues and know the priority of those issues. Have a formal (and documented) triage process, and make that process as open and transparent as possible. Security teams need to work with the development teams, and vice versa.

Initial Flood

All too often when onboarding a tool you will see an initial wave of findings, often including false positives. It is important to remember that this is typically just part of getting up and running. If things are done correctly (per the points below), you should NOT face a constant barrage of incorrect findings; after working through that initial set of results, you should hopefully have very few left to review. Keep this in mind, since it is easy to get overwhelmed when onboarding a new tool.

Default Configuration

Another common mistake is to simply run a tool with its out-of-the-box configuration. Only a very few tools will give you much success with this approach. You need to invest the time and effort to tailor these tools to your environment. The more effort you put in, the better the results are likely to be. By tuning the tool to your environment, the results it produces should be far more relevant and ultimately contain fewer false positives.
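As a rough illustration (assuming Bandit as the SAST tool, with a hypothetical SEC-1234 triage ticket), a tuning decision can be recorded right next to the code it affects rather than being silently dropped:

import sqlite3

# Tables the reporting endpoint is allowed to query; anything else is rejected.
ALLOWED_TABLES = {"reports", "audit_log"}

def get_rows(conn: sqlite3.Connection, table: str, row_id: int):
    if table not in ALLOWED_TABLES:
        raise ValueError(f"unexpected table: {table}")
    # Bandit flags string-built SQL, but the table name is constrained by the
    # allow-list above and the row id is parameterised, so this finding was
    # triaged as a false positive. The suppression comment keeps that decision
    # visible and points at the (hypothetical) triage record.
    query = f"SELECT * FROM {table} WHERE id = ?"  # nosec -- triaged as FP, see SEC-1234
    return conn.execute(query, (row_id,)).fetchall()

The same idea applies to a tool's rule configuration: every rule you disable or downgrade should carry a documented reason, so the tuning itself remains reviewable.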

Only Takes One

It can be that single issue the tool did not report which leads to an incident. At the end of the day, security is all about managing risk. You might decide not to address an issue because you either deem it low risk or have appropriate mitigations in place. But if you are knowingly ignoring some issues, you have no means of tracking the risk associated with those issues. You can't fix or protect against what you don't know about! This, to me, is one of the most dangerous aspects of the statement above. It can take just a single issue or vulnerability for your organisation to suffer a security incident. Knowingly choosing to ignore some issues is a risk in itself.

Take this scenario as an example. You decide that log injection flaws are simply a low-risk vulnerability that is creating too much noise, so instead of documenting instances of it, you ignore it completely. A few months later you deploy a log ingestion solution. The problem is that this solution has a remote code execution vulnerability because it does not sufficiently validate or escape its input, but this is deemed lower risk since the tool is only accessible internally. The trouble is that, because you have a log injection vulnerability on an external-facing application, an attacker now has a path into that log ingestion tool. Worse yet, you have no idea that this risk even exists!
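As a rough sketch of the first link in that chain (a hypothetical Flask endpoint, not any particular application), log injection boils down to writing attacker-controlled input straight into a log:

import logging
from flask import Flask, request

app = Flask(__name__)
logger = logging.getLogger("auth")

@app.route("/login", methods=["POST"])
def login():
    username = request.form.get("username", "")

    # Vulnerable pattern: the attacker controls 'username', so embedded newlines
    # or control characters land verbatim in the log, where a downstream log
    # ingestion tool may treat them as trusted input.
    logger.info("Failed login for user: %s", username)

    # Safer pattern: neutralise control characters before logging.
    safe_username = username.replace("\r", "\\r").replace("\n", "\\n")
    logger.info("Failed login for user: %s", safe_username)
    return "denied", 401

On its own this looks trivial, which is exactly why it tends to be dismissed as noise.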

Vulnerabilities are often chained together, and a minor issue can suddenly become a vital part of that chain. All too often vulnerabilities are viewed in isolation, and unless you are documenting all known issues you will not be able to view them in the context of other vulnerabilities.

Reliance on Tools

As development pipelines move faster and faster, we are going to have to rely on tools more and more. Expecting humans to pick up on these issues is going to become increasingly difficult, which is why it is so important to have the process and configuration of a tool in place before you start using it and it starts to create unnecessary noise.

Additionally, you might find that you are in fact using the wrong tool for the job, or that the tool is simply not up to standard. This is why it is incredibly important to review and test tools before you decide to use them. All too often teams see some shiny, blinky tool or box and jump in headfirst. First spend time creating a shortlist of tools, then review each one on that shortlist. This should ideally involve running a trial or proof of concept to see how well the results fit your organisation.

Centralisation

Another common problem I see is that teams view and treat the results of each tool in isolation. There are tools out there, such as DefectDojo (which is completely free), that allow you to correlate results from multiple tools in a single location. This helps from numerous angles, including:

  • Central view of risk
  • Central management and control of reported issues
  • More efficient to manage and control
  • One tool to look at

Trying to manage and control issues in each tool individually is likely to lead to frustration and an increased workload. Having all results centrally managed means you can have a single team helping to triage those results.
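As a rough sketch (assuming DefectDojo's v2 REST API, with hypothetical URL, API key and engagement values — check the project's documentation for the exact fields), pushing a scanner report into a central location can be as simple as:

import requests

DOJO_URL = "https://defectdojo.example.com"   # hypothetical instance
API_KEY = "your-api-key"                      # hypothetical token
ENGAGEMENT_ID = 42                            # hypothetical engagement

def import_scan(report_path: str) -> None:
    # Upload a scanner report so findings are triaged centrally rather than per tool.
    with open(report_path, "rb") as report:
        response = requests.post(
            f"{DOJO_URL}/api/v2/import-scan/",
            headers={"Authorization": f"Token {API_KEY}"},
            data={
                "scan_type": "ZAP Scan",   # must match a scan type DefectDojo recognises
                "engagement": ENGAGEMENT_ID,
                "active": True,
                "verified": False,         # leave verification to the triage team
            },
            files={"file": report},
            timeout=60,
        )
    response.raise_for_status()

if __name__ == "__main__":
    import_scan("zap-report.xml")

Running an import like this from the pipeline after each scan keeps every tool feeding the same central view of risk.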

Zero False Positives

I am always dubious when I hear a tool claiming this. Personally, I accept that tools will likely always report false positives; it's one of the sacrifices we make when using a tool instead of a human, since tools lack the ability to be aware of things such as context. I'd much rather dismiss something and know about it than not know about it at all. I've seen such tools in practice before, and as a security professional I start questioning the results of the tool. This is not a good place to be! I should have confidence that the tool is doing what I implemented it for. What is the point of spending all the effort setting up, configuring and running a tool if it is not going to do its intended job? That, to me, seems counterintuitive.

Conclusion

While I totally agree that development teams should not face a barrage of false positives, I disagree that this should come at the cost of knowingly missing some findings. If you do find yourself facing a barrage of false positives, I would question your process and policy around the use of the tool. Perhaps you are using the wrong tool for the job? Perhaps you have not configured the tool correctly? Do you have team members with the appropriate knowledge triaging the findings reported by the tool? Ultimately, choosing to ignore some issues is, at least in my opinion, asking for trouble. I feel that EVERY valid issue should be documented somewhere so that it can be tracked. That way it is known about and can be handled appropriately, both by the business and by the teams responsible for the security of the system(s) in which the issue was found.

Sean Wright
Experienced application security engineer with an origin as a software developer. Primarily focused on web-based application security with a special interest in TLS and supply chain related subjects.