Gee Whiz Kids!

23 Mar 2026

Now that Google has completed the Wiz! acquisition, I would love to see Google lead the charge to reduce false positives.

The value of a security scanning product can’t be “look how many things we flagged”. It needs to be “look at all the relevant things we flagged”. There are enough real issues without unnecessary false positives.

We call these inert alerts “files on disk” alerts.

At Cohesive Networks we regularly get reports of vulnerabilities in “files on disk”, which is a source of unnecessary customer consternation and of work on both our parts. (Note: we use typical scanners as well, so I am not saying don’t use them, but some cleanup of their approach would be welcome.)

Here are some of the flavors of “file on disk” alerts we get:

A) Name match.
On your disk you have a file whose name contains a “word” from a vulnerability signature. Regardless of whether it is the actual vulnerable file/code, finding that word in the name of some other (non-vulnerable) file gets it flagged.

B) Guilt by association match.
There is an exploit in library FOO, which is often used with library BAR, so BAR gets flagged even when FOO is absent.

C) No such runtime.
A package you install ships its functionality in multiple language versions, perhaps including a vulnerable Java library; however, no Java runtime is present to execute the vulnerable file.

D) Naive Linux distribution package name compares.
For example, OpenSSL from Ubuntu gets patched against the latest exploits, but the package name might still include “openssl” and “1.1”. The way Ubuntu patches, the full .deb file name does tell you whether the version is safe or not, but not via a simple word scan. (Yes, people should be on OpenSSL 3.x, but over the life of 1.x this was a great example.)

E) A commit to the Linux kernel.
Since the Linux kernel team became its own CVE Numbering Authority, the number of kernel CVEs has gone up by about an order of magnitude. This is a philosophical issue more than a security issue: most are not practically exploitable in a finished product containing the kernel. When one of the full operating system distributions points out an issue, that is more likely the time for action.

F) An actual exploit that MIGHT be able to be exploited.
There is an exploit but there is a very specific usage profile for it to be taken advantage of. In this case you do need vendor attestation that the vulnerable usage profile is not in place, but potentially not a candidate for rushed patches or upgrades.

G) An actual, broadly exploitable RCE against an industry common component.
A real emergency like this can be obscured by incidents of types A-E being over-reported. Items in this category obviously need immediate action. Let’s not obscure them with a thicket of less relevant items.
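Type D above is mechanical enough to sketch. The sketch below is a deliberate simplification, and the version strings are illustrative; a real tool should defer to dpkg’s full version-comparison algorithm (for example via `dpkg --compare-versions`). The point is that the Ubuntu revision suffix, not the presence of the words “openssl” and “1.1”, is what distinguishes a patched build from a vulnerable one:

```python
import re

def ubuntu_revision(version):
    # Pull the Ubuntu packaging revision out of a .deb version string,
    # e.g. "1.1.1f-1ubuntu2.20" -> (2, 20). Returns None if absent.
    m = re.search(r"ubuntu(\d+)(?:\.(\d+))?$", version)
    if not m:
        return None
    return (int(m.group(1)), int(m.group(2) or 0))

def backport_is_fixed(installed, first_fixed):
    # Simplified check: require the same upstream base version, then
    # compare only the Ubuntu revision numbers. Anything this can't
    # decide needs a real dpkg version compare.
    upstream = lambda v: v.split("-", 1)[0]
    if upstream(installed) != upstream(first_fixed):
        return None  # different upstream: out of scope for this sketch
    a, b = ubuntu_revision(installed), ubuntu_revision(first_fixed)
    if a is None or b is None:
        return None
    return a >= b

# A word scan sees "openssl" and "1.1" in both strings and flags both;
# the revision suffix is what actually tells patched from vulnerable.
print(backport_is_fixed("1.1.1f-1ubuntu2.20", "1.1.1f-1ubuntu2.19"))  # True
print(backport_is_fixed("1.1.1f-1ubuntu2.10", "1.1.1f-1ubuntu2.19"))  # False
```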


When a security team receives these reports with a high number of what I am characterizing as false positives, what are they to do? Many of them want immediate remediation regardless; after all, how are they to know what to believe? A lot of time and money can be spent on a perceived risk in mere files on disk.

Here is the problem with that interpretation.

If mere “files on disk” are a compromise, then all is lost: everything, industry wide, is in a state of “compromised”.

All of the package management systems and their “repos” have the vulnerable versions on their disks. These files are not being used. They are “inert”. Just like many of the ones that are showing up in your security scans. In this case they are the files on the disk volumes attached to the servers supporting package installations and patching.

If we are going to claim resources need to be allocated to types A-E above, then everything is compromised. Their presence in the apt-style repos, yum-style repos, and WinGet repos would mean that all of those servers could have been exploited due to the inert file content on their disks. Fun fact: they have not been.

I don’t mind getting reports of type “F”. It is incumbent on us to tell our customers whether there is a risk, and if so, to provide a patch or upgrade.

I do mind getting a report of numerous vulnerabilities because I have the word “containerd” on my disk and there are CVEs against the Go implementation, which is not installed. Or I get flagged for having a Python library with the word “shlex” in it because the Rust crate of the same name has a CVE.

What is disappointing is that the major scanning products are not new products. They have time in grade, and the expertise, to minimize these false positives. Instead they push cognitive load onto security teams and their companies’ vendors, diminishing the resources available for actual risks in an ever more dangerous online world.

PS. Regardless of “real” or not, to deal with these false positives efficiently it really helps to use an SBOM system for a quick compare of the flagged item versus what is actually in your product.
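To make that compare concrete, here is a minimal sketch assuming a CycloneDX-style SBOM; the component names and versions are made up for illustration, and in practice you would load the JSON your build pipeline emits rather than inlining it:

```python
# A minimal CycloneDX-style SBOM fragment (inlined for illustration).
SBOM = {
    "components": [
        {"name": "openssl", "version": "3.0.13"},
        {"name": "requests", "version": "2.31.0"},
    ]
}

def index_components(bom):
    # Index SBOM components by name -> set of shipped versions.
    index = {}
    for c in bom.get("components", []):
        index.setdefault(c["name"], set()).add(c.get("version", "?"))
    return index

def triage(flagged, index):
    # Classify scanner findings against what the SBOM says is shipped.
    results = {}
    for name, version in flagged:
        versions = index.get(name)
        if versions is None:
            results[(name, version)] = "not in product (likely file-on-disk noise)"
        elif version in versions:
            results[(name, version)] = "present at flagged version: investigate"
        else:
            results[(name, version)] = f"present only at {sorted(versions)}"
    return results

index = index_components(SBOM)
findings = [("openssl", "1.1.1f"), ("containerd", "1.6.0"), ("requests", "2.31.0")]
for finding, verdict in triage(findings, index).items():
    print(finding, "->", verdict)
```

The quick win is the first branch: anything the scanner flagged that is not in the SBOM at all can be answered immediately, without a code dive.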