What’s wrong with patch-based Vulnerability Management checks?

My last post about Guinea Pigs and Vulnerability Management products may seem unconvincing without some examples. So, let’s review one. It’s a common problem that exists among nearly all VM vendors; I will demonstrate it on Tenable Nessus.

If you perform vulnerability scans, you have most likely seen these pretty huge checks in your scan results, like “KB4462917: Windows 10 Version 1607 and Windows Server 2016 October 2018 Security Update“. This particular Nessus plugin detects 23 CVEs at once.

[Screenshot: Nessus plugin output for KB4462917 with its single “Risk Information” block]

And, as you can see, it has a formalized “Risk Information” block in the right column: only one CVSS score and vector, one CPE, one exploitability flag, one criticality level. Probably because of architectural limitations of the scanner. So, two very simple questions:

  • to which CVE (of these 23) does this formalized Risk Information block refer?
  • for which CVE (of these 23) is the exploit available?

Ok, maybe they show the CVSS for the most critical (by their logic) CVE. Maybe they somehow combine this parameter from the data for the different CVEs. But in most cases this will be inaccurate. The risk information for each of these 23 vulnerabilities should be presented independently.
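The information loss can be shown with a toy example (the CVE IDs, scores and impact types below are made up for illustration): once per-CVE data is collapsed into one plugin-level block, the aggregated fields no longer belong to any single vulnerability.

```python
# Hypothetical per-CVE data behind one plugin (all values illustrative).
cves = {
    "CVE-2018-0001": {"cvss": 9.8, "impact": "RCE", "exploit": False},
    "CVE-2018-0002": {"cvss": 6.5, "impact": "Information Disclosure", "exploit": True},
    "CVE-2018-0003": {"cvss": 7.5, "impact": "DoS", "exploit": False},
}

# Plugin-level aggregation: one CVSS score, one exploitability flag.
plugin_risk = {
    "cvss": max(v["cvss"] for v in cves.values()),
    "exploit_available": any(v["exploit"] for v in cves.values()),
}

print(plugin_risk)  # {'cvss': 9.8, 'exploit_available': True}
# The block now reads as "CVSS 9.8 + exploit available", although the
# exploit actually exists only for the 6.5 Information Disclosure issue.
```

The two aggregated fields come from two different CVEs, so the combination describes a vulnerability that does not exist.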

As you can see on the screenshot, one of these vulnerabilities is an RCE, another is an Information Disclosure. The Vulnerability Management solution tells us that there is an exploit. Is this exploit for the RCE or for the DoS? You should agree that this can be crucial for vulnerability prioritization. And more than this: in this example there are 7 different RCEs, in Internet Explorer, the MSXML parser, Windows Hyper-V, etc. All of them mean different attack scenarios. How can a Vulnerability Scanner show all this as one entity with one CVSS score and one exploitability flag? What can the user get from this? How can you search in all this?

It’s not an idle question, because Vulnerability Management vendors use this formalized data as a base for further analysis. So, when I see that some traditional VM vendor presents yet another SIEM-like functionality with searches, prioritization and dashboards, it doesn’t really impress me, because I know that the underlying data might be too trashy for actual decision making.

How do such plugins appear in VM vendors’ Knowledge Bases?

It’s not a secret that most of the vulnerabilities (I think it’s fair to say more than 80%) described in a VM vendor’s Knowledge Base come directly from the software vendors, in most cases in a somehow automated manner.

It looks like this:

  1. Some Software Vendor publishes a patch description, usually called a “security bulletin”, describing the vulnerabilities found in the software: what they are about and how they can be fixed. Examples of such bulletins for different vendors: Microsoft MS and KB, RedHat RHSA and CESA, Ubuntu USN, Cisco SA, etc. Tons of them. Read more in “Vulnerability Databases: Classification and Registry“.
  2. The VM vendor figures out how to check that the vulnerabilities described in the bulletin are fixed on a host. In most cases this means checking that some software version and/or patch(es) were installed, during an authenticated scan. For some systems it’s relatively easy; you can even do it by yourself, see “Vulnerability Assessment without Vulnerability Scanner“. For others it may be tricky because of cumulative patches and the weird secretive policies of software vendors. But anyway, if the vulnerabilities are NOT fixed on a host, the scanner shows the vulnerability description from the security bulletin as is.
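Step 2 often boils down to a simple membership or version test. A minimal sketch of such a patch check (the hotfix list here is illustrative), assuming the installed hotfixes were already collected during an authenticated scan, e.g. via `wmic qfe list` on Windows:

```python
def missing_patch(required_kb, installed_hotfixes):
    """Return True if the host lacks the patch, i.e. the check fires
    and ALL vulnerabilities from the bulletin are reported at once."""
    return required_kb not in installed_hotfixes

# Illustrative inventory data from an authenticated scan.
installed = {"KB4457131", "KB4462930"}

print(missing_patch("KB4462917", installed))  # True -> report all 23 CVEs
```

Note that the check is binary: one missing KB means the whole bundle of CVEs from the bulletin is reported, with no per-CVE granularity.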

This scheme is convenient for Software Vendors, because they think in terms of bugs and patches. It’s convenient for VM vendors as well, because they use the data as is, and, technically, they do everything right. You don’t have a patch? This means that you have vulnerabilities. Here is a list. Patch it.

It’s only inconvenient for the end users, because the data they get is not actionable. They will need to spend their own time figuring out which actual vulnerabilities are there, thinking about attack vectors and real exploitability. And they will have to use these barely useful reports to convince IT to do the patching.

How to make it better?

It’s necessary to go much deeper in analyzing vulnerability descriptions: address each vulnerability independently and consider how they can be combined by an attacker. For VM vendors this means that they should start to process descriptive data and store it differently. For example, in the case of Nessus it might be helpful to add more tags to NASL and describe each vulnerability as a complex structure. This will certainly require changes in all enterprise-level products as well, such as SecurityCenter or Tenable.io.
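One possible shape for such a complex structure (the field names are my own invention, not actual NASL tags, and the CVSS/exploit values are illustrative): each CVE in the plugin carries its own risk record instead of sharing one block.

```python
# Hypothetical per-CVE structure a plugin could export; field names and
# numeric values are illustrative, CVE IDs are from the October 2018 bulletin.
plugin = {
    "id": "KB4462917",
    "vulnerabilities": [
        {"cve": "CVE-2018-8491", "component": "Internet Explorer",
         "impact": "RCE", "cvss": 7.5, "exploit": False},
        {"cve": "CVE-2018-8494", "component": "MSXML",
         "impact": "RCE", "cvss": 8.8, "exploit": False},
    ],
}

# With per-CVE records, prioritization queries become possible, e.g.
# "show me all RCEs in this plugin":
rces = [v["cve"] for v in plugin["vulnerabilities"] if v["impact"] == "RCE"]
print(rces)
```

The point is not the exact schema, but that CVSS, impact type and exploitability live at the CVE level, so searches and dashboards built on top of this data stop mixing unrelated vulnerabilities.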

I write all this not to criticize any vendor, and especially not to criticize Tenable. BTW, Tenable became the best Vulnerability Management vendor of 2018 according to the poll in my Telegram Channel, and that seems quite fair to me. But there is a status quo: every major VM vendor on the market works with vulnerability data like this. And they won’t make massive changes until the customers signal that this is important.

What can we do right now?

Until then, it’s possible to do this additional processing with your own scripts, by analyzing the plugin descriptions from Nessus v2 reports and bringing in additional data from NVD and Vulners.
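A starting point for such a script, assuming a standard `.nessus` (v2) XML export: pull the CVE list out of each ReportItem, so that every CVE can then be enriched independently from NVD or Vulners. The sample fragment and the plugin ID below are synthetic, for demonstration only.

```python
import xml.etree.ElementTree as ET

def cves_per_plugin(nessus_xml):
    """Map pluginID -> list of CVE IDs from a Nessus v2 report string."""
    root = ET.fromstring(nessus_xml)
    result = {}
    for item in root.iter("ReportItem"):
        cves = [c.text for c in item.findall("cve")]
        if cves:
            result[item.get("pluginID")] = cves
    return result

# Minimal synthetic report fragment (real reports have many more fields):
sample = """<NessusClientData_v2><Report><ReportHost name="host1">
<ReportItem pluginID="118005" pluginName="KB4462917 Security Update">
<cve>CVE-2018-8491</cve><cve>CVE-2018-8494</cve>
</ReportItem></ReportHost></Report></NessusClientData_v2>"""

print(cves_per_plugin(sample))  # {'118005': ['CVE-2018-8491', 'CVE-2018-8494']}
# Each CVE can now be looked up separately in NVD / Vulners to get its
# own CVSS vector, impact type and exploit availability.
```

From here it is one more step to query an external source per CVE and rebuild the per-vulnerability risk picture that the plugin itself doesn’t give you.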

Or you can buy a separate solution for this, like Kenna, which has its own issues, but is way better than nothing. But for me the situation seems a bit weird, when you have to

  • buy a vulnerability management solution to detect the issue
  • buy another vulnerability management solution to make the output from the first one actionable

Maybe it’s good for the seller, but definitely not good for the customer. 🙂 And it basically means that the first VM solution could do its job much better.
