I watched an episode of Application Security Weekly with Emily Fox about Vulnerability Management. As is now common, the hosts and guest pointed out that there are too many known vulnerabilities, that only 3-4% of them are actually exploited, and that therefore not all vulnerabilities need to be fixed. And to understand what exactly does not need to be fixed, you need to:
🔹 Take into account security layers that prevent exploitation of vulnerabilities.
🔹 Consider how the risk of exploitation and the type of vulnerable asset are related.
🔹 Assess the likelihood of exploitation in the context of a specific organization.
The words themselves all sound good, and I would even agree with them. But where do we find reliable sources of information (about vulnerabilities, infrastructure, security mechanisms) and the tools to process them? And how do we make it all work really reliably?
Reliably enough that we could bet a hand on it: this vulnerability 100% does not need to be fixed, and that vulnerability will never be actively exploited in attacks. 🙋♂️ And do this not for a single vulnerability, but at scale. Any brave souls with hands to spare? IMHO, if you are not ready to do this, then you should not argue that some vulnerabilities can be left unfixed.
If a vulnerability exists (even potentially) and can be fixed with an update, then it SHOULD be fixed with an update. On schedule or ahead of schedule, but everything needs to be fixed. At the same time, getting rid of vulnerable assets, software, components, and images is a perfectly good way to fix it: the smaller the attack surface, the better. If updating is for some reason difficult and painful, then that issue is what needs to be resolved first. Why is it difficult and painful? What’s wrong with the organization’s basic processes that we can’t do it? Maybe we need to look towards a better architecture?
This is better than making unreliable assumptions that this vulnerability is perhaps not critical enough to be fixed. Because, as a rule, we know practically nothing about these vulnerabilities: today it is unexploitable, tomorrow it becomes exploitable, and the day after tomorrow all the script kiddies will be exploiting it. It is entirely possible that the vulnerability has already been actively used in targeted attacks for years. Who can say that this is not the case?
It is telling, by the way, that this episode recommended using EPSS to select the most potentially dangerous vulnerabilities. 🤦♂️ A tool that, to my deep regret, simply does not work: it shows low exploit-probability values for actively exploited vulnerabilities and high values for vulnerabilities for which no exploits have appeared in years. 🤷♂️
For example, look at my Vulristics report for the February Microsoft Patch Tuesday. Elevation of Privilege – Windows Kernel (CVE-2024-21338) is in CISA KEV, yet its EPSS values are low (EPSS Probability 0.00079, EPSS Percentile 0.32236). 🤡 You might as well read tea leaves; perhaps that would be even more effective. So the rest of the “magic of triage” invites the same skepticism.
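You can check this mismatch yourself. Here is a minimal sketch (in Python, against the public CISA KEV JSON feed and the FIRST.org EPSS API; the 0.01 cut-off and the batch size are my own arbitrary choices for illustration) that lists the CVEs CISA has confirmed as exploited in the wild but for which EPSS still reports a low probability:

```python
import json
import urllib.request

# Public feeds: CISA Known Exploited Vulnerabilities catalog and FIRST.org EPSS API
KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"
EPSS_URL = "https://api.first.org/data/v1/epss?cve={cves}"
THRESHOLD = 0.01  # arbitrary "low probability" cut-off, my assumption

def fetch_json(url):
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# CVE IDs of everything CISA has confirmed as exploited in the wild
kev_cves = [v["cveID"] for v in fetch_json(KEV_URL)["vulnerabilities"]]

# The EPSS API accepts a comma-separated list of CVE IDs; query in batches
low_epss = []
for i in range(0, len(kev_cves), 100):
    batch = ",".join(kev_cves[i:i + 100])
    for item in fetch_json(EPSS_URL.format(cves=batch))["data"]:
        if float(item["epss"]) < THRESHOLD:
            low_epss.append((item["cve"], item["epss"], item["percentile"]))

print(f"{len(low_epss)} of {len(kev_cves)} KEV CVEs have EPSS < {THRESHOLD}")
for cve, epss, pct in sorted(low_epss, key=lambda t: float(t[1]))[:20]:
    print(f"{cve}: EPSS {epss}, percentile {pct}")
```

If EPSS were a reliable predictor of exploitation, this list would be close to empty; CVE-2024-21338, with the values above, is exactly the kind of entry it would turn up.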
Again:
🔻 All detected vulnerabilities must be fixed in accordance with the vendor’s recommendations.
🔻 First of all, you need to fix what is actually being exploited in attacks or will be exploited in the near future (trending vulnerabilities).