Tag Archives: VMprocess

September episode of “In The Trend of VM”: 7 CVEs, fake reCAPTCHA, Lebanese pagers, VM and IT annual bonuses

September episode of “In The Trend of VM”: 7 CVEs, fake reCAPTCHA, Lebanese pagers, VM and IT annual bonuses. Starting this month, we decided to slightly expand the topics of the videos and increase their duration. I cover not only the trending vulnerabilities of September, but also social engineering cases, real-world vulnerability exploitation, and Vulnerability Management process practices. At the end, we announce a contest with questions about Vulnerability Management and prizes. 🎁

📹 Video “In The Trend of VM” on YouTube
🗞 A post on Habr (rus) with a slightly expanded script of the video
🗒 A compact digest on the official PT website

Content:

🔻 00:51 Elevation of Privilege – Windows Installer (CVE-2024-38014) and details about this vulnerability
🔻 02:42 Security Feature Bypass – Windows Mark of the Web “LNK Stomping” (CVE-2024-38217)
🔻 03:50 Spoofing – Windows MSHTML Platform (CVE-2024-43461)
🔻 05:07 Remote Code Execution – VMware vCenter (CVE-2024-38812)
🔻 06:20 Remote Code Execution – Veeam Backup & Replication (CVE-2024-40711); while the video was being edited, evidence of exploitation in the wild appeared
🔻 08:33 Cross Site Scripting – Roundcube Webmail (CVE-2024-37383)
🔻 09:31 SQL Injection – The Events Calendar plugin for WordPress (CVE-2024-8275)
🔻 10:30 Human vulnerabilities: fake reCAPTCHA
🔻 11:45 Real-world vulnerabilities: explosions of pagers and other electronic devices in Lebanon and the consequences for the whole world
🔻 14:42 Vulnerability Management process practices: tying annual bonuses of IT specialists to meeting SLAs for vulnerability remediation
🔻 16:03 Final and announcement of the contest
🔻 16:24 Backstage

In Russian

Ford won’t work?

Ford won’t work? There were a lot of comments on my post about “paying vulnerability fixers only when they are in the break room”. I’ll say right away that the post was a joke. Staff motivation is too delicate a topic for serious recommendations. 🙂

Still, let me sort through the objections:

🔻 IT staff will sabotage the vulnerability detection process by tweaking host configs so that the scanner produces only green reports. But IT staff can do this at any time anyway, and we need to take this into account. 🤷‍♂️

🔻 IT staff will simply turn off hosts. If they can do this without harming the business, that’s great. 👍 And if this breaks the production environment, then let them deal with their IT management. 😏

🔻 There is an opinion that the method is good, but that only the 2% of vulnerabilities used in attack chains need to be fixed. I traditionally DO NOT agree that these mythical 2% of vulnerabilities can be reliably singled out. Everything needs to be fixed. 😉

In Russian

Vulnerability Remediation using the “Ford Method”

Vulnerability Remediation using the “Ford Method”. There is a popular story in the Russian segment of the Internet. Allegedly, an experiment was carried out at a Henry Ford plant: conveyor repair workers were paid only for the time they spent in the break room. As soon as the conveyor stopped 🚨 and the repair workers went to fix it, they stopped getting paid. So they did their work quickly and efficiently, in order to get back to the break room as soon as possible (and stay there as long as possible) and start earning money again. 👷‍♂️🪙

I did not find any reliable evidence of this. 🤷‍♂️

But what if the specialists responsible for vulnerability remediation were paid only for the time when no vulnerabilities are detected on their hosts? 🤔 This could have a very positive impact on the speed and quality of remediation. Unsolvable problems would quickly become solvable, and automation of testing and deployment of updates would develop at the fastest pace. 😏

In Russian

I also made a meme with the cool Yusuf Dikeç

I also made a meme with the cool Yusuf Dikeç. 😅

🔹 Every vulnerability existing in the infrastructure must be detected.
🔹 For each detected vulnerability, a patching task must be created.

This is the base. And when they tell you that you don’t have to do this because there is some super-modern vulnerability assessment and prioritization tool, you should be skeptical. 😉
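
Just to illustrate the base in code: a minimal Python sketch in which every detection produces a patching task. The detections list and the create_patching_task() helper are hypothetical placeholders for a real scanner export and ticketing integration, not anyone’s actual API.

```python
# A minimal sketch of the "one patching task per detected vulnerability" rule.
# The detections list and create_patching_task() are hypothetical placeholders
# for a real scanner export and ticketing system integration.
detections = [
    {"host": "srv-web-01", "cve": "CVE-2024-38812", "product": "VMware vCenter"},
    {"host": "srv-bkp-02", "cve": "CVE-2024-40711", "product": "Veeam Backup & Replication"},
]

def create_patching_task(host: str, cve: str, product: str) -> str:
    """Placeholder: in real life this would call a ticketing system API."""
    task_id = f"PATCH-{host}-{cve}"
    print(f"Created {task_id}: fix {cve} in {product} on {host}")
    return task_id

# Every detection gets a task; prioritization may reorder tasks, but not drop them.
tasks = [create_patching_task(**d) for d in detections]
```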

In Russian

Detection of known (CVE) vulnerabilities without authentication (in Pentest mode): overkill or necessity?

Detection of known (CVE) vulnerabilities without authentication (in Pentest mode): overkill or necessity? There is an opinion that when detecting vulnerabilities in internal infrastructure, scanning without authentication is not necessary at all. That it is enough to install agents on the hosts, and hosts where agents cannot be installed (for example, network devices) just need to be scanned with authentication. They say scans without authentication are always less reliable than scans with authentication and are needed only for perimeter scanning or initial network inventory. In my opinion, this is not entirely correct. Scanning for known vulnerabilities without authentication is mandatory, especially when the target is a host running a web application.

And this is due to the peculiarities of detecting vulnerabilities when scanning with authentication. Let’s take Linux hosts. Typically, when scanning Linux hosts with authentication, VM vendors limit themselves to detecting vulnerabilities in packages from the official Linux vendor repository. 🤷‍♂️ Simply because these vulnerabilities are described in publicly available security bulletins or even as formalized OVAL content. It’s convenient: once you have learned to work with such content, you can check the box that the Linux distribution is supported by your VM solution. But what about vulnerabilities in software that is not in the official Linux vendor repository? This is where things get more complicated.

This software can be installed:

🔹 From a connected third-party Linux software repository
🔹 From a package (made by some vendor or self-built) in the standard package format for this Linux distro (deb, rpm), brought to the host manually
🔹 From alternative packages for software distribution (snap, flatpak, appimage, etc.)
🔹 From module distribution tools (pip, conda, npm, etc.)
🔹 From a container image (docker, podman, etc.)
🔹 From software source code; the software can be built directly on the target host or transferred there as binary files.

Ideally, no matter how the software is installed on a host, a vulnerability scanner should correctly detect that software installation, determine the version, and identify the associated vulnerabilities based on that version. 🧙‍♂️ But in practice, because there are so many ways to install software, this is a very non-trivial task. 🧐
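
To get a feel for how fragmented the picture looks from inside the host, here is a rough Python sketch that enumerates software from just a few of the sources listed above. The specific commands (dpkg-query, snap, pip3, docker) are standard CLIs, but any of them may be missing on a particular host, and software built from source or dropped in as binaries will not show up at all.

```python
import shutil
import subprocess

# Each source corresponds to a different way software may have ended up on the host.
# Any of these tools may be absent, and software built from source or copied to the
# host as binary files will not appear in any of these lists.
SOURCES = {
    "dpkg packages": ["dpkg-query", "-W", "-f", "${Package} ${Version}\n"],
    "snap packages": ["snap", "list"],
    "pip modules": ["pip3", "list", "--format=freeze"],
    "docker images": ["docker", "images", "--format", "{{.Repository}}:{{.Tag}}"],
}

def enumerate_software() -> dict:
    inventory = {}
    for name, cmd in SOURCES.items():
        if shutil.which(cmd[0]) is None:
            inventory[name] = None  # the tool itself is not installed on this host
            continue
        result = subprocess.run(cmd, capture_output=True, text=True)
        inventory[name] = result.stdout.splitlines()
    return inventory

if __name__ == "__main__":
    for source, items in enumerate_software().items():
        print(f"{source}: {'n/a' if items is None else len(items)} entries")
```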

As a result, we get a situation like this: let’s say we have some commercial or open source software on a Linux host (Zabbix, GitLab, Confluence, Jira). It is not easy to reliably find this software simply by exploring the host from the inside via SSH. But when looking at the host from the outside, finding it is trivial: we scan the ports, find the web GUI, often find the version right on the main page, and use it to detect vulnerabilities. At the same time, we are not at all dependent on the specific way the software is installed and run on the host. The main thing is that we can see the web interface of the application itself. 🤩
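
A rough Python sketch of this “outside view”: fetch the application’s main page and try to pull a version string out of it. The URL and the regular expressions here are purely illustrative, and not every product exposes its version to unauthenticated users.

```python
import re
import requests

# Illustrative patterns only: where and how the version is exposed differs between
# products and releases, so real scanners ship per-product web fingerprints
# (login pages, JS bundles, HTTP headers, API endpoints).
VERSION_PATTERNS = {
    "Zabbix": re.compile(r"Zabbix\s+(\d+\.\d+\.\d+)"),
    "GitLab": re.compile(r"gitlab[^\d]{0,40}?(\d+\.\d+\.\d+)", re.IGNORECASE),
}

def fingerprint(url: str) -> dict:
    """Fetch the main page and try to extract product names and versions from it."""
    response = requests.get(url, timeout=10, verify=False)  # internal hosts often use self-signed certs
    findings = {}
    for product, pattern in VERSION_PATTERNS.items():
        match = pattern.search(response.text)
        if match:
            findings[product] = match.group(1)
    return findings

# Hypothetical internal host; the point is that no SSH access is needed at all.
print(fingerprint("https://monitoring.corp.example"))
```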

Such “external” vulnerability detection rules are much easier to develop. You can also use ready-made expertise. Fingerprinting to obtain a CPE ID combined with a CPE lookup in NVD is, of course, a dirty path. But it allows you to add vulnerability detection rules in large quantities. 😏 And if you can tweak both the fingerprinting and the CPE detection rules, the number of errors can be reduced to an acceptable level. And if you also add validation of vulnerabilities with an exploitation attempt (for example, using Nuclei), then a significant set of vulnerabilities can be detected more than reliably. 😉
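
The “dirty path” itself fits in a few lines: build a CPE 2.3 name from the fingerprinted product and version and ask NVD for matching CVEs. This sketch assumes the public NVD CVE API 2.0 and its cpeName parameter; CPE normalization, rate limiting and error handling are exactly where the dirtiness lives.

```python
import requests

# Assumes the public NVD CVE API 2.0 and its cpeName parameter; the unauthenticated
# API is heavily rate-limited, and real integrations add an API key, paging and retries.
NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def cves_for_cpe(cpe_name: str) -> list[str]:
    """Return CVE IDs that NVD associates with the given CPE 2.3 name."""
    response = requests.get(NVD_API, params={"cpeName": cpe_name}, timeout=30)
    response.raise_for_status()
    data = response.json()
    return [item["cve"]["id"] for item in data.get("vulnerabilities", [])]

# The CPE string itself comes from fingerprinting and is the weak link:
# a wrong vendor, product or version here is exactly where the "dirty path" fails.
print(cves_for_cpe("cpe:2.3:a:zabbix:zabbix:6.0.0:*:*:*:*:*:*:*"))
```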

So, scanning for known vulnerabilities without authentication (“pentest”) is a must have for internal infrastructure as well, especially for hosts with web applications.

In Russian

I watched an episode of Application Security Weekly with Emily Fox about Vulnerability Management

I watched an episode of Application Security Weekly with Emily Fox about Vulnerability Management. As is common now, the hosts and the guest pointed out that there are too many known vulnerabilities, that only 3-4% of them are actually exploited, and that therefore not all vulnerabilities need to be fixed. And in order to understand what exactly does not need to be fixed, you need to:

🔹 Take into account security layers that prevent exploitation of vulnerabilities.
🔹 Consider how the risk of exploitation and the type of vulnerable asset are related.
🔹 Assess the likelihood of exploitation in the context of a specific organization.

The words here all sound good, and I would even agree with them. But where do we find reliable sources of information (about vulnerabilities, infrastructure, security mechanisms) and tools for processing them? And how do we make it all work very reliably?

So that someone is ready to bet a hand that this vulnerability 100% does not need to be fixed and will never be actively exploited in attacks. 🙋‍♂️ And to do this not for just one vulnerability, but en masse. Are there any brave souls with spare hands? IMHO, if you are not ready to do this, then you should not argue that some vulnerabilities can be left unfixed.

If a vulnerability exists (even potentially) and can be fixed by an update, then it SHOULD be fixed by an update. As planned or faster than planned. But everything needs to be fixed. At the same time, getting rid of vulnerable assets, software, components, and images is also quite a good way to fix things. The smaller the attack surface, the better. If updating is for some reason difficult and painful, then this issue needs to be resolved first. Why is it difficult and painful? What is wrong with the organization’s basic processes that we can’t do it? Maybe we need to look towards a better architecture?

This is better than making unreliable assumptions that perhaps a particular vulnerability is not critical enough to be fixed. Because, as a rule, we know practically nothing about such vulnerabilities: today a vulnerability is unexploitable, tomorrow it becomes exploitable, and the day after tomorrow every script kiddie is exploiting it. It is also possible that the vulnerability has already been actively used in targeted attacks for several years. Who can say that this is not the case?

It is very symptomatic, by the way, that this episode recommended using EPSS to select the most potentially dangerous vulnerabilities. 🤦‍♂️ A tool that, to my deep regret, simply does not work: it shows low probability of an exploit appearing for actively exploited vulnerabilities and high values for vulnerabilities for which exploits have not appeared in years. 🤷‍♂️

For example, look at my Vulristics report for the February Microsoft Patch Tuesday. Elevation of Privilege – Windows Kernel (CVE-2024-21338) is in CISA KEV, yet its EPSS values are low (EPSS Probability is 0.00079, EPSS Percentile is 0.32236). 🤡 You could just as well read tea leaves; maybe that would be even more effective. So the rest of the “magic of triage” also causes skepticism.
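
You can check such numbers yourself; a small Python sketch, assuming the public FIRST EPSS API and its current response format:

```python
import requests

# Assumes the public FIRST EPSS API and its current response format:
# a "data" array with one record per CVE (epss, percentile, date).
EPSS_API = "https://api.first.org/data/v1/epss"

def epss_score(cve_id: str) -> dict:
    """Fetch the current EPSS probability and percentile for a single CVE."""
    response = requests.get(EPSS_API, params={"cve": cve_id}, timeout=30)
    response.raise_for_status()
    records = response.json().get("data", [])
    return records[0] if records else {}

# CVE-2024-21338 is in CISA KEV, i.e. it is known to be exploited in the wild;
# compare that fact with the tiny probability EPSS reports for it.
print(epss_score("CVE-2024-21338"))
```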

Again:

🔻 All detected vulnerabilities must be fixed in accordance with the vendor’s recommendations.
🔻 First of all, you need to fix what is actually exploited in attacks or will be exploited in the near future (trending vulnerabilities).

In Russian

October 2023: back to Positive Technologies, Vulristics updates, Linux Patch Wednesday, Microsoft Patch Tuesday, PhysTech VM lecture

October 2023: back to Positive Technologies, Vulristics updates, Linux Patch Wednesday, Microsoft Patch Tuesday, PhysTech VM lecture. Hello everyone! October was an interesting and busy month for me. I started a new job, worked on my open source Vulristics project, and analyzed vulnerabilities with it, especially Linux vulnerabilities as part of my new Linux Patch Wednesday project. And, of course, I analyzed Microsoft Patch Tuesday as well. In addition, at the end of October I gave a guest lecture at MIPT/PhysTech. But first things first.

Alternative video link (for Russia): https://vk.com/video-149273431_456239138

Continue reading