Detectify Asset Inventory and Monitoring. Continuing the topic of perimeter services. As I mentioned earlier, I don't think external perimeter services should be considered a fully functional replacement for a custom Vulnerability Management process. I would rather treat their results as an additional feed that shows the problems in your current VM process. Recently I tested Detectify's Asset Inventory (Monitoring) solution, which provides such a feed by automatically detecting issues with your second-, third- (and deeper) level domains and the related web services.
Detectify Asset Inventory screenshot from the official blog
Let's say your organization has several second-level web domains, over 9000 third- (and deeper) level domains, and you don't even know which services they are used for. This is a normal situation for a large organization. So, you simply add yourorganization.com to Detectify, activate Asset Monitoring, and Detectify automatically discovers the third- (and deeper) level domains and the related technologies: web services, CMSs, JavaScript frameworks and libraries. This is called fingerprinting: "It provides thousands of fingerprints and hundreds of tests for stateless vulnerabilities such as code repository exposure for SVN or Git."
Zbrunk search launcher and event types statistics. I also changed the priorities. Now I think it would be better not to integrate with Grafana, but to create our own dashboards and GUI. To begin with, I created a simple interface for searching (and deleting) events.
upd. 16.12.2019
A small update on Zbrunk. First of all, I created a new API call that returns the list of object types in the database and the number of objects of each type for a given period of time. Without it, debugging was rather inconvenient.
I also added some examples of working with the Zbrunk HTTP API from python3. Rewriting them from pure curl was not so trivial. Flask is rather moody, so I had to abandon the idea of making the requests exactly the same as in Splunk. But the differences are cosmetic. It is now assumed that events will be passed to the collector as valid JSON (not as a file with JSON events separated by '\n'). I also send all request parameters as json, not data. But for compatibility reasons the previous curl examples will also work.
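For illustration, here is a minimal python3 sketch of pushing one event to a Zbrunk-like HTTP collector with the requests library. The endpoint path, port, token, and event fields below are placeholders, not the actual Zbrunk defaults (see the code on github for those); the point is simply that the whole request body goes to the collector as valid JSON via the json parameter, not as raw data.

```python
# Hypothetical example of sending one event to a Zbrunk-like HTTP collector.
# The URL, port, token and field names are placeholders for this sketch.
import time
import requests

ZBRUNK_URL = "http://127.0.0.1:8088/services/collector"  # placeholder endpoint
TOKEN = "00000000-0000-0000-0000-000000000000"           # placeholder token

event = {
    "time": int(time.time()),
    "host": "scanner01",
    "event": {"event_type": "vulnerability", "asset": "srv01", "cve": "CVE-2019-0000"},
}

# The request body is valid JSON (json=..., not data=...).
response = requests.post(
    ZBRUNK_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=event,
    timeout=10,
)
print(response.status_code, response.text)
```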
CentOS 8 with IceWM Desktop Environment. Do you need CentOS 8 with IceWM as a desktop operating system? Most likely not, especially if you want it to work smoothly without any worries and troubles. However, if you enjoy playing with new desktop environments, you might find it fun.
My reasons were as follows:
I wanted to use the same Linux distribution for server and desktop. Just to minimize possible surprises during the deployment.
I wanted to know what is going on in the RPM-based part of the Linux world. The only way to achieve this is to use such a distribution every day.
I was tired of problems with the VirtualBox guest additions in CentOS 7 (yes, I run it all in a virtual machine), especially after the 3.10 kernel updates. It was time to move on.
I didn't want to use Gnome 3, because it's slow and ugly (though fully functional!). And there were no other DEs in the CentOS 8 repositories at that time.
So, I tried CentOS 8 with IceWM (installed from source) and it worked. IceWM is small, very fast, ascetic, and in some ways quite intuitive. There were some problems with the clipboard (in xterm and with the VirtualBox shared clipboard) and with language switching, but I figured them out, and I think I will probably continue to use it. Below are some notes on how I installed it and resolved the issues.
Barapass console Password Manager. I decided to publish my simple console password manager. I called it barapass (github). I've been using it for quite some time in Linux and in Windows (in WSL). It will probably also work natively in Windows and macOS with minimal fixes, but I haven't tried it yet.
Why do people use password managers?
Well, with a password manager you can avoid remembering passwords and make them arbitrarily complex and long, so no one will be able to brute force them. Of course, you can simply store passwords in text files, but password managers are better than this because:
no one will see your password over your shoulder;
if an attacker gains access to the files on your host, it won't be possible to read your passwords from the encrypted file or storage (well, ideally);
it’s easier to search for objects in the password manager and copy values from it.
I wanted something as simple as editing a text file with key-value content. And I wanted it to be stored in a secure manner, with security that can be easily verified, "simple and stupid".
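To illustrate the "encrypted key-value file" idea, here is a simplified Python sketch using the cryptography library. This is not how barapass itself is implemented (see the github repository for the real code); the file name and the way the key is produced are assumptions made just for the example.

```python
# Simplified illustration of an encrypted key-value password store,
# not the actual barapass implementation.
import json
from cryptography.fernet import Fernet

STORAGE = "passwords.enc"  # hypothetical storage file name


def save(records: dict, key: bytes) -> None:
    """Serialize the key-value records to JSON and write them encrypted."""
    token = Fernet(key).encrypt(json.dumps(records).encode())
    with open(STORAGE, "wb") as f:
        f.write(token)


def load(key: bytes) -> dict:
    """Read the encrypted file and return the decrypted key-value records."""
    with open(STORAGE, "rb") as f:
        token = f.read()
    return json.loads(Fernet(key).decrypt(token).decode())


if __name__ == "__main__":
    # In a real tool the key would be derived from a master password,
    # not generated on the fly like this.
    key = Fernet.generate_key()
    save({"gmail": "correct horse battery staple"}, key)
    print(load(key))
```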
Zbrunk universal data analysis system. The Zbrunk project (github) began almost as a joke, and in a way it still is one. In short, my friends and I decided to make an open-source (MIT license) tool that would be a kind of alternative to Splunk for some specific tasks.
So, it will be possible to:
Put structured JSON events into Zbrunk using the HTTP collector API
Get events from Zbrunk using the HTTP search API
Make information panels based on these search requests and place them on dashboards
Why is it necessary? Well, I've worked a lot with Splunk in recent years. I like the main concepts, and I think working with events is a very effective and natural way of processing and presenting data. But for my tasks (Asset Management, Compliance Management, Vulnerability Management), with several hundred megabytes of raw data per day to process and dashboards that need to be updated once or a few times a day, Splunk felt like overkill. You really don't need that kind of performance for these tasks.
And, considering the price, it only makes sense if your organization already uses Splunk for other tasks. After Splunk's decision to leave the Russian market this became even more obvious, and many people began to look for alternatives for a possible and, as far as possible, painless migration.
Let's be realistic: the performance and search capabilities of Zbrunk will be MUCH worse. It's impossible to make a solution as universal and effective as Splunk as a pet project without any resources. So don't expect something that will process terabytes of logs in near real time; the goal is completely different. But if you want a basic tool for making dashboards, it is worth a try.
Now, after the first weekend of coding and planning, it is possible to send events to Zbrunk just as you would with the Splunk HTTP Event Collector, and they appear in MongoDB.
Thus, it will be very easy to reuse your existing custom connectors if you already have some. The next step is to make a basic HTTP search API, prepare dashboard data using these search requests, and somehow show these dashboards, for example, in Grafana. Stay tuned and welcome to participate.
How to get the Organization Units (OU) and Hosts from Microsoft Active Directory using Python ldap3. I recently figured out how to work with Microsoft Active Directory from Python 3. I wanted to get the hierarchy of Organizational Units (OUs) and all the network hosts associated with these OUs in order to search for possible anomalies. If you are not familiar with AD, here is a good thread about the difference between an AD Group and an OU.
It seems much easier to solve such tasks using PowerShell, but that would probably require a Windows server, so I am leaving it as a last resort. 🙂 There is also PowerShell Core, which should support Linux, but I haven't tried it yet. If you want to use Python, the choice is between the native python ldap3 module and Python-ldap, which is a wrapper around the OpenLDAP client. I didn't find any interesting high-level functions in Python-ldap and finally decided to use ldap3.
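To give an idea of the ldap3 approach, here is a minimal sketch. The server address, credentials, and base DN are placeholders, paging and error handling are omitted, and deriving the OU from the host DN by stripping the CN is a simplification for the example.

```python
# Sketch: list OUs and computer objects from Active Directory with ldap3.
# Server, credentials and base DN are placeholders.
from ldap3 import Server, Connection, ALL, SUBTREE

server = Server("ldaps://dc01.example.com", get_info=ALL)
conn = Connection(server, user="reader@example.com", password="secret", auto_bind=True)

base_dn = "DC=example,DC=com"

# All Organizational Units
conn.search(base_dn, "(objectClass=organizationalUnit)",
            search_scope=SUBTREE, attributes=["distinguishedName"])
ous = [str(entry.distinguishedName) for entry in conn.entries]

# All computer accounts (hosts); the parent OU can be read from the host DN
conn.search(base_dn, "(objectClass=computer)",
            search_scope=SUBTREE, attributes=["dNSHostName", "distinguishedName"])
for entry in conn.entries:
    host_dn = str(entry.distinguishedName)
    parent_ou = host_dn.split(",", 1)[1]  # everything after "CN=hostname,"
    print(entry.dNSHostName, "->", parent_ou)

conn.unbind()
```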
Kaspersky Security Center 11 API: getting information about hosts and installed products. I spent a lot of time last week working with the new API of Kaspersky Security Center 11. KSC is the administration console for Kaspersky endpoint protection products, and it has some pretty interesting features besides antivirus/antimalware, for example vulnerability and patch management. So possible integrations with other security systems might be quite useful.
A fully functional API was first introduced in this latest version of KSC. It is documented pretty well, but in a somewhat strange way: the documentation is one huge .chm file that lists the classes, the methods of these classes, and the data structures, with brief descriptions. It is not a cookbook that gives a solution to a problem; you have to guess which methods of which classes should be used to solve your particular task.
As a first task, I decided to export the versions of the Kaspersky products installed on the hosts. This is useful for controlling the endpoint protection process: whether all the necessary agents and products were installed on the hosts or not (and if not, why).
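To give a feel for what working with this API looks like, here is a rough python3 sketch. The server address and credentials are placeholders, and the method and parameter names (login, HostGroup.FindHosts, ChunkAccessor.GetItemsChunk, the KLHST_WKS_* fields) are my assumptions based on the .chm reference, so check them against the documentation for your KSC version.

```python
# Rough sketch of talking to the KSC 11 Open API with requests.
# Endpoint, header format and method/parameter names are assumptions
# based on the .chm reference; verify before use.
import base64
import json
import requests

KSC = "https://ksc.example.com:13299"  # placeholder KSC address
USER, PASSWORD = "apiuser", "apipass"  # placeholder credentials


def b64(s: str) -> str:
    return base64.b64encode(s.encode("utf-8")).decode("ascii")


session = requests.Session()
session.verify = False  # the KSC certificate is usually self-signed

# 1. Login (KSCBasic authorization header with base64-encoded credentials)
session.headers.update({
    "Authorization": f'KSCBasic user="{b64(USER)}", pass="{b64(PASSWORD)}", internal="1"',
    "Content-Type": "application/json",
})
session.post(f"{KSC}/api/v1.0/login", data="{}")

# 2. Ask HostGroup.FindHosts for hosts; the result comes back via an accessor
resp = session.post(
    f"{KSC}/api/v1.0/HostGroup.FindHosts",
    data=json.dumps({
        "wstrFilter": "",  # empty filter: all hosts (assumption)
        "vecFieldsToReturn": ["KLHST_WKS_HOSTNAME", "KLHST_WKS_DN", "KLHST_WKS_OS_NAME"],
        "lMaxLifeTime": 100,
    }),
)
accessor = resp.json().get("strAccessor")

# 3. Read the found items in one chunk (a real script would page through them)
items = session.post(
    f"{KSC}/api/v1.0/ChunkAccessor.GetItemsChunk",
    data=json.dumps({"strAccessor": accessor, "nStart": 0, "nCount": 100}),
).json()
print(json.dumps(items, indent=2))
```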
This is my personal blog. The opinions expressed here are my own and not those of my employer. All product names, logos, and brands are property of their respective owners. All company, product, and service names are used here for identification purposes only; use of these names, logos, and brands does not imply endorsement. You can freely use the materials of this site, but it would be nice if you placed a link to https://avleonov.com and let me know about it at me@avleonov.com or in any other way.