Tag Archives: json

Converting Nmap xml scan reports to json

Unfortunately, Nmap cannot save its results in JSON. These are all the available output options:

-oN <filespec> (normal output)
-oX <filespec> (XML output)
-oS <filespec> (ScRipT KIdd|3 oUTpuT)
-oG <filespec> (grepable output)
-oA <basename> (Output to all formats)

And processing XML results may not be an easy task. Just look at how I analyze the contents of a Nessus report in “Parsing Nessus v2 XML reports with python“. Not the most readable code, right? So what alternatives do we have?

Nmap json scan report

Strict XML-to-JSON conversion is impossible: the formats are too different. However, there are Python modules, for example xmltodict, that can reliably convert XML into Python structures of dictionaries, lists and strings. They do have to change some parameter names to avoid collisions, but in my opinion this is not a big price to pay for the convenience.

So, let’s see how this works for the following Nmap command:

nmap -sV -oX nmap_output.xml avleonov.com 1>/dev/null 2>/dev/null
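
For illustration, here is a minimal sketch of the conversion itself (assuming the xmltodict module is installed; the output file name is my own choice):

import json
import xmltodict

# xmltodict turns the XML tree into nested dicts/lists;
# attribute names get an "@" prefix to avoid collisions with tag names
with open("nmap_output.xml") as xml_file:
    parsed = xmltodict.parse(xml_file.read())

with open("nmap_output.json", "w") as json_file:
    json.dump(parsed, json_file, indent=4)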

Continue reading

Masking Vulnerability Scan reports

Continuing the series of posts about Kenna (“Analyzing Vulnerability Scan data“, “Connectors and REST API“) and similar services. Is it actually safe to send your vulnerability data to some external cloud service for analysis? Leakage of such information can potentially cause great damage to your organization, right?

Masking Vulnerability Scans

It’s once again a matter of trust in the vendor. IMHO, in some cases it may make sense to hide the real hostnames and IP addresses of the target hosts in scan reports. The analysis vendor would then see that some critical vulnerability exists somewhere, but not where exactly.

To do this, each hostname/IP address should be replaced with a value of a similar type, and it should be replaced with the same value every time, so that the algorithms of a Kenna-like service can still work with these masked reports. This means that we need to build a replacement dictionary.
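
Here is a rough sketch of the replacement dictionary idea (the regular expressions, fake addresses and fake domain are placeholders of my own, not from any real report):

import re

replacement_dict = {}          # real value -> masked value
counters = {"ip": 0, "host": 0}

def mask(value, kind):
    # return the same fake value every time the same real value is seen
    if value not in replacement_dict:
        counters[kind] += 1
        if kind == "ip":
            replacement_dict[value] = "10.0.0." + str(counters[kind])
        else:
            replacement_dict[value] = "host" + str(counters[kind]) + ".example.com"
    return replacement_dict[value]

def mask_report(text):
    # replace IP addresses first, then hostnames (very simplified patterns)
    text = re.sub(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", lambda m: mask(m.group(0), "ip"), text)
    text = re.sub(r"\b[\w-]+\.corp\.example\b", lambda m: mask(m.group(0), "host"), text)
    return text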

Continue reading

Confluence REST API for reading and updating wiki pages

In previous posts I wrote about how to automate work with Atlassian Jira, including automated ticket labeling. Now let’s try to use the REST API of another popular Atlassian product – the Confluence wiki engine.

Confluence REST API

What might you want to automate in Confluence? Obviously, it may be useful to read the pages that your colleagues regularly update and then use this data as an input for some scripts. You may also want to update your own Confluence pages, for example to post Vulnerability Scanning results. 😉
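
For example, here is a minimal sketch of reading a page via the Confluence REST API with the requests module (the wiki URL, credentials, space key and page title are placeholders):

import requests

CONFLUENCE_URL = "https://wiki.example.com"
auth = ("login", "password")

# find a page by space and title and expand its body in storage format
resp = requests.get(
    CONFLUENCE_URL + "/rest/api/content",
    params={"spaceKey": "SPACE", "title": "Some page", "expand": "body.storage"},
    auth=auth,
)
page = resp.json()["results"][0]
print(page["id"], page["title"])
print(page["body"]["storage"]["value"])   # page content in Confluence storage format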

Continue reading

Vulchain Scanner: 5 basic principles

The New Year holidays in Russia last 10 days this year! Isn’t that an excellent opportunity to start a new project? So, I decided to make my own active network vulnerability scanner – Vulchain.

Why? Well, first of all, it’s fun. You can design the architecture from scratch, see the difficulties that are invisible from the user side and try something new in software development as well.

Vulchain modular scanner

Here are the basic principles of the project. They are not a dogma, but rather a general direction.

  1. Data layers. I would like to have these independent sets of data:
    • Raw data collections
    • Software versions detected from the raw data
    • Vulnerabilities detected from the software versions
    • Exploitability assessment data for the detected vulnerabilities
  2. Modularity. Most of the functionality will be performed by independent modules that read data from one data layer and produce data for another (see the sketch after this list).
  3. Transparency. Data is stored persistently at all layers. You can easily figure out how the data was processed, track down errors and modify modules.
  4. Neutrality. All modules are independent and easily replaceable.
  5. Rationality. If it is possible to use an existing security utility, service or product, we will integrate with it rather than write our own analogue. We spend resources only on what gives us the maximum profit at a minimum cost. 😉
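
To give an idea of how such data layers and modules could fit together, here is a rough sketch (the layer names, module logic and banner format are made up for illustration only):

raw_data = []          # layer 1: raw collection results (banners, responses, ...)
software = []          # layer 2: software versions detected from the raw data
vulnerabilities = []   # layer 3: vulnerabilities detected from the software versions

def detect_software(raw_record):
    # module: raw data -> software versions (trivial placeholder logic)
    banner = raw_record.get("banner", "")
    if banner.startswith("nginx/"):
        return {"host": raw_record["host"], "name": "nginx",
                "version": banner.split("/", 1)[1]}
    return None

for record in raw_data:
    detected = detect_software(record)
    if detected:
        software.append(detected)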

Continue reading

Vulners.com vulnerability detection plugins for Burp Suite and Google Chrome

What is the main idea of version-based vulnerability detection, especially for Web Applications? With access to the HTTP response (HTML, headers, scripts, etc.), you can get the name and version of a standard web application (e.g. CMS, CRM, wiki, task tracker), or the names and versions of the software components this web application uses: web server, libraries, frameworks, and so on.

Vulners plugins for Burp Suite Professional and Google Chrome

The next step is to get all known vulnerabilities and exploits for this software. This is a typical task for Vulners.com – the largest security content database and search system (see “Vulners – Google for hacker“).

So, the guys from the Vulners team made a set of useful regular expressions for detecting software names and versions – https://vulners.com/api/v3/burp/rules. You can use these rules in your own scripts, and if you want something that works out of the box, you can try the existing plugins for Burp Suite and Google Chrome.
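
As a quick illustration, here is a sketch of applying the rules to a captured HTTP response (I assume the regular expressions sit under data -> rules in the API response and capture the version in the first group; check the actual response structure before relying on it):

import re
import requests

rules = requests.get("https://vulners.com/api/v3/burp/rules").json()["data"]["rules"]

http_response = "Server: nginx/1.12.2\r\n..."   # some captured HTTP response

for name, rule in rules.items():
    try:
        match = re.search(rule["regex"], http_response)
    except re.error:
        continue   # skip patterns that Python's re module cannot compile
    if match:
        print(name, match.group(1))   # detected software name and its version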

In this post I would like to show how the detection rules work and present the new Vulners Burp API and the vulnerability detection plugins for Burp Suite and Google Chrome.

Continue reading

Atlassian Jira, Python and automated labeling

I have already written about Atlassian Jira automation in “Automated task processing with JIRA API“. But all the examples there used curl. So, I decided to make one more post about the Jira API, this time with Python examples and about labeling issues (nice wordplay, right? 😉 ).

Jira Labels and Python

You can use labels for organizing issues on Jira Scrum and Kanban Boards and Jira Dashboards, or just for advanced searching (e.g. labels = "LabelName").

Let’s start from the basics.

How can you search Jira issues from your own Python scripts?

It’s easy. Send a POST request to /rest/api/2/search/ with some JQL expression. The Jira server will return the first 50 matching issues. If you need more, set the startAt parameter and repeat the POST requests while the number of issues you have already retrieved is less than the total number of found issues (the total parameter in the response).
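
Here is a minimal sketch of such paginated search with the requests module (the Jira URL, credentials and JQL query are placeholders):

import requests

JIRA_URL = "https://jira.example.com"
auth = ("login", "password")
jql = 'labels = "LabelName"'

issues = []
start_at = 0
while True:
    resp = requests.post(
        JIRA_URL + "/rest/api/2/search",
        json={"jql": jql, "startAt": start_at, "maxResults": 50},
        auth=auth,
    )
    data = resp.json()
    issues += data["issues"]
    start_at += len(data["issues"])
    if start_at >= data["total"]:
        break

print(len(issues), "issues found")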

Continue reading

Downloading and analyzing NVD CVE feed

In the previous post, “New National Vulnerability Database visualizations and feeds”, I mentioned the JSON NVD feed.

NVD JSON feed parse python

Let’s see what data it contains and how to download and analyse it. First of all, we need to download all the files with CVEs from the NVD database and save them to some directory.

nvd feed json download

Unfortunately, there is no way to download all the content at once, only one-year archives. We need to get the URLs first. A URL looks like this: https://static.nvd.nist.gov/feeds/json/cve/1.0/nvdcve-1.0-2017.json.zip. Then we will download them all.
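
A minimal sketch of the download step (the year range and the target directory are my own choice):

import os
import urllib.request

os.makedirs("nvd", exist_ok=True)
for year in range(2002, 2018):
    url = ("https://static.nvd.nist.gov/feeds/json/cve/1.0/nvdcve-1.0-"
           + str(year) + ".json.zip")
    urllib.request.urlretrieve(url, "nvd/nvdcve-1.0-" + str(year) + ".json.zip")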

Continue reading