Category Archives: Standard

Assessing Linux Security Configurations with SCAP Workbench

Assessing Linux Security Configurations with SCAP Workbench. Recently I had a chance to work with OpenSCAP. It's a set of free and open-source tools for Linux configuration assessment and a collection of security content in SCAP (Security Content Automation Protocol) format.

In this post I will write about SCAP Workbench. It is a GUI application that can check the configuration of your local Linux host (or a remote host via ssh; note that agent installation is required) and show the settings that do not comply with some security standard, for example PCI DSS or DISA STIG.

SCAP Workbench PCI DSS CentOS7 localhost

Moreover, you can generate a script for automated remediation. You can also create your own scan profiles based on existing SCAP content.
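SCAP Workbench is basically a GUI on top of the oscap command-line tool, so the same assessment can be scripted. A minimal sketch of what the GUI does under the hood, assuming a CentOS 7 host with the scap-security-guide package installed (the datastream path and profile ID may differ on your system):

```python
import subprocess

# Assumed locations for the SCAP Security Guide content on CentOS 7;
# adjust the datastream path and profile ID for your distribution.
DATASTREAM = "/usr/share/xml/scap/ssg/content/ssg-centos7-ds.xml"
PROFILE = "xccdf_org.ssgproject.content_profile_pci-dss"

# Evaluate the local host against the PCI DSS profile, saving XCCDF results
# and an HTML report. oscap returns a non-zero code when some rules fail,
# so that is not treated as an error here.
subprocess.run(
    ["oscap", "xccdf", "eval",
     "--profile", PROFILE,
     "--results", "results.xml",
     "--report", "report.html",
     DATASTREAM],
    check=False,
)

# Generate a bash remediation script for the same profile.
subprocess.run(
    ["oscap", "xccdf", "generate", "fix",
     "--profile", PROFILE,
     "--output", "remediation.sh",
     DATASTREAM],
    check=True,
)
```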

Continue reading

Who wants to be a PCI ASV?

Who wants to be a PCI ASV? I think most financial and trade companies know about vulnerability scanning mainly because of PCI DSS. Vulnerability assessment is, of course, an important issue in itself, but when regular scanning is prescribed by a critical standard it becomes much more important for businesses.

This post will be about PCI ASV from the point of view of a scanning vendor. I decided to figure out what technical requirements exist for ASV solutions and how difficult/expensive it is to become an ASV.

Perimeter scanning

Basically, a PCI ASV scan is a form of automated network perimeter control performed by an external organization. All Internet-facing hosts of merchants and service providers should be checked four times a year (quarterly) with a vulnerability scanner operated by a PCI ASV (PCI DSS Requirement 11.2.2). This checks the effectiveness of patch management and other security measures that improve protection against Internet attacks.
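To make the cadence concrete, here is a tiny sketch (with invented hosts and dates) that flags Internet-facing hosts whose last passing external scan is more than a quarter old:

```python
from datetime import date, timedelta

# Hypothetical records: last passing ASV scan date per Internet-facing host.
last_passing_scan = {
    "shop.example.com": date(2016, 9, 1),
    "api.example.com": date(2016, 5, 20),
}

QUARTER = timedelta(days=91)  # "quarterly" read as roughly every three months
today = date(2016, 11, 1)     # fixed date to keep the example reproducible

for host, scanned_on in sorted(last_passing_scan.items()):
    if today - scanned_on > QUARTER:
        print(host, "- last passing scan on", scanned_on, "- rescan overdue")
```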

Continue reading

Divination with Vulnerability Database

Divination with Vulnerability Database. Today I would like to write about a popular type of "security research" that really drives me crazy: when an author takes a public vulnerability database and, by analyzing it, draws various conclusions about software products or operating systems.

CVE Numbers: their occult power and mystic virtues

The latest research of this type was recently published by CNews, a popular Russian Internet portal about IT. It is titled "'The brutal reality' of the Information Security market: security software leads in the number of holes".

The article is based on a Flexera/Secunia whitepaper. The main idea is that various security software products are insecure because of the number of vulnerability IDs related to them in the Flexera vulnerability database. In fact, the whole article is just a listing of such "unsafe" products and vendors (IBM Security, AlienVault USM and OSSIM, Palo Alto, McAfee, Juniper, etc.) and the expert commentary: cybercriminals may use vulnerabilities in security products to avoid having their IP addresses blocked; customers should focus on the security of their proprietary code first of all, and only then include security products in the protection scheme.
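The methodology behind such publications is usually nothing more sophisticated than grouping and counting. A toy sketch of this kind of "analysis" with invented data, which shows why the raw count proves so little: a vendor that discloses openly and requests IDs for everything will always "lead".

```python
from collections import Counter

# Invented (cve_id, vendor) pairs standing in for a vulnerability database dump.
records = [
    ("CVE-2016-0001", "VendorA"),
    ("CVE-2016-0002", "VendorA"),
    ("CVE-2016-0003", "VendorB"),
    ("CVE-2016-0004", "VendorA"),
]

# The whole "research": count IDs per vendor and sort descending.
for vendor, n in Counter(v for _, v in records).most_common():
    print(vendor, n)

# VendorA "leads in the number of holes" simply because it has more IDs,
# which may only mean it discloses more openly than VendorB does.
```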

What can I say about opuses of this kind?

They provide “good” practices for software vendors:

  • Hide information about vulnerabilities in your products
  • Don’t release any security bulletins
  • Don’t request CVE-numbers from MITRE for known vulnerabilities in your products

And then analysts and journalists won’t write that your product is “a leader in the number of security holes”. Profit! 😉

Continue reading

Forever “reserved” CVEs

Forever "reserved" CVEs. In this post I would like to provide some links that you can use to find the necessary information about a vulnerability by its CVE ID. I also want to share my amazement at how the way CVE identifiers are used is changing.

Reserved CVE

Traditionally, CVE was a global identifier that most vulnerabilities had. Have you found a security bug in some software? Send a brief description to MITRE and you will receive a CVE ID. Some time later NIST will analyze this CVE, add a CVSS vector and CPEs, and publish a new item in the NVD database. The MITRE and NVD CVE databases were a really useful source of information.
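This also means a published entry is easy to check mechanically. A rough sketch that pulls an ID's page from the MITRE CVE List and looks for the "** RESERVED **" marker (screen scraping like this is fragile and the page layout may change):

```python
import urllib.request

def is_reserved(cve_id):
    """Return True if the MITRE CVE List still shows this ID as reserved."""
    url = "https://cve.mitre.org/cgi-bin/cvename.cgi?name=" + cve_id
    with urllib.request.urlopen(url) as resp:
        page = resp.read().decode("utf-8", errors="replace")
    return "** RESERVED **" in page

print(is_reserved("CVE-2016-1000005"))  # arbitrary ID, just for illustration
```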

Continue reading

PCI DSS 3.2 and Vulnerability Intelligence

PCI DSS 3.2 and Vulnerability Intelligence. "Establish a process to identify security vulnerabilities, using reputable outside sources for security vulnerability information…" This is one of the requirements of PCI DSS v3.2 (the Payment Card Industry Data Security Standard). It's not about regular scans, as you might think; it is actually about monitoring the web-sites and mailing lists where information about vulnerabilities is published. It's very similar to what Vulnerability Intelligence systems have to do, isn't it? A great opportunity for me to speculate about this class of products and deal with the related PCI requirement. In this post I will mention the following solutions: Flexera VIM, Rapid7 Nexpose NOW, Vulners.com and Qualys ThreatPROTECT.

The term "Vulnerability Intelligence" is used almost exclusively by one security company – Secunia, or, as it is called now, Flexera Software. But I like this term more than "Threat Intelligence", which many VM vendors use, although historically it is more about traffic and network attacks. Let's see how Vulnerability Intelligence solutions were developed, and how they can be used (including for the requirements of PCI compliance).
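For illustration, the "monitoring reputable outside sources" part can be as simple as polling an aggregator for fresh bulletins about the software you run. A rough sketch against Vulners, assuming its public Lucene search endpoint and response format (check the current API documentation; an API key may be required):

```python
import json
import urllib.request

# Assumed endpoint and request/response format for the Vulners search API.
URL = "https://vulners.com/api/v3/search/lucene/"
payload = {"query": "nginx order:published", "size": 5}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    data = json.loads(resp.read().decode("utf-8"))

# Print the most recently published bulletins mentioning the product.
for item in data.get("data", {}).get("search", []):
    src = item.get("_source", {})
    print(src.get("published"), src.get("type"), src.get("title"))
```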

Continue reading

Federated-Style CVE

Federated-Style CVE. It seems like MITRE Corporation wants to cut the costs of its security projects. Once again. They transferred the OVAL Project to the Center for Internet Security. Now MITRE has announced the launch of a "Federated-Style CVE ID". The idea is to give other authorities the opportunity to issue CVE IDs in a special format.

"The federated ID syntax will be CVE-CCCIII-YYYY-NNNN…N, where "CCC" encodes the issuing authority's country and "III" encodes the issuing authority. At its launch, MITRE will be the only issuing authority, but we expect to quickly add others to address the needs of the research and discloser communities, as well as the cybersecurity community as a whole. This new federated ID system will significantly enhance the early stage vulnerability mitigation coordination, and reduce the time lapse between request and issuance."
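To make the proposed syntax concrete, here is a small parsing sketch; the character classes and the example ID are my own reading of the announcement, not an official definition:

```python
import re

# CVE-CCCIII-YYYY-NNNN...N: "CCC" is the issuing authority's country code,
# "III" identifies the issuing authority, then year and sequence number.
# Assuming three uppercase letters per code and four or more digits at the end.
FEDERATED_CVE = re.compile(
    r"^CVE-(?P<country>[A-Z]{3})(?P<authority>[A-Z]{3})-"
    r"(?P<year>\d{4})-(?P<number>\d{4,})$"
)

m = FEDERATED_CVE.match("CVE-USAMTR-2016-123456")  # hypothetical example ID
if m:
    print(m.groupdict())
    # {'country': 'USA', 'authority': 'MTR', 'year': '2016', 'number': '123456'}
```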

Continue reading