Different Approaches For Vulnerability Prioritization

This article explores several methods to measure vulnerability severity, and compares them using real data

In an ideal world, a security team would be able to address every vulnerability detected within their organization. However, the reality is that an average organization may face hundreds of thousands or even millions of unpatched vulnerabilities. Therefore, it is crucial to prioritize vulnerabilities based on their threat level to the organization’s security.

There are two main factors to consider for each vulnerability. The first factor is where the vulnerability is found. Is it on a business-critical asset or a development server? Is the asset exposed to the internet or protected by multiple layers of security?

The second factor is the severity of the vulnerability itself. Can it allow complete control over the asset or only lead to information leakage? Is it easy or even possible to exploit? Does it require privileges or can it run remotely? What if it is already being exploited in the wild?

In this article, we will focus on the second factor, explore several methods to measure vulnerability severity, and compare them using real data.

Vulnerability Scoring Methods

Over the years, different organizations have developed various methods to measure the severity of vulnerabilities. The following are the most widely used methods today:

CVSSv3.1 – “Common Vulnerability Scoring System”

Released in 2019 (v3.0 in 2015) by FIRST, the CVSS Base Score aims to measure the technical severity of a vulnerability.

The score consists of two groups of metrics:

  •   Impact metrics – measure what the vulnerability allows an attacker to do, across three metrics: Confidentiality, Integrity, and Availability
  •   Exploitability metrics – measure how easy the vulnerability is to exploit, across four metrics: Attack Vector, Attack Complexity, Privileges Required, and User Interaction

These metrics form a CVSS vector, from which a public equation calculates a CVSS Base Score ranging from 0 to 10.
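As an illustration, the public base-score equation can be sketched in a few lines for Scope:Unchanged vectors (metric weights and the rounding rule follow the CVSSv3.1 specification; Scope:Changed vectors use a different Impact formula and Privileges Required weights, and are omitted here for brevity):

```python
# Sketch of the CVSSv3.1 base-score equation, restricted to
# Scope:Unchanged vectors. Metric weights are from the specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}
AC = {"L": 0.77, "H": 0.44}
PR = {"N": 0.85, "L": 0.62, "H": 0.27}   # Scope:Unchanged weights
UI = {"N": 0.85, "R": 0.62}
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}   # shared by C, I, and A

def roundup(x: float) -> float:
    """CVSSv3.1 'Roundup': smallest value, to one decimal, >= x."""
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(vector: str) -> float:
    m = dict(p.split(":") for p in vector.split("/")[1:])  # skip "CVSS:3.1"
    assert m["S"] == "U", "sketch only handles Scope:Unchanged"
    iss = 1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[m["AV"]] * AC[m["AC"]] * PR[m["PR"]] * UI[m["UI"]]
    return 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))

# The classic unauthenticated network RCE profile scores 9.8:
print(base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))  # 9.8
```

Note how the Exploitability term caps out at roughly 3.9: even a trivially exploitable vulnerability cannot reach a critical score without real Impact.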

Different organizations use this method to calculate and publish CVSS Base Scores, but the most commonly used source is NIST’s National Vulnerability Database (NVD).

While the CVSS Base Score covers many aspects of a vulnerability, it still misses others, and it is often criticized as outdated or misleading:

Critics usually argue that CVSS measures theoretical severity rather than actual risk. For example, an RCE that requires no privileges or user interaction will receive a CVSS score of around 9 or more; even if the vulnerability is purely theoretical and considered impossible to exploit, it will keep its critical score, despite there being no reason to rush its remediation. In addition, the CVSS Base Score does not change when new exploits are discovered.
CVSS attempts to address these concerns with the “CVSS Temporal Score” which reflects the changing characteristics of a vulnerability over time by incorporating metrics such as Exploit Maturity, which we will discuss soon.

EPSS – Exploit Prediction Scoring System

First released in 2021 by the FIRST organization, the Exploit Prediction Scoring System (EPSS) is a data-driven approach that aims to estimate the likelihood, or probability, of a software vulnerability being exploited in the wild within the next 30 days. Each vulnerability is assigned a score ranging from 0 to 1, representing the probability of exploitation, along with a percentile indicating its relative score compared to other vulnerabilities.
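For instance, EPSS scores can be pulled from FIRST’s public API at api.first.org. The sketch below assumes the API’s documented response shape, in which a "data" list holds records whose "epss" and "percentile" fields are decimal strings; the sample numbers are illustrative, not live scores:

```python
# Sketch of retrieving EPSS scores from FIRST's public API
# (https://api.first.org/data/v1/epss).
import json
import urllib.request

def parse_epss(payload: dict) -> dict:
    """Map CVE id -> (probability, percentile) from an EPSS API response."""
    return {
        row["cve"]: (float(row["epss"]), float(row["percentile"]))
        for row in payload.get("data", [])
    }

def fetch_epss(*cves: str) -> dict:
    url = "https://api.first.org/data/v1/epss?cve=" + ",".join(cves)
    with urllib.request.urlopen(url, timeout=10) as resp:
        return parse_epss(json.load(resp))

# Offline example using a canned response (values are illustrative):
sample = {"data": [{"cve": "CVE-2021-44228",
                    "epss": "0.97565", "percentile": "0.99994"}]}
print(parse_epss(sample)["CVE-2021-44228"][0])  # 0.97565
```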

EPSS addresses a major concern with CVSS (Common Vulnerability Scoring System) by providing daily updates and assessing the actual risk of a vulnerability being exploited. However, EPSS does not take into account the impact of a vulnerability. This means that even a vulnerability with very low impact could have a high EPSS score, but it does not necessarily imply that immediate remediation is required.

It is important to note that there is a strong correlation between a CVSS Base Score and an EPSS score. This correlation exists because attackers tend to target vulnerabilities that have a significant impact or are easier to exploit, which are the factors considered in the CVSS score.

CISA KEV – Known Exploited Vulnerabilities Catalog

Released by the Cybersecurity and Infrastructure Security Agency (CISA) in 2021, the Known Exploited Vulnerabilities (KEV) Catalog is not a scoring system but rather a list of vulnerabilities with substantial evidence of active exploits in the wild. Although the catalog contains fewer than 1,000 CVEs, all of them are known to have been exploited.

Similar to EPSS, the KEV catalog does not take into account the impact of the vulnerabilities. However, there is a strong correlation with CVSS scores for the same reason as in EPSS: attackers are more likely to exploit vulnerabilities with significant impact.
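As a sketch, checking findings against the catalog amounts to downloading CISA’s public JSON feed and collecting the listed CVE ids; the field names ("vulnerabilities", "cveID") follow the feed’s published schema at the time of writing:

```python
# Sketch of checking CVEs against CISA's KEV catalog JSON feed.
import json
import urllib.request

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def kev_ids(catalog: dict) -> set:
    """Set of CVE ids listed in a KEV catalog document."""
    return {v["cveID"] for v in catalog.get("vulnerabilities", [])}

def fetch_kev() -> set:
    with urllib.request.urlopen(KEV_URL, timeout=30) as resp:
        return kev_ids(json.load(resp))

# Offline example with a trimmed catalog:
sample = {"vulnerabilities": [{"cveID": "CVE-2021-44228"},
                              {"cveID": "CVE-2019-19781"}]}
print("CVE-2021-44228" in kev_ids(sample))  # True
```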

Exploit Maturity / Exploit Code Maturity

The concept of exploit maturity or exploit code maturity is a specific metric that reflects the current status of exploit techniques, availability of exploit code, or the extent of active exploitation observed in real-world scenarios (“in-the-wild” exploitation).

This metric can be incorporated into various scoring systems, such as the CVSS Temporal Score or certain vendor-specific scoring systems.

The official CVSSv3.1 scale for this metric includes the following levels: Unproven, Proof-of-Concept, Functional, High.

Alternatively, other scales may utilize terms like “Weaponized” and “Exploited In the Wild” to describe different stages of exploit maturity.

While EPSS aims to predict the existence of a mature exploit, this particular method focuses on assessing the current state of exploitation for a vulnerability.

Deep Dive Analysis

To better understand the relationship between these different scoring methods, we collected data on ~34,000 vulnerabilities published since 2020 from various leading vulnerability management vendors.

Each vendor uses a different scale for “Exploit Maturity,” so we normalized it to the CVSS-style scale: Unknown, Proof of Concept, Functional, and High.
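Such a normalization step might look like the sketch below; the vendor-specific labels on the left are hypothetical examples, not any particular vendor’s actual terminology:

```python
# Hypothetical mapping of vendor-specific exploit-maturity labels
# onto the CVSS-style scale used in this analysis. The keys are
# illustrative vendor terms, not an exhaustive real-world list.
MATURITY_MAP = {
    "unproven": "Unknown",
    "poc": "Proof of Concept",
    "proof-of-concept": "Proof of Concept",
    "functional": "Functional",
    "weaponized": "High",
    "exploited in the wild": "High",
}

def normalize_maturity(vendor_label: str) -> str:
    # Unrecognized labels fall back to "Unknown".
    return MATURITY_MAP.get(vendor_label.strip().lower(), "Unknown")

print(normalize_maturity("Weaponized"))  # High
```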

We enriched the data with EPSS Percentile from the FIRST API and “Known Exploited Vulnerabilities” from CISA.

The data is presented in the following Sankey diagram, which visualizes the connections between CVSS, Exploit Maturity, EPSS, and KEV.

[Sankey diagram: CVSS → Exploit Maturity → EPSS → KEV]

Based on this diagram, we can draw the following conclusions:

  •   The criticisms against CVSS seem valid. Approximately half of the vulnerabilities rated as High/Critical do not have any evidence of exploitation according to the “Exploit Maturity” scale. Only a small portion have a “High” Exploit Maturity rating.
  •   EPSS is not effective at identifying vulnerabilities that are already being exploited. A significant number of CVEs with EPSS scores in the bottom 80% of percentiles have strong evidence of exploitability based on Maturity or KEV. This is not surprising, as FIRST states that “EPSS is best used when there is no other evidence of active exploitation.”
  •   Most CVEs lack evidence of exploitability (not listed in KEV and have an “Unknown” maturity rating). For these CVEs, EPSS is the only method available to assess exploitability, providing additional information where Maturity/KEV falls short. The “Unknown”/”False” CVEs are distributed evenly among different EPSS percentiles, which can aid in prioritization.

So, How Should One Prioritize Vulnerabilities?

Each scoring system has its pros and cons, with one system compensating for the limitations of another. Therefore, the most comprehensive approach to prioritizing vulnerabilities is to use all these methods together.

Relying solely on one scoring system can result in blind spots and an incomplete understanding of the overall risk. Every organization is different, and a scoring system that works for one may not work for another. However, here is an example of a vulnerability prioritization process:

 

Priority 1 – Imminent Threats: CVEs with

  •   A Critical CVSS score
  •   Strong evidence of a mature exploit (High Exploit Maturity or listed in KEV)

312 CVEs in our dataset (~2.6%)

Priority 2 – Soon-to-be Threats: CVEs with

  •   A Critical CVSS score
  •   A high prediction of exploitability (90%+ EPSS percentile) or strong evidence of a mature exploit (Functional/High Exploit Maturity)

351 CVEs in our dataset (~2.9%)

Priority 3 – Long-Term Threats: CVEs with

  •   A High/Critical CVSS score
  •   Some prediction of exploitability (80%+ EPSS percentile) or some evidence of a mature exploit (PoC/Functional/High Exploit Maturity)

2,273 CVEs in our dataset (~19%)

Priority 4 – Not a Current Threat: the rest of the CVEs, ~31K (~75%)

 

Please note that this is just an example, and each organization should choose appropriate thresholds and methods that align with their needs and specific threats.
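The example tiers above can be sketched as a simple lookup function; the thresholds (9.0 for Critical, 7.0 for High, and the 90th/80th EPSS percentiles) are this article’s illustrative choices, not recommendations:

```python
# Sketch of the example prioritization table as a function.
# Thresholds follow the article's illustrative tiers; tune them
# to your own environment and risk appetite.
def priority(cvss: float, epss_pct: float, maturity: str, in_kev: bool) -> int:
    critical = cvss >= 9.0
    high_or_critical = cvss >= 7.0
    if critical and (maturity == "High" or in_kev):
        return 1  # Imminent threat
    if critical and (epss_pct >= 0.90 or maturity in ("Functional", "High")):
        return 2  # Soon-to-be threat
    if high_or_critical and (epss_pct >= 0.80 or
                             maturity in ("Proof of Concept",
                                          "Functional", "High")):
        return 3  # Long-term threat
    return 4      # Not a current threat

print(priority(9.8, 0.99, "High", True))      # 1
print(priority(7.5, 0.50, "Unknown", False))  # 4
```

Each rule only fires after the more urgent ones fail, so every CVE lands in exactly one tier, mirroring the mutually exclusive rows of the table.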

Conclusion

Organizations face an overwhelming number of risks today, making it difficult to address their growing findings backlog. To operate effectively, they need a systematic way to apply all of the criteria described in this article on a uniform scale, regardless of the security solution or vendor that detected each risk. They also need to apply the various prioritization methods in the way that works best for them and aligns with their unique set of risks. Once risk scores are normalized and prioritization is tailored to the organization’s needs, it becomes far easier to automate the workflows that shrink the risk backlog. That, in turn, is what continuously enhances overall security posture and reduces the likelihood of attacks successfully compromising systems and data.