CVSS v3.1 Specification Document (2023)


The Common Vulnerability Scoring System (CVSS) is an open framework for communicating the characteristics and severity of software vulnerabilities. CVSS consists of three metric groups: Base, Temporal, and Environmental. The Base group represents the intrinsic qualities of a vulnerability that are constant over time and across user environments, the Temporal group reflects the characteristics of a vulnerability that change over time, and the Environmental group represents the characteristics of a vulnerability that are unique to a user's environment. The Base metrics produce a score ranging from 0 to 10, which can then be modified by scoring the Temporal and Environmental metrics. A CVSS score is also represented as a vector string, a compressed textual representation of the values used to derive the score. This document provides the official specification for CVSS version 3.1.

The most current CVSS resources can be found at

CVSS is owned and managed by FIRST.Org, Inc. (FIRST), a US-based non-profit organization, whose mission is to help computer security incident response teams across the world. FIRST reserves the right to update CVSS and this document periodically at its sole discretion. While FIRST owns all right and interest in CVSS, it licenses it to the public freely for use, subject to the conditions below. Membership in FIRST is not required to use or implement CVSS. FIRST does, however, require that any individual or entity using CVSS give proper attribution, where applicable, that CVSS is owned by FIRST and used by permission. Further, FIRST requires as a condition of use that any individual or entity which publishes scores conforms to the guidelines described in this document and provides both the score and the scoring vector so others can understand how the score was derived.

The Common Vulnerability Scoring System (CVSS) captures the principal technical characteristics of software, hardware and firmware vulnerabilities. Its outputs include numerical scores indicating the severity of a vulnerability relative to other vulnerabilities.

CVSS is composed of three metric groups: Base, Temporal, and Environmental. The Base Score reflects the severity of a vulnerability according to its intrinsic characteristics which are constant over time and assumes the reasonable worst case impact across different deployed environments. The Temporal Metrics adjust the Base severity of a vulnerability based on factors that change over time, such as the availability of exploit code. The Environmental Metrics adjust the Base and Temporal severities to a specific computing environment. They consider factors such as the presence of mitigations in that environment.

Base Scores are usually produced by the organization maintaining the vulnerable product, or a third party scoring on their behalf. It is typical for only the Base Metrics to be published as these do not change over time and are common to all environments. Consumers of CVSS should supplement the Base Score with Temporal and Environmental Scores specific to their use of the vulnerable product to produce a severity more accurate for their organizational environment. Consumers may use CVSS information as input to an organizational vulnerability management process that also considers factors that are not part of CVSS in order to rank the threats to their technology infrastructure and make informed remediation decisions. Such factors may include: number of customers on a product line, monetary losses due to a breach, life or property threatened, or public sentiment on highly publicized vulnerabilities. These are outside the scope of CVSS.

The benefits of CVSS include the provision of a standardized vendor and platform agnostic vulnerability scoring methodology. It is an open framework, providing transparency to the individual characteristics and methodology used to derive a score.

1.1. Metrics

CVSS is composed of three metric groups: Base, Temporal, and Environmental, each consisting of a set of metrics, as shown in Figure 1.


Figure 1: CVSS Metric Groups

The Base metric group represents the intrinsic characteristics of a vulnerability that are constant over time and across user environments. It is composed of two sets of metrics: the Exploitability metrics and the Impact metrics.

The Exploitability metrics reflect the ease and technical means by which the vulnerability can be exploited. That is, they represent characteristics of the thing that is vulnerable, which we refer to formally as the vulnerable component. The Impact metrics reflect the direct consequence of a successful exploit, and represent the consequence to the thing that suffers the impact, which we refer to formally as the impacted component.

While the vulnerable component is typically a software application, module, driver, etc. (or possibly a hardware device), the impacted component could be a software application, a hardware device or a network resource. This potential for measuring the impact of a vulnerability other than the vulnerable component was a key feature introduced with CVSS v3.0. This property is captured by the Scope metric, discussed later.

The Temporal metric group reflects the characteristics of a vulnerability that may change over time but not across user environments. For example, the presence of a simple-to-use exploit kit would increase the CVSS score, while the creation of an official patch would decrease it.

The Environmental metric group represents the characteristics of a vulnerability that are relevant and unique to a particular user’s environment. Considerations include the presence of security controls which may mitigate some or all consequences of a successful attack, and the relative importance of a vulnerable system within a technology infrastructure.

Each of these metrics is discussed in further detail below. The User Guide contains scoring rubrics for the Base Metrics that may be useful when scoring.

1.2. Scoring

When the Base metrics are assigned values by an analyst, the Base equation computes a score ranging from 0.0 to 10.0, as illustrated in Figure 2.


Figure 2: CVSS Metrics and Equations

Specifically, the Base equation is derived from two sub-equations: the Exploitability sub-score equation and the Impact sub-score equation. The Exploitability sub-score equation is derived from the Base Exploitability metrics, while the Impact sub-score equation is derived from the Base Impact metrics.
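As a concrete illustration of how the sub-scores combine, the sketch below is a minimal, unofficial Python rendering of the Base equation, using the metric weights and Roundup function defined in the equations section of the full specification; the function and variable names are our own, not part of the standard.

```python
import math

# Metric weights as published in the CVSS v3.1 specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}
AC = {"L": 0.77, "H": 0.44}
UI = {"N": 0.85, "R": 0.62}
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}
# Privileges Required is weighted higher when Scope is Changed.
PR = {"U": {"N": 0.85, "L": 0.62, "H": 0.27},
      "C": {"N": 0.85, "L": 0.68, "H": 0.50}}

def roundup(x):
    """Smallest number, specified to one decimal place, that is >= x."""
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

def base_score(av, ac, pr, ui, s, c, i, a):
    """Base Score from the Exploitability and Impact sub-scores."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    if s == "U":  # Scope Unchanged
        impact = 6.42 * iss
    else:         # Scope Changed
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    exploitability = 8.22 * AV[av] * AC[ac] * PR[s][pr] * UI[ui]
    if impact <= 0:
        return 0.0
    if s == "U":
        return roundup(min(impact + exploitability, 10))
    return roundup(min(1.08 * (impact + exploitability), 10))

# CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
print(base_score("N", "L", "N", "N", "U", "H", "H", "H"))  # 9.8
```

Note how the Impact sub-score is computed differently when Scope is Changed, and how a zero Impact forces the score to 0.0 regardless of Exploitability.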


The Base Score can then be refined by scoring the Temporal and Environmental metrics in order to more accurately reflect the relative severity posed by a vulnerability to a user’s environment at a specific point in time. Scoring the Temporal and Environmental metrics is not required, but is recommended for more precise scores.

Generally, the Base and Temporal metrics are specified by vulnerability bulletin analysts, security product vendors, or application vendors because they typically possess the most accurate information about the characteristics of a vulnerability. The Environmental metrics are specified by end-user organizations because they are best able to assess the potential impact of a vulnerability within their own computing environment.

Scoring CVSS metrics also produces a vector string, a textual representation of the metric values used to score the vulnerability. This vector string is a specifically formatted text string that contains each value assigned to each metric, and should always be displayed with the vulnerability score.
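As an illustration of the format, a v3.1 Base vector string such as CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H is a slash-separated list of metric:value pairs behind a version prefix. The helper below is a hypothetical convenience for splitting one apart, not part of the specification:

```python
def parse_vector(vector: str) -> dict:
    """Split a CVSS v3.1 vector string into a {metric: value} mapping."""
    prefix, _, rest = vector.partition("/")
    if prefix != "CVSS:3.1":
        raise ValueError("not a CVSS v3.1 vector string")
    return dict(part.split(":", 1) for part in rest.split("/"))

metrics = parse_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
print(metrics["AV"])  # N
```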

The scoring equations and vector string are explained further below.

Note that all metrics should be scored under the assumption that the attacker has already located and identified the vulnerability. That is, the analyst need not consider the means by which the vulnerability was identified. In addition, it is likely that many different types of individuals will be scoring vulnerabilities (e.g., software vendors, vulnerability bulletin analysts, security product vendors); however, note that vulnerability scoring is intended to be agnostic to the individual and their organization.

2.1. Exploitability Metrics

As previously mentioned, the Exploitability metrics reflect the characteristics of the thing that is vulnerable, which we refer to formally as the vulnerable component. Therefore, each of the Exploitability metrics listed below should be scored relative to the vulnerable component, and reflect the properties of the vulnerability that lead to a successful attack.

When scoring Base metrics, it should be assumed that the attacker has advanced knowledge of the weaknesses of the target system, including general configuration and default defense mechanisms (e.g., built-in firewalls, rate limits, traffic policing). For example, exploiting a vulnerability that results in repeatable, deterministic success should still be considered a Low value for Attack Complexity, independent of the attacker's knowledge or capabilities. Furthermore, target-specific attack mitigation (e.g., custom firewall filters, access lists) should instead be reflected in the Environmental metric group.

Specific configurations should not impact any attribute contributing to the CVSS Base Score, i.e., if a specific configuration is required for an attack to succeed, the vulnerable component should be scored assuming it is in that configuration.

2.1.1. Attack Vector (AV)

This metric reflects the context by which vulnerability exploitation is possible. This metric value (and consequently the Base Score) will be larger the more remote (logically, and physically) an attacker can be in order to exploit the vulnerable component. The assumption is that the number of potential attackers for a vulnerability that could be exploited from across a network is larger than the number of potential attackers that could exploit a vulnerability requiring physical access to a device, and therefore warrants a greater Base Score. The list of possible values is presented in Table 1.

Table 1: Attack Vector

Metric Value | Description
Network (N) | The vulnerable component is bound to the network stack and the set of possible attackers extends beyond the other options listed below, up to and including the entire Internet. Such a vulnerability is often termed “remotely exploitable” and can be thought of as an attack being exploitable at the protocol level one or more network hops away (e.g., across one or more routers). An example of a network attack is an attacker causing a denial of service (DoS) by sending a specially crafted TCP packet across a wide area network (e.g., CVE‑2004‑0230).
Adjacent (A) | The vulnerable component is bound to the network stack, but the attack is limited at the protocol level to a logically adjacent topology. This can mean an attack must be launched from the same shared physical (e.g., Bluetooth or IEEE 802.11) or logical (e.g., local IP subnet) network, or from within a secure or otherwise limited administrative domain (e.g., MPLS, secure VPN to an administrative network zone). One example of an Adjacent attack would be an ARP (IPv4) or neighbor discovery (IPv6) flood leading to a denial of service on the local LAN segment (e.g., CVE‑2013‑6014).
Local (L) | The vulnerable component is not bound to the network stack and the attacker’s path is via read/write/execute capabilities. Either:
  • the attacker exploits the vulnerability by accessing the target system locally (e.g., keyboard, console), or remotely (e.g., SSH); or
  • the attacker relies on User Interaction by another person to perform actions required to exploit the vulnerability (e.g., using social engineering techniques to trick a legitimate user into opening a malicious document).
Physical (P) | The attack requires the attacker to physically touch or manipulate the vulnerable component. Physical interaction may be brief (e.g., evil maid attack[^1]) or persistent. An example of such an attack is a cold boot attack in which an attacker gains access to disk encryption keys after physically accessing the target system. Other examples include peripheral attacks via FireWire/USB Direct Memory Access (DMA).

Scoring Guidance: When deciding between Network and Adjacent, if an attack can be launched over a wide area network or from outside the logically adjacent administrative network domain, use Network. Network should be used even if the attacker is required to be on the same intranet to exploit the vulnerable system (e.g., the attacker can only exploit the vulnerability from inside a corporate network).

2.1.2. Attack Complexity (AC)

This metric describes the conditions beyond the attacker’s control that must exist in order to exploit the vulnerability. As described below, such conditions may require the collection of more information about the target, or computational exceptions. Importantly, the assessment of this metric excludes any requirements for user interaction in order to exploit the vulnerability (such conditions are captured in the User Interaction metric). If a specific configuration is required for an attack to succeed, the Base metrics should be scored assuming the vulnerable component is in that configuration. The Base Score is greatest for the least complex attacks. The list of possible values is presented in Table 2.

Table 2: Attack Complexity

Metric Value | Description
Low (L) | Specialized access conditions or extenuating circumstances do not exist. An attacker can expect repeatable success when attacking the vulnerable component.
High (H) | A successful attack depends on conditions beyond the attacker's control. That is, a successful attack cannot be accomplished at will, but requires the attacker to invest in some measurable amount of effort in preparation or execution against the vulnerable component before a successful attack can be expected.[^2] For example, a successful attack may depend on an attacker overcoming any of the following conditions:
  • The attacker must gather knowledge about the environment in which the vulnerable target/component exists. For example, a requirement to collect details on target configuration settings, sequence numbers, or shared secrets.
  • The attacker must prepare the target environment to improve exploit reliability. For example, repeated exploitation to win a race condition, or overcoming advanced exploit mitigation techniques.
  • The attacker must inject themselves into the logical network path between the target and the resource requested by the victim in order to read and/or modify network communications (e.g., a man in the middle attack).

As described in Section 2.1, detailed knowledge of the vulnerable component is outside the scope of Attack Complexity. Refer to that section for additional guidance when scoring Attack Complexity when target-specific attack mitigation is present.

2.1.3. Privileges Required (PR)

This metric describes the level of privileges an attacker must possess before successfully exploiting the vulnerability. The Base Score is greatest if no privileges are required. The list of possible values is presented in Table 3.

Table 3: Privileges Required

Metric Value | Description
None (N) | The attacker is unauthorized prior to attack, and therefore does not require any access to settings or files of the vulnerable system to carry out an attack.
Low (L) | The attacker requires privileges that provide basic user capabilities that could normally affect only settings and files owned by a user. Alternatively, an attacker with Low privileges has the ability to access only non-sensitive resources.
High (H) | The attacker requires privileges that provide significant (e.g., administrative) control over the vulnerable component allowing access to component-wide settings and files.

Scoring Guidance: Privileges Required is usually None for hard-coded credential vulnerabilities or vulnerabilities requiring social engineering (e.g., reflected cross-site scripting, cross-site request forgery, or a file parsing vulnerability in a PDF reader).

2.1.4. User Interaction (UI)

This metric captures the requirement for a human user, other than the attacker, to participate in the successful compromise of the vulnerable component. This metric determines whether the vulnerability can be exploited solely at the will of the attacker, or whether a separate user (or user-initiated process) must participate in some manner. The Base Score is greatest when no user interaction is required. The list of possible values is presented in Table 4.

Table 4: User Interaction

Metric Value | Description
None (N) | The vulnerable system can be exploited without interaction from any user.
Required (R) | Successful exploitation of this vulnerability requires a user to take some action before the vulnerability can be exploited. For example, a successful exploit may only be possible during the installation of an application by a system administrator.

2.2. Scope (S)

The Scope metric captures whether a vulnerability in one vulnerable component impacts resources in components beyond its security scope.

Formally, a security authority is a mechanism (e.g., an application, an operating system, firmware, a sandbox environment) that defines and enforces access control in terms of how certain subjects/actors (e.g., human users, processes) can access certain restricted objects/resources (e.g., files, CPU, memory) in a controlled manner. All the subjects and objects under the jurisdiction of a single security authority are considered to be under one security scope. If a vulnerability in a vulnerable component can affect a component which is in a different security scope than the vulnerable component, a Scope change occurs. Intuitively, whenever the impact of a vulnerability breaches a security/trust boundary and impacts components outside the security scope in which the vulnerable component resides, a Scope change occurs.

The security scope of a component encompasses other components that provide functionality solely to that component, even if these other components have their own security authority. For example, a database used solely by one application is considered part of that application’s security scope even if the database has its own security authority, e.g., a mechanism controlling access to database records based on database users and associated database privileges.


The Base Score is greatest when a scope change occurs. The list of possible values is presented in Table 5.

Table 5: Scope

Metric Value | Description
Unchanged (U) | An exploited vulnerability can only affect resources managed by the same security authority. In this case, the vulnerable component and the impacted component are either the same, or both are managed by the same security authority.
Changed (C) | An exploited vulnerability can affect resources beyond the security scope managed by the security authority of the vulnerable component. In this case, the vulnerable component and the impacted component are different and managed by different security authorities.

2.3. Impact Metrics

The Impact metrics capture the effects of a successfully exploited vulnerability on the component that suffers the worst outcome that is most directly and predictably associated with the attack. Analysts should constrain impacts to a reasonable, final outcome which they are confident an attacker is able to achieve.

Only the increase in access, privileges gained, or other negative outcome as a result of successful exploitation should be considered when scoring the Impact metrics of a vulnerability. For example, consider a vulnerability that requires read-only permissions prior to being able to exploit the vulnerability. After successful exploitation, the attacker maintains the same level of read access, and gains write access. In this case, only the Integrity impact metric should be scored, and the Confidentiality and Availability impact metrics should be set to None.

Note that when scoring a delta change in impact, the final impact should be used. For example, if an attacker starts with partial access to restricted information (Confidentiality Low) and successful exploitation of the vulnerability results in a complete loss of confidentiality (Confidentiality High), then the resultant CVSS Base Score should reference the “end game” Impact metric value (Confidentiality High).

If a scope change has not occurred, the Impact metrics should reflect the Confidentiality, Integrity, and Availability impacts to the vulnerable component. However, if a scope change has occurred, then the Impact metrics should reflect the Confidentiality, Integrity, and Availability impacts to either the vulnerable component or the impacted component, whichever suffers the most severe outcome.

2.3.1. Confidentiality (C)

This metric measures the impact to the confidentiality of the information resources managed by a software component due to a successfully exploited vulnerability. Confidentiality refers to limiting information access and disclosure to only authorized users, as well as preventing access by, or disclosure to, unauthorized ones. The Base Score is greatest when the loss to the impacted component is highest. The list of possible values is presented in Table 6.

Table 6: Confidentiality

Metric Value | Description
High (H) | There is a total loss of confidentiality, resulting in all resources within the impacted component being divulged to the attacker. Alternatively, access to only some restricted information is obtained, but the disclosed information presents a direct, serious impact. For example, an attacker steals the administrator's password, or private encryption keys of a web server.
Low (L) | There is some loss of confidentiality. Access to some restricted information is obtained, but the attacker does not have control over what information is obtained, or the amount or kind of loss is limited. The information disclosure does not cause a direct, serious loss to the impacted component.
None (N) | There is no loss of confidentiality within the impacted component.

2.3.2. Integrity (I)

This metric measures the impact to integrity of a successfully exploited vulnerability. Integrity refers to the trustworthiness and veracity of information. The Base Score is greatest when the consequence to the impacted component is highest. The list of possible values is presented in Table 7.

Table 7: Integrity

Metric Value | Description
High (H) | There is a total loss of integrity, or a complete loss of protection. For example, the attacker is able to modify any/all files protected by the impacted component. Alternatively, only some files can be modified, but malicious modification would present a direct, serious consequence to the impacted component.
Low (L) | Modification of data is possible, but the attacker does not have control over the consequence of a modification, or the amount of modification is limited. The data modification does not have a direct, serious impact on the impacted component.
None (N) | There is no loss of integrity within the impacted component.

2.3.3. Availability (A)

This metric measures the impact to the availability of the impacted component resulting from a successfully exploited vulnerability. While the Confidentiality and Integrity impact metrics apply to the loss of confidentiality or integrity of data (e.g., information, files) used by the impacted component, this metric refers to the loss of availability of the impacted component itself, such as a networked service (e.g., web, database, email). Since availability refers to the accessibility of information resources, attacks that consume network bandwidth, processor cycles, or disk space all impact the availability of an impacted component. The Base Score is greatest when the consequence to the impacted component is highest. The list of possible values is presented in Table 8.

Table 8: Availability

Metric Value | Description
High (H) | There is a total loss of availability, resulting in the attacker being able to fully deny access to resources in the impacted component; this loss is either sustained (while the attacker continues to deliver the attack) or persistent (the condition persists even after the attack has completed). Alternatively, the attacker has the ability to deny some availability, but the loss of availability presents a direct, serious consequence to the impacted component (e.g., the attacker cannot disrupt existing connections, but can prevent new connections; the attacker can repeatedly exploit a vulnerability that, in each instance of a successful attack, leaks only a small amount of memory, but after repeated exploitation causes a service to become completely unavailable).
Low (L) | Performance is reduced or there are interruptions in resource availability. Even if repeated exploitation of the vulnerability is possible, the attacker does not have the ability to completely deny service to legitimate users. The resources in the impacted component are either partially available all of the time, or fully available only some of the time, but overall there is no direct, serious consequence to the impacted component.
None (N) | There is no impact to availability within the impacted component.

3. Temporal Metrics

The Temporal metrics measure the current state of exploit techniques or code availability, the existence of any patches or workarounds, or the confidence in the description of a vulnerability.

3.1. Exploit Code Maturity (E)

This metric measures the likelihood of the vulnerability being attacked, and is typically based on the current state of exploit techniques, exploit code availability, or active, “in-the-wild” exploitation. Public availability of easy-to-use exploit code increases the number of potential attackers by including those who are unskilled, thereby increasing the severity of the vulnerability. Initially, real-world exploitation may only be theoretical. Publication of proof-of-concept code, functional exploit code, or sufficient technical details necessary to exploit the vulnerability may follow. Furthermore, the exploit code available may progress from a proof-of-concept demonstration to exploit code that is successful in exploiting the vulnerability consistently. In severe cases, it may be delivered as the payload of a network-based worm or virus or other automated attack tools.

The list of possible values is presented in Table 9. The more easily a vulnerability can be exploited, the higher the vulnerability score.

Table 9: Exploit Code Maturity

Metric Value | Description
Not Defined (X) | Assigning this value indicates there is insufficient information to choose one of the other values, and has no impact on the overall Temporal Score, i.e., it has the same effect on scoring as assigning High.
High (H) | Functional autonomous code exists, or no exploit is required (manual trigger) and details are widely available. Exploit code works in every situation, or is actively being delivered via an autonomous agent (such as a worm or virus). Network-connected systems are likely to encounter scanning or exploitation attempts. Exploit development has reached the level of reliable, widely available, easy-to-use automated tools.
Functional (F) | Functional exploit code is available. The code works in most situations where the vulnerability exists.
Proof-of-Concept (P) | Proof-of-concept exploit code is available, or an attack demonstration is not practical for most systems. The code or technique is not functional in all situations and may require substantial modification by a skilled attacker.
Unproven (U) | No exploit code is available, or an exploit is theoretical.

3.2. Remediation Level (RL)

The Remediation Level of a vulnerability is an important factor for prioritization. The typical vulnerability is unpatched when initially published. Workarounds or hotfixes may offer interim remediation until an official patch or upgrade is issued. Each of these respective stages adjusts the Temporal Score downwards, reflecting the decreasing urgency as remediation becomes final. The list of possible values is presented in Table 10. The less official and permanent a fix, the higher the vulnerability score.

Table 10: Remediation Level

Metric Value | Description
Not Defined (X) | Assigning this value indicates there is insufficient information to choose one of the other values, and has no impact on the overall Temporal Score, i.e., it has the same effect on scoring as assigning Unavailable.
Unavailable (U) | There is either no solution available or it is impossible to apply.
Workaround (W) | There is an unofficial, non-vendor solution available. In some cases, users of the affected technology will create a patch of their own or provide steps to work around or otherwise mitigate the vulnerability.
Temporary Fix (T) | There is an official but temporary fix available. This includes instances where the vendor issues a temporary hotfix, tool, or workaround.
Official Fix (O) | A complete vendor solution is available. Either the vendor has issued an official patch, or an upgrade is available.

3.3. Report Confidence (RC)

This metric measures the degree of confidence in the existence of the vulnerability and the credibility of the known technical details. Sometimes only the existence of vulnerabilities is publicized, but without specific details. For example, an impact may be recognized as undesirable, but the root cause may not be known. The vulnerability may later be corroborated by research which suggests where the vulnerability may lie, though the research may not be certain. Finally, a vulnerability may be confirmed through acknowledgment by the author or vendor of the affected technology. The urgency of a vulnerability is higher when a vulnerability is known to exist with certainty. This metric also suggests the level of technical knowledge available to would-be attackers. The list of possible values is presented in Table 11. The more a vulnerability is validated by the vendor or other reputable sources, the higher the score.

Table 11: Report Confidence

Metric Value | Description
Not Defined (X) | Assigning this value indicates there is insufficient information to choose one of the other values, and has no impact on the overall Temporal Score, i.e., it has the same effect on scoring as assigning Confirmed.
Confirmed (C) | Detailed reports exist, or functional reproduction is possible (functional exploits may provide this). Source code is available to independently verify the assertions of the research, or the author or vendor of the affected code has confirmed the presence of the vulnerability.
Reasonable (R) | Significant details are published, but researchers either do not have full confidence in the root cause, or do not have access to source code to fully confirm all of the interactions that may lead to the result. Reasonable confidence exists, however, that the bug is reproducible and at least one impact is able to be verified (proof-of-concept exploits may provide this). An example is a detailed write-up of research into a vulnerability with an explanation (possibly obfuscated or “left as an exercise to the reader”) that gives assurances on how to reproduce the results.
Unknown (U) | There are reports of impacts that indicate a vulnerability is present. The reports indicate that the cause of the vulnerability is unknown, or reports may differ on the cause or impacts of the vulnerability. Reporters are uncertain of the true nature of the vulnerability, and there is little confidence in the validity of the reports or whether a static Base Score can be applied given the differences described. An example is a bug report which notes that an intermittent but non-reproducible crash occurs, with evidence of memory corruption suggesting that denial of service, or possibly more serious impacts, may result.
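Taken together, the three Temporal metrics act as multipliers on the Base Score. The sketch below is a minimal, unofficial Python illustration using the Temporal weights and Roundup function defined in the equations section of the full specification; the names are our own:

```python
import math

# Temporal metric weights as published in the CVSS v3.1 specification.
E  = {"X": 1.0, "H": 1.0, "F": 0.97, "P": 0.94, "U": 0.91}  # Exploit Code Maturity
RL = {"X": 1.0, "U": 1.0, "W": 0.97, "T": 0.96, "O": 0.95}  # Remediation Level
RC = {"X": 1.0, "C": 1.0, "R": 0.96, "U": 0.92}             # Report Confidence

def roundup(x):
    """Smallest number, specified to one decimal place, that is >= x."""
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

def temporal_score(base_score, e, rl, rc):
    """Temporal Score = Roundup(BaseScore x E x RL x RC)."""
    return roundup(base_score * E[e] * RL[rl] * RC[rc])

# A 9.8 Base Score with a functional exploit, an official fix,
# and a confirmed report:
print(temporal_score(9.8, "F", "O", "C"))  # 9.1
```

Note that every weight is at most 1.0, so the Temporal Score can only lower (or preserve) the Base Score, matching the prose above: maturity of a fix and weaker report confidence reduce urgency.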

4. Environmental Metrics

These metrics enable the analyst to customize the CVSS score depending on the importance of the affected IT asset to a user’s organization, measured in terms of complementary/alternative security controls in place, Confidentiality, Integrity, and Availability. The metrics are the modified equivalent of Base metrics and are assigned values based on the component placement within organizational infrastructure.

4.1. Security Requirements (CR, IR, AR)

These metrics enable the analyst to customize the CVSS score depending on the importance of the affected IT asset to a user’s organization, measured in terms of Confidentiality, Integrity, and Availability. That is, if an IT asset supports a business function for which Availability is most important, the analyst can assign a greater value to Availability relative to Confidentiality and Integrity. Each Security Requirement has three possible values: Low, Medium, or High.

The full effect on the Environmental Score is determined by the corresponding Modified Base Impact metrics. That is, these metrics modify the Environmental Score by reweighting the Modified Confidentiality, Integrity, and Availability impact metrics. For example, the Modified Confidentiality impact (MC) metric has increased weight if the Confidentiality Requirement (CR) is High. Likewise, the Modified Confidentiality impact metric has decreased weight if the Confidentiality Requirement is Low. The Modified Confidentiality impact metric weighting is neutral if the Confidentiality Requirement is Medium. This same process is applied to the Integrity and Availability requirements.

Note that the Confidentiality Requirement will not affect the Environmental Score if the (Modified Base) Confidentiality impact is set to None. Also, increasing the Confidentiality Requirement from Medium to High will not change the Environmental Score when the (Modified Base) impact metrics are set to High. This is because the Modified Impact Sub-Score (part of the Modified Base Score that calculates impact) is already at a maximum value of 10.
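The capping behaviour described above can be checked numerically. The sketch below (a Python illustration, not part of the specification; the function name is mine) evaluates the Modified Impact Sub-Score from Section 7.3 with all impact metrics High (0.56 per Table 16), comparing CR at Medium (1.0) and High (1.5):

```python
def miss(cr, ir, ar, mc, mi, ma):
    # Modified Impact Sub-Score (Section 7.3), capped at 0.915
    return min(1 - (1 - cr * mc) * (1 - ir * mi) * (1 - ar * ma), 0.915)

HIGH_IMPACT = 0.56  # numerical value of a High impact metric (Table 16)

# All (Modified Base) impact metrics High; IR and AR left at Medium (1.0)
medium_cr = miss(1.0, 1.0, 1.0, HIGH_IMPACT, HIGH_IMPACT, HIGH_IMPACT)
high_cr = miss(1.5, 1.0, 1.0, HIGH_IMPACT, HIGH_IMPACT, HIGH_IMPACT)
# medium_cr is ~0.9148 while high_cr hits the 0.915 cap, so raising CR
# from Medium to High barely moves the sub-score in this situation.
```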

The list of possible values is presented in Table 12. For brevity, the same table is used for all three metrics. The greater the Security Requirement, the higher the score (recall that Medium is considered the default).

Table 12: Security Requirements

Metric Value | Description
Not Defined (X) | Assigning this value indicates there is insufficient information to choose one of the other values, and has no impact on the overall Environmental Score, i.e., it has the same effect on scoring as assigning Medium.
High (H) | Loss of [Confidentiality / Integrity / Availability] is likely to have a catastrophic adverse effect on the organization or individuals associated with the organization (e.g., employees, customers).
Medium (M) | Loss of [Confidentiality / Integrity / Availability] is likely to have a serious adverse effect on the organization or individuals associated with the organization (e.g., employees, customers).
Low (L) | Loss of [Confidentiality / Integrity / Availability] is likely to have only a limited adverse effect on the organization or individuals associated with the organization (e.g., employees, customers).

4.2. Modified Base Metrics

These metrics enable the analyst to override individual Base metrics based on specific characteristics of a user’s environment. Characteristics that affect Exploitability, Scope, or Impact can be reflected via an appropriately modified Environmental Score.

The full effect on the Environmental Score is determined by the corresponding Base metrics. That is, these metrics modify the Environmental Score by overriding Base metric values, prior to applying the Environmental Security Requirements. For example, the default configuration for a vulnerable component may be to run a listening service with administrator privileges, for which a compromise might grant an attacker Confidentiality, Integrity, and Availability impacts that are all High. Yet, in the analyst’s environment, that same Internet service might be running with reduced privileges; in that case, the Modified Confidentiality, Modified Integrity, and Modified Availability might each be set to Low.

For brevity, only the names of the Modified Base metrics are mentioned. Each Modified Base metric has the same values as its corresponding Base metric, plus a value of Not Defined. Not Defined is the default and uses the metric value of the associated Base metric.

The intent of this metric is to define the mitigations in place for a given environment. It is acceptable to use the modified metrics to represent situations that increase the Base Score. For example, the default configuration of a component may require high privileges to access a particular function, but in the analyst’s environment there may be no privileges required. The analyst can set Privileges Required to High and Modified Privileges Required to None to reflect this more serious condition in their particular environment.

The list of possible values is presented in Table 13.

Table 13: Modified Base Metrics

Modified Base Metric | Corresponding Values
Modified Attack Vector (MAV), Modified Attack Complexity (MAC), Modified Privileges Required (MPR), Modified User Interaction (MUI), Modified Scope (MS), Modified Confidentiality (MC), Modified Integrity (MI), Modified Availability (MA) | The same values as the corresponding Base metric (see Base Metrics above), as well as Not Defined (the default).

5. Qualitative Severity Rating Scale

For some purposes it is useful to have a textual representation of the numeric Base, Temporal and Environmental scores. All scores can be mapped to the qualitative ratings defined in Table 14.[^3]

Table 14: Qualitative severity rating scale

Rating | CVSS Score
None | 0.0
Low | 0.1 - 3.9
Medium | 4.0 - 6.9
High | 7.0 - 8.9
Critical | 9.0 - 10.0

As an example, a CVSS Base Score of 4.0 has an associated severity rating of Medium. The use of these qualitative severity ratings is optional, and there is no requirement to include them when publishing CVSS scores. They are intended to help organizations properly assess and prioritize their vulnerability management processes.
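As a sketch, the mapping in Table 14 can be implemented as a simple threshold function (Python; the function name is illustrative, and the specification's full table also assigns a rating of None to a score of 0.0):

```python
def severity(score):
    """Map a CVSS v3.1 score (0.0 - 10.0) to its qualitative rating."""
    if score == 0.0:
        return "None"
    if score <= 3.9:     # scores carry one decimal place,
        return "Low"     # so <= 3.9 is equivalent to < 4.0
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"
```

Because all CVSS scores are specified to one decimal place, comparing against the upper bound of each band is sufficient.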

6. Vector String

The CVSS v3.1 vector string is a text representation of a set of CVSS metrics. It is commonly used to record or transfer CVSS metric information in a concise form.

The CVSS v3.1 vector string begins with the label “CVSS:” and a numeric representation of the current version, “3.1”. Metric information follows in the form of a set of metrics, each preceded by a forward slash, “/”, acting as a delimiter. Each metric is a metric name in abbreviated form, a colon, “:”, and its associated metric value in abbreviated form. The abbreviated forms are defined earlier in this specification (in parentheses after each metric name and metric value), and are summarized in the table below.

A vector string should contain metrics in the order shown in Table 15, though other orderings are valid. All Base metrics must be included in a vector string. Temporal and Environmental metrics are optional, and omitted metrics are considered to have the value of Not Defined (X). Metrics with a value of Not Defined can be explicitly included in a vector string if desired. Programs reading CVSS v3.1 vector strings must accept metrics in any order and treat unspecified Temporal and Environmental metrics as Not Defined. A vector string must not include the same metric more than once.

Table 15: Base, Temporal and Environmental Vectors

Metric Group | Metric Name (and Abbreviated Form) | Possible Values | Mandatory?
Base | Attack Vector (AV) | [N,A,L,P] | Yes
 | Attack Complexity (AC) | [L,H] | Yes
 | Privileges Required (PR) | [N,L,H] | Yes
 | User Interaction (UI) | [N,R] | Yes
 | Scope (S) | [U,C] | Yes
 | Confidentiality (C) | [H,L,N] | Yes
 | Integrity (I) | [H,L,N] | Yes
 | Availability (A) | [H,L,N] | Yes
Temporal | Exploit Code Maturity (E) | [X,H,F,P,U] | No
 | Remediation Level (RL) | [X,U,W,T,O] | No
 | Report Confidence (RC) | [X,C,R,U] | No
Environmental | Confidentiality Requirement (CR) | [X,H,M,L] | No
 | Integrity Requirement (IR) | [X,H,M,L] | No
 | Availability Requirement (AR) | [X,H,M,L] | No
 | Modified Attack Vector (MAV) | [X,N,A,L,P] | No
 | Modified Attack Complexity (MAC) | [X,L,H] | No
 | Modified Privileges Required (MPR) | [X,N,L,H] | No
 | Modified User Interaction (MUI) | [X,N,R] | No
 | Modified Scope (MS) | [X,U,C] | No
 | Modified Confidentiality (MC) | [X,N,L,H] | No
 | Modified Integrity (MI) | [X,N,L,H] | No
 | Modified Availability (MA) | [X,N,L,H] | No
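A minimal sketch of a vector string reader following the rules above (Python; the function name and error messages are illustrative, not part of the specification). It accepts metrics in any order, rejects duplicates, and requires all Base metrics:

```python
# Abbreviated names of the mandatory Base metrics (Table 15)
BASE_METRICS = ["AV", "AC", "PR", "UI", "S", "C", "I", "A"]

def parse_vector(vector):
    """Return a dict of metric name -> metric value, enforcing the
    "CVSS:3.1" prefix, the no-duplicates rule, and the presence of
    all mandatory Base metrics."""
    prefix, _, rest = vector.partition("/")
    if prefix != "CVSS:3.1":
        raise ValueError("vector must begin with 'CVSS:3.1'")
    metrics = {}
    for part in rest.split("/"):
        name, _, value = part.partition(":")
        if name in metrics:  # the same metric must not appear twice
            raise ValueError(f"duplicate metric: {name}")
        metrics[name] = value
    missing = [m for m in BASE_METRICS if m not in metrics]
    if missing:
        raise ValueError(f"missing mandatory Base metrics: {missing}")
    return metrics
```

Omitted Temporal and Environmental metrics simply do not appear in the returned dict, which matches treating them as Not Defined (X).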

For example, a vulnerability with Base metric values of “Attack Vector: Network, Attack Complexity: Low, Privileges Required: High, User Interaction: None, Scope: Unchanged, Confidentiality: Low, Integrity: Low, Availability: None” and no specified Temporal or Environmental metrics would produce the following vector:

CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:L/I:L/A:N
The same example with the addition of “Exploit Code Maturity: Functional, Remediation Level: Not Defined” and with the metrics in a non-preferred ordering would produce the following vector:

CVSS:3.1/S:U/C:L/I:L/A:N/AV:N/AC:L/PR:H/UI:N/E:F/RL:X
7. CVSS v3.1 Equations

The CVSS v3.1 equations are defined in the sub-sections below. They rely on helper functions defined as follows:

  • Minimum returns the smaller of its two arguments.
  • Roundup returns the smallest number, specified to 1 decimal place, that is equal to or higher than its input. For example, Roundup (4.02) returns 4.1, and Roundup (4.00) returns 4.0. To ensure consistent results across programming languages and hardware, see Appendix A for advice to implementers on avoiding small inaccuracies introduced in some floating point implementations.

Substitute individual metrics used in the equations with the associated constants listed in Section 7.4.

7.1. Base Metrics Equations

The Base Score formula depends on sub-formulas for Impact Sub-Score (ISS), Impact, and Exploitability, all of which are defined below:

ISS = 1 - [(1 - Confidentiality) × (1 - Integrity) × (1 - Availability)]

Impact =
  If Scope is Unchanged: 6.42 × ISS
  If Scope is Changed: 7.52 × (ISS - 0.029) - 3.25 × (ISS - 0.02)^15

Exploitability = 8.22 × AttackVector × AttackComplexity × PrivilegesRequired × UserInteraction

BaseScore =
  If Impact <= 0: 0, else
  If Scope is Unchanged: Roundup (Minimum [(Impact + Exploitability), 10])
  If Scope is Changed: Roundup (Minimum [1.08 × (Impact + Exploitability), 10])
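The Base Score equations above can be sketched directly in Python (the function and variable names are illustrative; metric constants are taken from Table 16, and Roundup follows the integer-arithmetic approach described in Appendix A):

```python
def roundup(value):
    # Smallest number to 1 decimal place >= input, via integer arithmetic
    int_input = round(value * 100000)
    if int_input % 10000 == 0:
        return int_input / 100000.0
    return (int_input // 10000 + 1) / 10.0

def base_score(av, ac, pr, ui, c, i, a, scope_changed):
    iss = 1 - (1 - c) * (1 - i) * (1 - a)
    if scope_changed:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    exploitability = 8.22 * av * ac * pr * ui
    if impact <= 0:
        return 0.0
    factor = 1.08 if scope_changed else 1.0
    return roundup(min(factor * (impact + exploitability), 10))

# AV:N/AC:L/PR:H/UI:N/S:U/C:L/I:L/A:N, using the constants from Table 16
score = base_score(0.85, 0.77, 0.27, 0.85, 0.22, 0.22, 0, False)  # 3.8
```

The caller is responsible for substituting each metric value with its constant from Table 16, exactly as Section 7 instructs.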

7.2. Temporal Metrics Equations

TemporalScore = Roundup (BaseScore × ExploitCodeMaturity × RemediationLevel × ReportConfidence)

7.3. Environmental Metrics Equations

The Environmental Score formula depends on sub-formulas for Modified Impact Sub-Score (MISS), ModifiedImpact, and ModifiedExploitability, all of which are defined below:

MISS = Minimum (1 - [(1 - ConfidentialityRequirement × ModifiedConfidentiality) × (1 - IntegrityRequirement × ModifiedIntegrity) × (1 - AvailabilityRequirement × ModifiedAvailability)], 0.915)

ModifiedImpact =
  If ModifiedScope is Unchanged: 6.42 × MISS
  If ModifiedScope is Changed: 7.52 × (MISS - 0.029) - 3.25 × (MISS × 0.9731 - 0.02)^13

ModifiedExploitability = 8.22 × ModifiedAttackVector × ModifiedAttackComplexity × ModifiedPrivilegesRequired × ModifiedUserInteraction

Note that the exponent at the end of the ModifiedImpact sub-formula is 13, which differs from CVSS v3.0. See the User Guide for more details of this change.

EnvironmentalScore =
  If ModifiedImpact <= 0: 0, else
  If ModifiedScope is Unchanged: Roundup (Roundup [Minimum ([ModifiedImpact + ModifiedExploitability], 10)] × ExploitCodeMaturity × RemediationLevel × ReportConfidence)
  If ModifiedScope is Changed: Roundup (Roundup [Minimum (1.08 × [ModifiedImpact + ModifiedExploitability], 10)] × ExploitCodeMaturity × RemediationLevel × ReportConfidence)
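The Environmental equations can likewise be sketched in Python (names are illustrative; constants come from Table 16, and Roundup follows Appendix A). With every Environmental and Temporal metric left at Not Defined (constant 1.0) and the Modified Base metrics equal to the Base metrics, the result reduces to the Base Score:

```python
def roundup(value):
    # Smallest number to 1 decimal place >= input, via integer arithmetic
    int_input = round(value * 100000)
    if int_input % 10000 == 0:
        return int_input / 100000.0
    return (int_input // 10000 + 1) / 10.0

def environmental_score(mav, mac, mpr, mui, mc, mi, ma,
                        cr, ir, ar, e, rl, rc, ms_changed):
    miss = min(1 - (1 - cr * mc) * (1 - ir * mi) * (1 - ar * ma), 0.915)
    if ms_changed:
        modified_impact = (7.52 * (miss - 0.029)
                           - 3.25 * (miss * 0.9731 - 0.02) ** 13)
    else:
        modified_impact = 6.42 * miss
    modified_exploitability = 8.22 * mav * mac * mpr * mui
    if modified_impact <= 0:
        return 0.0
    factor = 1.08 if ms_changed else 1.0
    inner = roundup(min(factor * (modified_impact + modified_exploitability), 10))
    return roundup(inner * e * rl * rc)

# Same metrics as the Base example, with all Security Requirements at
# Medium (1.0) and all Temporal metrics at Not Defined (1.0)
score = environmental_score(0.85, 0.77, 0.27, 0.85, 0.22, 0.22, 0,
                            1.0, 1.0, 1.0, 1.0, 1.0, 1.0, False)
```

Note the double Roundup: the inner call rounds the modified impact-plus-exploitability term, and the outer call rounds again after the Temporal factors are applied.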

7.4. Metric Values

Each metric value has an associated constant which is used in the formulas, as defined in Table 16.

Table 16: Metric values

Metric | Metric Value | Numerical Value
Attack Vector / Modified Attack Vector | Network | 0.85
 | Adjacent | 0.62
 | Local | 0.55
 | Physical | 0.2
Attack Complexity / Modified Attack Complexity | Low | 0.77
 | High | 0.44
Privileges Required / Modified Privileges Required | None | 0.85
 | Low | 0.62 (or 0.68 if Scope / Modified Scope is Changed)
 | High | 0.27 (or 0.5 if Scope / Modified Scope is Changed)
User Interaction / Modified User Interaction | None | 0.85
 | Required | 0.62
Confidentiality / Integrity / Availability / Modified Confidentiality / Modified Integrity / Modified Availability | High | 0.56
 | Low | 0.22
 | None | 0
Exploit Code Maturity | Not Defined | 1
 | High | 1
 | Functional | 0.97
 | Proof-of-Concept | 0.94
 | Unproven | 0.91
Remediation Level | Not Defined | 1
 | Unavailable | 1
 | Workaround | 0.97
 | Temporary Fix | 0.96
 | Official Fix | 0.95
Report Confidence | Not Defined | 1
 | Confirmed | 1
 | Reasonable | 0.96
 | Unknown | 0.92
Confidentiality Requirement / Integrity Requirement / Availability Requirement | Not Defined | 1
 | High | 1.5
 | Medium | 1
 | Low | 0.5

7.5. A Word on CVSS v3.1 Equations and Scoring

The CVSS v3.1 formula provides a mathematical approximation of all possible metric combinations ranked in order of severity (a vulnerability lookup table). To produce the CVSS v3.1 formula, the CVSS Special Interest Group (SIG) framed the lookup table by assigning metric values to real vulnerabilities, and a severity group (low, medium, high, critical). Having defined the acceptable numeric ranges for each severity level, the SIG then collaborated with Deloitte & Touche LLP to adjust formula parameters in order to align the metric combinations to the SIG’s proposed severity ratings.

Given that there are a limited number of numeric outcomes (101 outcomes, ranging from 0.0 to 10.0), multiple scoring combinations may produce the same numeric score. In addition, some numeric scores may be omitted because the weights and calculations are derived from the severity ranking of metric combinations. Further, in some cases, metric combinations may deviate from the desired severity threshold. This is unavoidable, and a simple correction is not readily available because adjustments made to one metric value or equation parameter in order to fix one deviation cause other, potentially more severe, deviations.

By consensus, and as was done with CVSS v2.0, the acceptable deviation was a value of 0.5. That is, all the metric value combinations used to derive the weights and calculation will produce a numeric score within its assigned severity level, or within 0.5 of that assigned level. For example, a combination expected to be rated as “high” may have a numeric score between 6.6 and 9.3. Finally, CVSS v3.1 retains the range from 0.0 to 10.0 for backward compatibility.

Appendix A - Floating Point Rounding Issues

Simple implementations of the Roundup function defined in Section 7 are likely to lead to different results across programming languages and hardware platforms. This is due to small inaccuracies that occur when using floating point arithmetic. For example, although the intuitive result of 0.1 + 0.2 is 0.3, JavaScript implementations on many systems return 0.30000000000000004. A simple implementation of Roundup would round this up to 0.4, which is counter-intuitive.

Implementers of CVSS formulas must take steps to avoid these types of problems. Different techniques may be required for different languages and platforms, and some may offer standard functionality that minimizes or fully avoids such problems.

A suggested approach is for the Roundup function to first multiply its input by 100,000 and convert it to the nearest integer. The rounding up should then be performed using only integer arithmetic, which is not subject to floating point inaccuracies. An example of pseudocode for such an implementation is:

  1. function Roundup (input):
  2.     int_input = round_to_nearest_integer (input * 100000)
  3.     if (int_input % 10000) == 0:
  4.         return int_input / 100000.0
  5.     else:
  6.         return (floor(int_input / 10000) + 1) / 10.0

The floor function on line 6 returns the largest integer value less than or equal to its input, so dividing by 10,000 and applying floor performs integer division. Many programming languages include a floor function as standard.

Line 3 checks if the least significant four digits of the integer are all zeroes, e.g., an input of 1.2 would be converted by line 2 into 120,000, making the result of the modulo operation 0 and therefore the if statement condition true. If true, no additional rounding is required and line 4 simply undoes the initial scaling. If false, line 6 rounds up to the next tenth: integer division discards the last four digits, 1 is added, and the result is divided by 10. Line 6 performs this on numbers ten times bigger than the final result in order to use only integer arithmetic.
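A direct Python translation of the pseudocode above (a sketch; Python's built-in round serves as round_to_nearest_integer, and integer floor division replaces the explicit floor call):

```python
def roundup(value):
    """Smallest number, to 1 decimal place, >= value (Section 7)."""
    int_input = round(value * 100000)   # line 2: scale and snap to an integer
    if int_input % 10000 == 0:          # line 3: already exact to one decimal
        return int_input / 100000.0     # line 4: undo the scaling
    return (int_input // 10000 + 1) / 10.0  # line 6: floor-divide, add 1, scale

# The floating point pitfall described above is avoided:
# roundup(0.1 + 0.2) returns 0.3, not 0.4
```

Note that 0.1 + 0.2 evaluates to 0.30000000000000004 in Python too; scaling by 100,000 and snapping to the nearest integer absorbs the error before the rounding decision is made.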

FIRST sincerely recognizes the contributions of the following CVSS Special Interest Group (SIG) members, listed in alphabetical order:

  • Adam Maris (Red Hat)
  • Arkadeep Kundu (Dell)
  • Arnold Yoon (Dell)
  • Art Manion (CERT/CC)
  • Bruce Lowenthal (Oracle)
  • Bruce Monroe (Intel)
  • Charles Wergin (NIST)
  • Christopher Turner (NIST)
  • Cosby Clark (IBM)
  • Dale Rich (Depository Trust & Clearing Corporation)
  • Damir 'Gaus' Rajnovic (Panasonic)
  • Daniel Sommerfeld (Microsoft)
  • Darius Wiles (Oracle)
  • Dave Dugal (Juniper)
  • Deana Shick (CERT/CC)
  • Fabio Olive Leite (Red Hat)
  • James Kohli (GE Healthcare)
  • Jeffrey Heller (Sandia National Laboratories)
  • John Stuppi (Cisco)
  • Jorge Orchilles (Citi)
  • Karen Scarfone (Scarfone Cybersecurity)
  • Luca Allodi (Eindhoven University of Technology)
  • Masato Terada (Information-Technology Promotion Agency, Japan)
  • Max Heitman (Citi)
  • Melinda Rosario (SecureWorks)
  • Nazira Carlage (Dell)
  • Rani Kehat (Radiflow)
  • Renchie Abraham (SAP)
  • Sasha Romanosky (Carnegie Mellon University)
  • Scott Moore (IBM)
  • Troy Fridley (Cisco)
  • Vijayamurugan Pushpanathan (Schneider Electric)
  • Wagner Santos (UFCG)

FIRST would also like to thank Abigail Palacios and Vivian Smith from Conrad Inc. for their tireless work facilitating the CVSS SIG meetings.

  • CVSS main page -
    The main web page for all CVSS resources, including the most recent version of the CVSS standard.

  • Specification Document -
    The latest revision of this document, defining the metrics, formulas, qualitative rating scale and vector string.

  • User Guide -
    A companion to the Specification, the User Guide includes further discussion of the CVSS standard including particular use cases, guidelines on scoring, scoring rubrics, and a glossary of the terms used in the Specification and User Guide documents.

  • Examples Document -
    Includes scores of public vulnerabilities and explanations of why particular metric values were chosen.

  • Calculator -
    A reference implementation of the CVSS standard that can be used for generating scores. The underlying code is documented and can be used as part of other implementations.

  • JSON and XML Schemas -
    Data representations for CVSS metrics, scores and vector strings in JSON Schema and XML Schema Definition (XSD) representations. These can be used to store and transfer CVSS information in defined JSON and XML formats.
