Cybersecurity risk results from the combination of three elements: threat, vulnerability, and impact (DBIR 2018, p. 49). This blog specifically addresses vulnerability management. Patching is an important part of it, but there is more to it than meets the eye. As always in information security, we have to balance risk against cost.
In the definition of risk above, a vulnerability does not have to be a technical defect. If the threat is a phishing e-mail, we could consider a user who opens the attachment and allows its macros to run a vulnerability. These human vulnerabilities can be managed too, but vulnerability management usually means managing software defects and configuration errors.
Software defects or errors in configuration exist in many places and in many forms. When the defect imposes a risk, like the possibility of bypassing an authentication step to access confidential data, we identify it as a vulnerability.
One prominent example of a software defect is the Internet Explorer vulnerability that allows an attacker to steal files with the use of a specially crafted .mht file (published 12th April). An example of a configuration error is the selection of allowed cryptographic methods for a communication channel: SSL/TLS should not be configured to allow weak ciphers like DES/3DES and RC4.
In the case of a software defect, applying a patch may solve the issue; in the case of a misconfiguration, no patch is needed and the configuration simply has to be adjusted. Addressing both kinds of problem is the job of vulnerability management, which consists of the following high-level process steps (CIS):
- Vulnerability Notification through becoming aware of disclosed vulnerabilities and performing security assessments.
- Vulnerability Identification through manual or automated scanning of technologies throughout the organization.
- Vulnerability Remediation & Mitigation through application of patches, adjustment of configurations, modification of systems, or acceptance of risk.
The first step is becoming aware of the vulnerabilities that are publicly known. To get this information you could monitor the publications of all your software vendors, or you could monitor the Common Vulnerabilities and Exposures (CVE) databases. This works well for common, off-the-shelf applications, but not for bespoke software, where vulnerabilities must be identified through code analysis, architectural review or penetration testing. Getting hacked is another way to learn about your vulnerabilities, but that option is best avoided.
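As a sketch of what monitoring a CVE database could look like, the snippet below queries the public NVD CVE API (version 2.0 of NVD's REST interface; verify the endpoint and parameter names against the current NVD documentation before relying on them):

```python
# Sketch: querying the NVD CVE API 2.0 for vulnerabilities matching a keyword.
# Endpoint and parameters follow NVD's public REST interface documentation.
import json
import urllib.parse
import urllib.request

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def build_cve_query(keyword: str, results_per_page: int = 20) -> str:
    """Build an NVD API query URL for CVEs matching a keyword."""
    params = urllib.parse.urlencode({
        "keywordSearch": keyword,
        "resultsPerPage": results_per_page,
    })
    return f"{NVD_API}?{params}"

def fetch_cves(keyword: str) -> list:
    """Fetch matching CVE records from NVD (requires network access)."""
    with urllib.request.urlopen(build_cve_query(keyword)) as resp:
        data = json.load(resp)
    return data.get("vulnerabilities", [])
```

For example, `fetch_cves("openssl")` would return a list of CVE records mentioning OpenSSL; in practice you would schedule such a query and diff the results against what you already know.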
The second step is to check whether any of these vulnerabilities apply to your situation. Are you running the software involved? Is it configured in a way that introduces the vulnerability? You have to know your infrastructure well to evaluate this, which requires an up-to-date configuration management database (CMDB).
Performing these tasks by hand is tedious. Part of the work can be automated using vulnerability scanners like Qualys, Nessus or Outpost24. These scanners are fed with CVE databases and, in some cases, can detect common problems like cross-site scripting in bespoke applications. The scanner scans the network to test for problems and generates a report, classifying each finding to indicate its severity. Some scanners also indicate the ease of exploitation, e.g. whether an exploit is available in the popular Metasploit framework.
Severity ratings are often determined using the CVSS v3 scoring system and can commonly be found in reference systems such as CVE. The CVSS Base Score is derived from exploitability factors (such as attack complexity) and impact factors (confidentiality, integrity and availability impact).
CVSS base scores range from 0.0 to 10.0 and, under CVSS v3, map to qualitative severity ratings as follows:
- "None" for a base score of 0.0
- "Low" severity for a base score of 0.1-3.9
- "Medium" severity for a base score of 4.0-6.9
- "High" severity for a base score of 7.0-8.9
- "Critical" severity for a base score of 9.0-10.0
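This mapping from base score to qualitative rating, as defined in the CVSS v3.1 specification, can be expressed in a few lines:

```python
# Map a CVSS v3.x base score (0.0-10.0) to its qualitative severity
# rating, per the scale in the CVSS v3.1 specification.
def cvss_severity(base_score: float) -> str:
    if not 0.0 <= base_score <= 10.0:
        raise ValueError("CVSS base score must be between 0.0 and 10.0")
    if base_score == 0.0:
        return "None"
    if base_score <= 3.9:
        return "Low"
    if base_score <= 6.9:
        return "Medium"
    if base_score <= 8.9:
        return "High"
    return "Critical"
```

For example, `cvss_severity(9.8)` yields "Critical", the rating commonly seen on remotely exploitable, unauthenticated flaws.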
Another advantage of using a scanner is that most of them can check compliance with hardening baselines, and may report on PCI DSS compliance.
The last step is to evaluate the vulnerabilities found. It is very common for a vulnerability scanner to report hundreds of vulnerabilities in a large network, and mitigating them all is impossible with limited resources. Deciding how to treat each vulnerability is a manual procedure that requires a good understanding of both the vulnerability itself and the environment in which it is found. A commonly used strategy is:
- identify false positives and remove them from the list
- identify disasters waiting to happen and fix them as quickly as possible
- identify low-hanging fruit and solve it quickly
- handle the remaining issues
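The triage strategy above can be sketched as a simple sort over scanner findings. The field names here (severity, exploit_available, fix_effort) are illustrative assumptions, not the output format of any particular scanner:

```python
# Sketch of the triage strategy: drop false positives, then order findings
# so that disasters (high severity with a known exploit) come first,
# followed by low-hanging fruit (cheap fixes), then the rest by severity.
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    severity: float          # CVSS base score, 0.0-10.0
    exploit_available: bool  # e.g. a Metasploit module exists
    fix_effort: int          # rough effort estimate, 1 (trivial) to 5 (major)
    false_positive: bool = False

def triage(findings: list) -> list:
    """Return confirmed findings ordered for remediation."""
    confirmed = [f for f in findings if not f.false_positive]
    return sorted(
        confirmed,
        key=lambda f: (
            not (f.severity >= 9.0 and f.exploit_available),  # disasters first
            f.fix_effort,                                     # then cheap fixes
            -f.severity,                                      # then by severity
        ),
    )
```

In a real process the "fix effort" and "exploit available" fields would come from manual analysis and threat-intelligence feeds, not from the scanner alone.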
The final step of this strategy is obviously the largest, which is why we must prioritize according to the severity of each issue. How do we estimate the risk, and what is an acceptable risk? How do we treat the vulnerabilities: per host, or per vulnerability? What do we do with 0-days, where the vulnerability is known but a fix is not? Do we automate the patching process?
Some additional items to take into consideration:
- What is the exploitability? Some vulnerabilities have ready-made exploits available, while others are only exploitable in theory. The majority of attacks are not targeted; they are carried out by criminals looking for an easy way to make money, and this group typically uses readily available exploits. A nation state after highly confidential, valuable intellectual property, on the other hand, may use an exploit never seen before. The likely attacker should therefore inform your patching decisions.
- Some vulnerabilities result in disclosure of information; others affect availability (denial of service). A denial of service will get noticed, whereas data leakage is much harder to detect. Depending on which we consider the bigger problem, confidentiality or availability, we decide whether patching is needed.
- It might be a good idea to treat workstations differently from servers. On workstations a user interacts with the system and is likely to be tricked into running malicious software (phishing), so products like Windows, Adobe software and Java need frequent patching. Although every patch roll-out carries a risk of its own, because things could break, it may be less risky to patch workstations automatically. For servers and network equipment it is wise to always test patches before applying them in production.
- Patching itself introduces risk. There are examples of patches that break the system, or that fix one vulnerability but introduce a new one.
- In addition to patching, there are alternative ways to mitigate vulnerabilities. Next-generation firewalls with intrusion prevention capabilities can often detect and block exploitation of vulnerabilities; in my experience, firewall vendors are sometimes even quicker to implement such protections than software vendors are to supply a patch. A prerequisite is that the firewall is configured for SSL offloading, otherwise it cannot evaluate the encrypted traffic. Sometimes blocking a file type (like .mht in the example above) can mitigate most of the risk.
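The workstation-versus-server distinction above amounts to a simple patching policy. The asset categories and policy names below are assumptions for illustration, not a standard:

```python
# Illustrative patching policy: workstations are patched automatically,
# while servers and network equipment require testing before roll-out.
def patch_policy(asset_type: str) -> str:
    auto_patch = {"workstation", "laptop"}
    test_first = {"server", "network_equipment"}
    if asset_type in auto_patch:
        return "auto-patch"
    if asset_type in test_first:
        return "test-then-patch"
    return "review-manually"
```

Encoding the policy like this (or in a configuration file) keeps the decision explicit and auditable, rather than leaving it to ad hoc judgment during each patch cycle.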
To decide what to patch and what not to patch, you need a good understanding of the IT environment, the business, and the implications of both the vulnerability and the patch. Applying a patch is no different from any other change, so patches should be handled via the change management process.
It is the CISO’s responsibility to make sure a working vulnerability management process is in place. The first steps in this process, scanning and a first selection, can be outsourced to a security company. The final risk based decisions should be made by the company itself.
The best strategy for tackling vulnerabilities is to avoid them. This can be achieved by simplifying the IT infrastructure and removing legacy applications. Migrating to SaaS solutions shifts the responsibility to the provider; choosing a provider with a good security reputation may reduce your vulnerability management efforts.
- References: Verizon Data Breach Investigations Report (DBIR) 2018; CIS Security Metrics v1.1.0
This is the second installment of a new monthly blog post series, ‘CISO Says...’ by Chris van den Hooven, Senior Security Consultant at Nixu. Each post will elaborate on a different issue within the cyber security space from the perspective of a Chief Information Security Officer, a role Chris has been in many times himself, in a career spanning more than 15 years. By combining knowledge of risk management, architecture, legislation and regulation, Chris helps organizations get in control of the security of their information and IT infrastructure.