Things that security auditors will nag about, part 4: Will you detect a breach?

February 13, 2018 at 15:16

This is the 4th part of our blog series "Things that security auditors will nag about and why you shouldn't ignore them". In these articles, Nixu's security consultants explain issues that often come up when assessing the security of web applications, platforms and other computer systems.


In threat workshops, asking about logging and monitoring practices is usually the point where you get some embarrassed grins. Often, multiple technical audit findings, such as the lack of remote logging or monitoring software, support the conclusion that the capabilities for detecting a security breach are not at the level they should be.

The OWASP Top 10 version 2017 includes Insufficient Logging & Monitoring as a new common blunder, which has caused some controversy among security people and developers. You can hear loud applause from the incident response crowd, as insufficient logging is something that can stop forensics in its tracks immediately. However, this requirement may be a horror for many, since it is hard to get right and requires something more than just putting technical measures in place.

Someone from our team will read the logs every week

Often, logs are used only for debugging purposes: someone takes a look only when something isn't working properly. And as your piece of quality software always works as it should, there's no need to read logs once most of the development has been done. Right? So, all too often, servers and other assets are left to rot after the most active development phase is over.

Sometimes during audits, I hear someone claim that they periodically read the logs from the previous week. I seriously doubt that. I'm not saying I don't believe them - it's just that reading logs is mostly a tedious and boring job, especially if there's a week's worth of them. The same way you can quickly "read" a blog post like this and remember nothing afterward, you can skim through the data and really easily miss something.

A better approach is to configure automatic alerts for certain suspicious log events and host activity, for example with syslog, SNMP monitoring, or SNMP traps. Then, based on these alerts, you can read the logs around the events in question. Remember to use encrypted transport mechanisms and authentication to avoid accidental information disclosure; the sketch below shows one way to do that.
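
To make the transport point concrete, here is a minimal sketch in Python that forwards a single alert-worthy log event to a central collector using syslog over TLS. The host name, port, and message are assumptions - point them at your own log collector.

```python
import socket
import ssl

# Hypothetical central log collector - replace with your own.
LOG_HOST = "logs.example.com"
LOG_PORT = 6514  # conventional port for syslog over TLS

def send_syslog_tls(message: str) -> None:
    """Send one syslog message over an encrypted, server-authenticated channel."""
    context = ssl.create_default_context()  # verifies the server certificate
    with socket.create_connection((LOG_HOST, LOG_PORT)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=LOG_HOST) as tls_sock:
            # <134> = syslog priority: facility local0, severity informational
            tls_sock.sendall(f"<134>{message}\n".encode("utf-8"))

send_syslog_tls("myapp: input validation failure for parameter 'q'")
```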

What counts as suspicious?

So what counts as suspicious activity? Unfortunately, that depends on the target system. Some of the symptoms you probably want to configure as alerts might turn out to be false positives.

Again, the often-repeated phrase "know your assets" really helps. Find out what kind of errors your applications produce in security-related error situations and monitor those error cases. Check the documentation, ask the developers, or test it yourself. If your production logs suddenly look the same as the logs during an audit, someone not-so-whitehat might be knocking!

In general, symptoms of something fishy happening include:

  • An unusual number of requests within a given time frame.
  • A large number of failed logins from a single user.
  • A large number of failed logins for multiple users from the same IP address - although this can be really tricky because of NAT implementations. A minimal detection sketch for these follows the list.
  • Someone logging in at strange hours or from strange locations. Pay attention to admin accounts. Of course, locations and times sometimes vary because of business trips, timezone changes, or working overtime, which may cause false positives.
  • Strange input patterns and, suddenly, a lot of input validation failures.
  • Sudden peaks of network activity.
  • Traffic to/from unusual ports. What counts as unusual depends, of course, on the set of services you provide. Skype and various instant messaging software can cause a lot of false positives.
  • Traffic to blacklisted or known malware domains and URLs.
  • Strange TLS certificates used in TLS connections.
  • Changes in DNS records.
  • Sudden peaks in CPU or memory consumption.
  • Lost connection to monitored hosts.
  • New files appearing in your file system in certain directories. If the server's purpose is to be a file server or a multi-user web server, this causes a lot of false positives.
  • New files with .js, .php, or .html extensions which use base64_decode, eval, preg_replace, substr, gzinflate, or similar functions often used by exploit-kit-type malware. Grepping for these regularly is a good idea; a scanning sketch follows the list.
  • High entropy in .js, .php, or .html files, which often implies obfuscated code or base64-encoded data blobs. Minified JavaScript files often cause false positives.
  • Weird or non-existent User-Agent and other headers. This is a very likely source of false positives, so creating alerts based on this alone is not a good idea. However, it can sometimes be used as a last line of checks.
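
To make the failed-login bullets concrete, here is a minimal detection sketch in Python. The log file path and line format are assumptions - adapt the regular expression to whatever your application actually logs, and tune the threshold to your traffic.

```python
import re
from collections import Counter

# Hypothetical log format - adjust the pattern to your own application.
FAILED = re.compile(r"login failed for user (\S+) from (\S+)")
THRESHOLD = 10  # too low drowns you in alerts, too high misses attacks

def scan(log_lines) -> None:
    """Count failed logins per user and per source IP, and flag bursts."""
    per_user, per_ip = Counter(), Counter()
    for line in log_lines:
        match = FAILED.search(line)
        if match:
            user, ip = match.groups()
            per_user[user] += 1
            per_ip[ip] += 1
    for user, count in per_user.items():
        if count >= THRESHOLD:
            print(f"ALERT: {count} failed logins for user {user}")
    for ip, count in per_ip.items():
        if count >= THRESHOLD:
            # Beware: a single NAT gateway can hide many legitimate users.
            print(f"ALERT: {count} failed logins from {ip}")

with open("/var/log/myapp.log") as log_file:
    scan(log_file)
```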

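The file-related bullets can likewise be turned into a simple recurring scan. The sketch below flags web files that call functions popular with exploit kits, and files whose content looks random enough to be packed or encoded. The web root and the entropy threshold are assumptions you will need to tune - remember that minified JavaScript will trip the entropy check.

```python
import math
import re
from pathlib import Path

# Function names popular with PHP/JS webshells and exploit-kit droppers.
SUSPICIOUS = re.compile(rb"base64_decode|eval|preg_replace|gzinflate")

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; 8.0 would be perfectly random."""
    if not data:
        return 0.0
    total = len(data)
    return -sum(
        (count / total) * math.log2(count / total)
        for count in (data.count(byte) for byte in set(data))
    )

# The web root is an assumption - point this at your own document root.
for path in Path("/var/www").rglob("*"):
    if path.suffix not in {".js", ".php", ".html"}:
        continue
    try:
        data = path.read_bytes()
    except OSError:
        continue  # unreadable files are skipped in this sketch
    if SUSPICIOUS.search(data):
        print(f"suspicious function call in {path}")
    entropy = shannon_entropy(data)
    if entropy > 6.0:  # threshold needs tuning per site
        print(f"high entropy ({entropy:.1f} bits/byte) in {path}")
```
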
Port scanning, while it can be a sign of reconnaissance, is pretty much the norm these days and counts as background noise. Usually, you don't have to get too alarmed about port scanning, although ignoring it completely is not wise either.

Configuring alerts can be difficult: it is a balancing act between a "better safe than sorry" approach and the security fatigue of drowning in a pile of alerts. You need to be able to spare some time for investigating alerts and for learning how to fine-tune the alert thresholds to a suitable level.

Steps towards better detection capabilities

Most importantly, you need a process and people to run that process. No security product will save you if you never look at the logs, never check the cause of an alert, or don't know what to do when you suspect a breach.

The technical solutions for enhancing detection can be a combination of the following:

  1. Centralized and correlated logging at a sufficient level of detail. You can read more about logging and security in my blog post Things that security auditors will nag about, Part 1: #NoLogs.
  2. Log event alerts for important events such as failures.
  3. SNMP monitoring or SNMP traps to monitor, for example, system performance, network usage, network outages, or software-specific error situations. Needless to say, you also need to review the alerts. Decide whether email is enough for you or whether you need SMS notifications for high-priority alerts. A minimal polling sketch follows the list.
  4. System auditing to monitor successful and failed logins, file access and modification, certain user activity, etc.
  5. File system integrity checking to ensure your configuration files contain only authorized modifications (a minimal sketch follows the list). You need to update the integrity check tool's database when you actually make changes, and restrict the files and folders to check, or you'll end up with an insane amount of alerts. You can also use operating system tools like find and grep to look for suspicious file contents.
  6. Web Application Firewall - but beware of leaving it in passive mode and never touching it again. Some legitimate monitoring tools may cause false positive alarms and blocking, so you need time to fine-tune the WAF.
  7. Intrusion Detection System or Intrusion Prevention System. Using a centralized system with host agents spread all over your network gets the most coverage, but the setup and configuration can take some time. Prioritize and start with your most important servers. As with Web Application Firewalls, running an IDS in passive mode and never taking a look at the alerts is a waste of money and effort.
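
Item 5 does not require a heavyweight product to get started. Below is a minimal sketch that hashes a restricted set of directories and compares them against a stored baseline. The watched paths and the baseline location are assumptions, and dedicated integrity checkers do the same job more robustly.

```python
import hashlib
import json
from pathlib import Path

# Watched paths are an assumption - restrict them, or you will drown in alerts.
WATCHED = ["/etc", "/var/www"]
BASELINE = Path("/var/lib/integrity-baseline.json")

def hash_tree(roots):
    """Map every regular file under the roots to its SHA-256 digest."""
    digests = {}
    for root in roots:
        for path in Path(root).rglob("*"):
            if path.is_file():
                try:
                    digests[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
                except OSError:
                    pass  # unreadable files are skipped in this sketch
    return digests

current = hash_tree(WATCHED)
if BASELINE.exists():
    baseline = json.loads(BASELINE.read_text())
    for path, digest in current.items():
        if path not in baseline:
            print(f"NEW file: {path}")
        elif baseline[path] != digest:
            print(f"MODIFIED file: {path}")
    for path in baseline.keys() - current.keys():
        print(f"DELETED file: {path}")
else:
    # First run: record the baseline. Re-run this step after every
    # authorized change, exactly as the list above recommends.
    BASELINE.write_text(json.dumps(current))
```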

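Proper SNMP monitoring (item 3) involves an agent and a manager, which is more than a snippet can show. As a lowest-common-denominator illustration of the same idea, the sketch below polls Linux's /proc for the load and memory peaks mentioned earlier; the thresholds are assumptions to tune against your hosts' normal baseline.

```python
import time
from pathlib import Path

LOAD_THRESHOLD = 4.0   # tune to your host's normal load average
MEM_THRESHOLD = 90.0   # percent of RAM in use

def mem_used_percent() -> float:
    """Rough memory usage computed from /proc/meminfo (Linux only)."""
    info = {}
    for line in Path("/proc/meminfo").read_text().splitlines():
        key, value = line.split(":")
        info[key] = int(value.split()[0])  # values are in kB
    return 100.0 * (1 - info["MemAvailable"] / info["MemTotal"])

while True:
    load1 = float(Path("/proc/loadavg").read_text().split()[0])
    mem = mem_used_percent()
    if load1 > LOAD_THRESHOLD:
        print(f"ALERT: 1-minute load average {load1:.2f}")
    if mem > MEM_THRESHOLD:
        print(f"ALERT: memory usage at {mem:.1f}%")
    time.sleep(60)  # poll once a minute; a real setup would page someone
```
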
Depending on the system you are protecting, you may not need all of the above. Numerous software products and appliances, both commercial and open source, can do some or all of this for you. Many suitable tools are readily available in the operating system or in your cloud environment, and you just need to enable and configure them.

Remember: detection is not only about buying security products and placing them on your network perimeter. The technical solutions are there to collect information and create alerts - you need people to react and investigate.
