Things that security auditors nag about, part 5: No security requirements

Anne Oikarinen

Senior Security Consultant

November 2, 2018 at 14:04

This is the 5th part of our blog series "Things that security auditors will nag about and why you shouldn't ignore them". In these articles, Nixu's security consultants explain issues that often come up when assessing the security of web applications, platforms and other computer systems.

Once in a while in my security testing assignments, I come across products or systems that are full of security holes. It’s entertaining for a moment, but after having a full OWASP top 10 with XML External Entity injections, SQL injections, and bypassing all access controls, I’m beginning to feel that I want to cry instead.

But don’t start blaming developers or testers yet. Yes, not knowing about secure coding practices or lacking time for proper testing can introduce vulnerabilities. However, the biggest weaknesses arise from not paying attention to security in the design phase. These security flaws are also the hardest to fix. We’re talking about the lack of security requirements.

Ten years ago, the security requirements chapter in product documentation was often handled briefly, along the lines of “The system must be secure”. Where’s the continuous improvement when we're talking about security? Come on, it’s 2018. Even if the word ‘requirement’ brings back nasty memories of waterfall projects, you still need requirements.

Why? Because skipping security in the design phase can be really problematic. Adding late fixes for vulnerabilities may add unnecessary complexity to code, making the software more difficult to maintain – a security threat in itself. Lack of security controls may require you to redesign the software, retest and fix all the new bugs you created while at it, and lose precious time and money. In very bad cases, fixing all the design errors and vulnerabilities would require so much work that it’s just better to scrap the whole thing and start over.

What does ‘secure’ mean anyway?

OK, now you know that you need security requirements. But what is security, anyway? Confidentiality? Integrity? Availability? All of them?

When you’re planning an application, a new feature to it, or a complete computer system, you should stop and think for a minute: what are the important things that you need to protect?

Is it your marketing and customer database? Is it the secret sauce recipe that gives you a competitive advantage? Is it the data that your end-users have entered and trusted in your hands? Is it the availability and resources of your servers? Probably you want to use your servers for running your applications, not for mining cryptocurrency for cyber criminals.

Also, think about your user base for a minute. Are there different user groups that are allowed to do different things in your application? You probably want to make sure that more permissive functions are only available to the right people and that you can trace their actions. How many users are likely to use the application simultaneously? What about the impacts? What does it mean if they can’t access the application? Is it a minor annoyance, does the daily work stop, or do your customers start looking for another place to do their online shopping? What if someone tampers with the information? Thinking these through gives you guidelines for building your security requirements.

In addition, you should map out your attack surface. What interfaces and APIs are you exposing? There's more to the attack surface than meets the eye in the user interface. There's at least admin access to the servers for maintenance. There can be numerous integrations to other systems: some internal, some external. For example, internal integrations might include your email system, AD, databases, or your invoicing system. Third-party SSO login, payment systems, analytics, and customer service chats are some examples of external integrations. How do you authenticate the source of your data, how do you check that the data is meaningful, and what do you do if an integrated API does not respond? Even if you are accepting incoming data only from internal systems, you cannot blindly assume that they will always provide valid, meaningful data and never misbehave.
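As a minimal sketch of that last point, here is what validating data from an internal integration could look like. The field names and accepted values are illustrative, not from the original article: the idea is simply that even records arriving from a trusted internal system get their shape and values checked before use.

```python
# Hypothetical sketch: never trust data just because it comes from an
# internal system. Validate shape and values before using the record.

def validate_invoice(record: dict) -> dict:
    """Return a sanitized invoice record, or raise ValueError."""
    required = {"invoice_id", "amount", "currency"}
    missing = required - record.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if not isinstance(record["amount"], (int, float)) or record["amount"] < 0:
        raise ValueError("amount must be a non-negative number")
    if record["currency"] not in {"EUR", "USD", "SEK"}:
        raise ValueError("unsupported currency")
    # Keep only the fields we expect; drop anything extra.
    return {key: record[key] for key in required}
```

The same pattern applies to timeouts and error handling: decide in advance what happens when the integrated system sends garbage or does not answer at all, instead of discovering it in production.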


Use evil user stories to find application-specific requirements

To find out what is important in your application's case, ask yourself: what bad things could happen? What would motivate an attacker? In the beginning, you’ll probably quickly come up with scenarios like a script kiddie buying a DDoS botnet, someone guessing passwords, phishing, or users seeing someone else’s data. After a while, it can get difficult to think of all the bad things a skilled attacker could be capable of. That’s when you can come up with new threat scenarios by reversing the question. Instead of trying to get into the head of a cybercriminal, complete the following sentence: An attacker should not be able to...

For example:

  • The user should not be able to buy products from the online store without paying.
  • An attacker should not be able to crash the website even if there are multiple concurrent users.
  • The user should not be able to send a message as someone else.
  • The user should not be able to add other files than images to their profile.

These so-called evil user stories are highly dependent on the features your application has. Don't just stop here: put these stories into your backlog, investigate which security controls can be used as mitigations, and add the controls as the acceptance criteria of each story.
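One of those evil user stories can be turned into a checkable control. The sketch below is a hypothetical take on "The user should not be able to add other files than images to their profile"; the function name and accepted types are illustrative. Note that it checks both the declared content type and the file's magic bytes, because the attacker controls the declared type.

```python
# Hypothetical acceptance-criteria sketch for the evil user story:
# "The user should not be able to add other files than images."

ALLOWED_IMAGE_TYPES = {"image/png", "image/jpeg", "image/gif"}

# Magic bytes at the start of each allowed format.
MAGIC_SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "image/png",
    b"\xff\xd8\xff": "image/jpeg",
    b"GIF87a": "image/gif",
    b"GIF89a": "image/gif",
}

def is_allowed_profile_upload(declared_type: str, data: bytes) -> bool:
    # The declared content type alone is attacker-controlled, so it is
    # never enough on its own: the file contents must match it too.
    if declared_type not in ALLOWED_IMAGE_TYPES:
        return False
    return any(data.startswith(sig) and mime == declared_type
               for sig, mime in MAGIC_SIGNATURES.items())
```

A check like this doubles as a security-related test case later: feed it an executable renamed to `.png` and assert that it is rejected.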

You can also use this technique of thinking about what shouldn't be possible later on, when creating security-related test cases.



Use standard requirement sets

If you’re still unsure about what kind of security requirements you should aim for, there are standards to help you. These materials also go down to a detailed level to make sure you don't miss something. For example, concerning authentication, you may have remembered multi-factor authentication, but did you remember that password reset and login features shouldn't allow enumerating users?
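To illustrate the user enumeration point, here is a minimal, hypothetical sketch of an enumeration-safe password reset: the caller gets the same response whether or not the account exists, so the feature cannot be used to confirm which addresses are registered.

```python
# Hypothetical sketch: a password reset that does not reveal whether an
# account exists. The function name and message text are illustrative.

def request_password_reset(email: str, known_users: set) -> str:
    if email in known_users:
        # Send the actual reset email here (omitted in this sketch).
        pass
    # Identical response either way, so an attacker cannot distinguish
    # registered addresses from unknown ones.
    return "If the address is registered, a reset link has been sent."
```

The same idea applies to login errors: "invalid username or password" leaks less than "unknown user", and response timing should not differ noticeably between the two paths either.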

Web applications

If you’re developing web applications, take a look at the OWASP Application Security Verification Standard (ASVS). To help you select the requirements that make sense for your application, there are three levels:

Level 1 is for simple applications where confidentiality is not important and there is little motivation for attacking the application. The security controls protect from easily discoverable vulnerabilities.

Level 2 is what most applications should aim for. Implementing those controls protects from most software risks, such as authentication bypass, broken access control, information disclosure, injection flaws, and input validation errors. The list of requirements is detailed enough that it’s easy to cover special cases as well.

Level 3 is for applications that handle business-critical functions or health data. Meeting those requirements also means that you need to take security into account in every phase of your software development, build layers of security, and document your efforts.

ASVS is not just a list of demands: there are cheat sheets and code examples about implementing those requirements. There’s also a comprehensive OWASP Testing Guide to help you verify that your implementation of those requirements works the way it should.

The OWASP Top 10 is a list of the most common web application flaws. It’s a small subset of ASVS, but if you’re in a hurry, it’s a good idea to check at least those.

Mobile applications

Mobile applications have their own requirement set, the Mobile Application Security Verification Standard (MASVS). Many requirements and concepts are similar if you’re already familiar with web applications. However, things like resiliency against reverse engineering and tampering, and the importance of protecting or minimizing locally stored data, deserve special attention. The OWASP Mobile Security Testing Guide complements the requirements nicely, providing testing instructions for both iOS and Android along with some implementation examples.

Something else?

Even if you’re not designing web or mobile applications, you can still utilize ASVS for selecting security requirements; you just need to be a bit creative. Cross-site scripting and insecure HTTP headers may be out of the question now, but you can still think about all the different input validation flaws, for example, operating system command injection or log injection. If you have multiple users and user groups, requirements about access control and authentication still apply. And if your application is connected to a network, transport security must be ensured.
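The two injection flaws mentioned above can be sketched briefly. In this hypothetical example (the function names are illustrative), user input is kept as a single command argument rather than interpolated into a shell string, and newlines are encoded before logging so input cannot forge extra log entries.

```python
# Hypothetical sketches of two non-web input validation controls.

def build_ping_command(host: str) -> list:
    # OS command injection: pass the host as one argv element and never
    # interpolate it into a shell string, where an input such as
    # "example.com; rm -rf /" would smuggle in a second command.
    return ["ping", "-c", "1", host]

def safe_log_field(value: str) -> str:
    # Log injection: encode CR/LF so user-controlled input cannot forge
    # additional log lines or tamper with log analysis.
    return value.replace("\r", "\\r").replace("\n", "\\n")
```

A command built this way can be handed to `subprocess.run` without `shell=True`; the hostile input stays a harmless, single argument.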

For the Internet of Things, there’s also a top 10 list for IoT devices from 2014. Compared with the most recent IoT device vulnerabilities, things haven’t changed very much, so I would encourage IoT developers to take a look at that list too. The newest addition to the IoT security guidelines is the code of practice for consumer IoT security.

Creating security requirements is not that difficult after all

So, when you’re planning a new application, a new feature for it, or a complete computer system, remember these five things:

  1. What are the information and resources you want to protect?
  2. Know your attack surface. What kind of user interfaces, APIs, and integrations to other systems you have?
  3. Use evil user stories to find out what needs to be secured in your application’s case.
  4. Use standard security requirements. Pick an ASVS target level and aim for fulfilling these requirements.
  5. Put security controls to your backlog so they will be implemented along with the other features.

Selecting and creating security requirements doesn’t sound that complicated anymore, does it?
