During the past 12 years or so, I've been helping many organizations and teams build secure software systems. Here are my thoughts on the subject.
Before discussing how you actually bring security into software development work, I think it's best to clarify the goal. In my opinion, the goal is simply to get teams to create secure systems. But what does that mean? It means that
- we can trust (to some extent) that security is properly addressed in each of the systems,
- we know how secure the systems are, and
- we know what risks are involved in them.
Secure systems have sufficient security controls, are resilient to attacks, and make it possible to detect and recover from security incidents.
Where do teams typically fail?
How do teams fail to address security in development projects? I've observed these main reasons:
Not understanding how the system can be attacked. If you don't know what can happen, you can't count on attacks being prevented or even detected. In addition to the exposed technical parts of the system, you need to consider the security of the processes, business logic, and data handled by the system. The problem can lie in an insecure system architecture or a flawed process design.
Implementing insecure software. Even if the design of the system is secure, the implementation may not be. Technology choices are essential: the more you can rely on existing, current technology to take care of security, the better. You also need to know how to use that technology. Everything the team implements as custom code is prone to vulnerabilities. This means the team must understand the most important technical security risks, such as the OWASP Top 10, and what they mean specifically for their technology. More often than not, the technology (all the third-party components and frameworks) already provides some of the technical controls, but not all.
Having insecure development and deployment mechanisms. If you can't protect the integrity of your source code, libraries, and deployments, you can't keep the system secure. Source code and binary repositories must be secured and have proper access control. Builds and deployments to production must have a proper audit trail and integrity protection. Test data and cryptographic keys must be protected from unauthorized access.
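As a small illustration of integrity protection in a deployment pipeline, a release step can refuse to ship an artifact whose checksum doesn't match a known-good value recorded at build time. This is only a sketch of the idea; the function names and the way the expected digest is obtained are invented for the example.

```python
import hashlib
import hmac


def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_artifact(path: str, expected_digest: str) -> bool:
    """Return True only if the artifact matches the recorded build digest.

    hmac.compare_digest avoids timing differences when comparing digests.
    """
    return hmac.compare_digest(sha256_of(path), expected_digest)
```

A deployment script would call `verify_artifact` with the digest published by the build, and abort the release on a mismatch, giving a simple, auditable integrity check between build and production.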
Not doing the security work you need. If you have a secure design, secure implementation, and secure delivery mechanism for the system, isn't that enough? Not quite. How do you even know that secure design and implementation have been achieved? There should be supporting, measuring, and verifying activities, run by security processes embedded in the software development process. Most importantly, you should have visibility into the residual security risk, which inevitably remains for every system even after everything feasible has been done.
Seven tips on how to address these concerns
Threat analysis. Start doing threat analysis, even if it's just stopping for 15 minutes after each sprint planning to consider which of the stories might actually have a security impact and what assets you need to protect. Maintaining an understanding of the system's attack surface is also very helpful. For threat modeling, STRIDE or attack trees work fairly well but quickly get out of hand if you're not careful.
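To make attack trees a bit more concrete, here is a minimal sketch of one: a goal decomposed into AND/OR subgoals, with a small function that checks whether the goal is reachable given which leaf attacks you consider feasible. The tree contents are invented for illustration, not taken from any real assessment.

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    """One node in an attack tree: a goal with AND/OR subgoals."""
    name: str
    gate: str = "OR"                      # "OR": any child suffices; "AND": all needed
    children: list = field(default_factory=list)


def feasible(node: Node, possible_leaves: set) -> bool:
    """A leaf is feasible if listed as possible; inner nodes combine children."""
    if not node.children:
        return node.name in possible_leaves
    results = [feasible(c, possible_leaves) for c in node.children]
    return all(results) if node.gate == "AND" else any(results)


# Hypothetical tree: stealing customer data through the application
tree = Node("steal customer data", "OR", [
    Node("exploit injection flaw"),
    Node("abuse stolen credentials", "AND", [
        Node("phish a user"),
        Node("bypass MFA"),
    ]),
])
```

With this structure, `feasible(tree, {"phish a user"})` is false (MFA still blocks the path), while adding `"bypass MFA"` to the set makes the goal reachable, which is exactly the kind of reasoning the 15-minute exercise is meant to prompt.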
Secure technology choices. Discover what technologies are in use and how they support technical security, for example regarding input validation, output encoding, encryption, and logging. Also, make sure the technology is recent enough and that up-to-date versions are (and will be) in use. Know what interfaces or services the components expose and how to configure them securely. This also applies to cloud platforms' security services (to the extent dev teams manage them).
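As one small example of leaning on the platform instead of custom code, output encoding is something most stacks already provide; in Python, the standard library's `html.escape` does it. The rendering function here is a hypothetical stand-in for wherever user input meets HTML.

```python
import html


def render_comment(user_input: str) -> str:
    """Encode user-supplied text before embedding it in HTML.

    Relying on the standard library's encoder is safer than hand-rolled
    escaping; custom code is where vulnerabilities tend to creep in.
    """
    return "<p>" + html.escape(user_input, quote=True) + "</p>"


print(render_comment('<script>alert("x")</script>'))
# -> <p>&lt;script&gt;alert(&quot;x&quot;)&lt;/script&gt;</p>
```

The same principle applies to validation, encryption, and logging: prefer the controls your frameworks already ship over reimplementing them.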
Technical security guidelines and requirements. The more you can rely on things being implemented the same way across different systems and products, the better. This should cover not only the technology choices but also the overall security design (for example, when and what encryption to use) and the implementation of security controls (such as authentication, logging, and key management). A common set of practical security requirements is a good way to get different teams going, and an obvious one if you need to comply with a security standard.
Automated scanning. Security implementation mistakes will be made eventually, so it is best to catch them as soon as possible, when they are least expensive to fix. Automated code scanning (SAST), dynamic scanning (DAST), and component vulnerability scanning (SCA) provide concrete observations of the implementation. There are good open-source and commercial tools for this purpose that should be integrated into a CI system. Just be aware that there will initially be a good number of false positives that you need to review before the scans deliver continuous benefit. It is very important to organize getting the results into teams' backlogs (for example, by using application vulnerability correlation, or AVC, tooling).
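The triage step can start as simply as filtering scanner findings against a reviewed suppression list before they reach the backlog. This sketch uses an invented finding format, not any particular tool's output, just to show the shape of the workflow.

```python
def triage(findings: list, suppressed_ids: set) -> tuple:
    """Split scanner findings into backlog candidates and reviewed noise.

    Findings whose id is on the suppression list have already been reviewed
    and judged false positives, so teams only see actionable items.
    """
    backlog, noise = [], []
    for finding in findings:
        (noise if finding["id"] in suppressed_ids else backlog).append(finding)
    return backlog, noise


# Hypothetical findings from a SAST run
findings = [
    {"id": "xss-login-form", "severity": "high"},
    {"id": "weak-hash-test-util", "severity": "low"},  # reviewed: test-only code
]
backlog, noise = triage(findings, suppressed_ids={"weak-hash-test-util"})
```

Keeping the suppression list in version control alongside the code gives the review decisions an audit trail, and the remaining `backlog` items can be fed to the team's issue tracker.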
Secure development process. Without a process or other commonly followed practices, no work is done consistently. A secure development process shouldn't exist as a separate entity but be included in the overall software development process and practices. There are different models for this, but including security in the already existing Definition of Ready (DoR) and Definition of Done (DoD) seems to be a natural way to get started. If nothing else, ensure at least these four steps are considered in the development process: 1) identify threats, 2) determine security controls (requirements), 3) verify security, and 4) assess and accept residual risk.
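As a lightweight illustration, those four steps can be tracked per story as a simple Definition of Done check. The field names and story representation here are invented; in practice this would live in your issue tracker or DoD checklist rather than in code.

```python
# The four security steps, in the order they should be completed
SECURITY_DOD = (
    "threats_identified",
    "controls_determined",
    "security_verified",
    "residual_risk_accepted",
)


def security_steps_missing(story: dict) -> list:
    """Return the security DoD steps a story has not yet completed."""
    return [step for step in SECURITY_DOD if not story.get(step)]


# Hypothetical story: threat work done, verification and risk sign-off pending
story = {"threats_identified": True, "controls_determined": True}
```

Here `security_steps_missing(story)` would flag the verification and residual-risk steps as still open, so the story cannot be declared done until someone has explicitly accepted the remaining risk.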
Security champions. Security champions are people in teams who act as the team's "security conscience" and can raise a flag when potentially insecure decisions are being made. They can work with security specialists to get things properly addressed. Their role is not to be accountable or responsible for security, but to have more knowledge and good visibility into the team's work, and to help the team succeed in its security responsibilities.
Training and awareness. To identify and avoid security problems, teams must know about them. They can learn from a selected set of relevant OWASP ASVS items, but they learn best by doing. Looking at actual vulnerabilities in their own code and fixing them together is a good way to train people. You can also use online learning platforms for secure development, which offer secure coding exercises and other content. Just make sure they align with your technology choices.
There are many other things you can do to improve the security of software development, but this list gets you started quite well.
One key thing is to treat security as one aspect of quality and not an isolated additional attribute to the system. When security is a part of quality, it's often received well by competent developers who want to deliver good quality solutions.