From the course: Dynamic Application Security Testing

Manual vs. automated testing

- [Instructor] Application security testing procedures should involve a combination of manual, hands-on tests and automated scans. It's up to you, though, to strike the right balance between the two. Far too often we jump right into automated scans to figure out where our apps might be exposed. In my experience, this approach is severely limiting. If you want to find the right balance between manual and automated dynamic tests, you should start by conducting static application security tests. Review all the available documentation on both the app and the organization's security requirements. Review the results of the latest source code security reviews. Review the outcome of the static tests conducted to measure the code against the OWASP Top 10. Use all that information to build a foundational understanding of the app. If you do, you'll be able to identify relevant tools and techniques for dynamic testing much more quickly.

Take OWASP ZAP, for example. In the options menu, one of the settings you can modify is the list of globally excluded URLs. These are the URLs that you don't want the scanner to interact with. If you spend time reviewing the documentation on your application's administrative features, then you can fine-tune your scan to avoid breaking the app on its first run. Likewise, you can upload a list of known URLs in the forced browse option to ensure that you're scanning the entire application and not missing any URLs that aren't immediately apparent to an end user. And these are just a couple of quick wins in ZAP. If your source code security reviews revealed that your developers are struggling to defensively code against SQL injection attacks, then you might use a tool like sqlmap in your testing activities in addition to ZAP. That way you're able to spend your dynamic testing time wisely, focusing on risks that came to light as a result of your static testing efforts.
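ZAP's globally excluded URLs are regular expressions matched against every URL the scanner would otherwise touch. As a rough illustration of the idea (the patterns, hostnames, and URL list below are hypothetical examples, not part of the course), here's a minimal sketch of how such an exclusion filter behaves when applied to a known-URL list like the one you might feed into forced browse:

```python
import re

# Hypothetical exclusion patterns, in the spirit of ZAP's
# "Globally Excluded URLs" setting: regexes for admin and
# logout endpoints you don't want an automated scan to hit.
EXCLUDED = [
    re.compile(r"https?://app\.example\.com/admin/.*"),
    re.compile(r"https?://app\.example\.com/logout.*"),
]

def in_scope(url: str) -> bool:
    """Return True if a URL should be scanned (it matches no exclusion)."""
    return not any(pattern.match(url) for pattern in EXCLUDED)

# A known-URL list, like one you might upload for forced browsing.
known_urls = [
    "https://app.example.com/login",
    "https://app.example.com/admin/delete-user",
    "https://app.example.com/reports/annual",
    "https://app.example.com/logout",
]

# Only the non-excluded URLs remain as scan targets, so the scanner
# never touches the destructive admin or session-ending endpoints.
scan_targets = [url for url in known_urls if in_scope(url)]
print(scan_targets)
```

The payoff is the same one described above: documentation review tells you which endpoints are destructive, and a few minutes of configuration keeps the scanner from breaking the app on its first run.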
Dynamic testing doesn't need to be either manual or automated. In fact, your best results will come from performing both types of tests. After you review the results from your static testing, the next thing you should do is run a series of automated dynamic scans. A tool like OWASP ZAP can find potentially exploitable flaws in just a few minutes, flaws that might take you weeks to find otherwise, if you're able to find them at all. What's even better is that the scan results often include instructions on how to fix the flaws, which can be a huge time saver. With your automated scan results in hand, you're ready to dive into manual testing. You can use the output of the automated scans to pinpoint areas of the application that deserve closer inspection, separating the false positives from the actual vulnerabilities. You can also start testing for business logic flaws, something automated scanners aren't capable of testing effectively, and you can chain together lower-severity vulnerabilities to identify ways that an attacker might actually break your application. If you're not doing both manual testing and automated testing, you're only seeing a fraction of the larger picture.

George Box, the British statistician, hit the nail on the head when he said, "All models are wrong, but some are useful." While he originally made this observation about statistics, it holds true when discussing application security testing techniques. There's no perfect model for you to follow when it comes to finding the right balance between automated and manual testing, or between static and dynamic testing. The right mix is going to depend on your organization's level of security maturity. You're not alone in your struggle to balance limited resources and competing priorities in your application security testing efforts. Fortunately, you've got a pair of maturity models available to help you manage those priorities effectively.
The Software Assurance Maturity Model (SAMM) from OWASP ties security practices to five key business functions: governance, design, implementation, verification, and operations. You select the level that you think you're capable of, given the resources you're working with, and the SAMM will provide you with insights regarding the activities you should be focusing on. Guidance regarding security testing can be found under verification. The Building Security In Maturity Model (BSIMM) from Synopsys is similar to the SAMM, although it's organized differently. The BSIMM is grouped into four domains: governance, intelligence, SDLC touchpoints, and deployment. Each domain contains multiple practices, and it's in these practice details that you'll find guidance on how to improve your overall application security maturity. In this resource, security testing is part of the SDLC touchpoints domain. The SAMM and the BSIMM include guidance beyond just QA testing, but you've got to start somewhere, right? OWASP even maintains a BSIMM-to-SAMM mapping, just in case you want the best of both worlds. Remember, any testing you do is better than doing nothing at all. If you want your application security testing program to be the best it can be, start by putting something in place and improving over time. Don't be afraid to prototype or iterate. Pick what works for you and build on it. Take the stuff that doesn't work and throw it away. Over time, you'll find the right balance of automated and manual testing activities to help you accomplish the real goal: securing your application.