Generally speaking, if you ask an average employee what they know from the world of cyber security or IT, chances are that VPNs, firewalls, hackers, DDoS, or pentesting will be mentioned. The tedious, unrelatable, and unskippable training sessions might make the list too. In any case, even though there is no dictionary entry for penetration testing, most people would define it as a combination of the following activities:
- Scoping an environment
- Defining unwanted outcomes
- Discovering vulnerabilities
- Simulating an attack
- Demonstrating the impact
Having the latter makes it a goal-oriented test ("What if?"), as opposed to a security test or security scan, where the idea is to find and identify vulnerabilities and weaknesses ("What are the ways?"). To demonstrate an attack, most pentests combine or chain several vulnerabilities to escalate or pivot through an application or infrastructure. By combining the various focus areas (mistakes in configuration, IAM, code, design, etc.) and spreading out over the implementation (software, middleware, infrastructure, etc.), what pentesting really shows is the effectiveness of the security measures in your development process. The list of attack vectors and vulnerabilities used is just a side effect.
If you have an application with 20 pages and SQL injection was found on 4 of them, then SQL injection isn't your problem. The problem is why there is a difference in the implementation of solutions. Is there a gap in security knowledge between developers? Aren't there any instructions on the wiki? Are some ORM-building blocks missing? Is there no 4-eyes principle, etc.? If you manage to find and fix that, you'll end up fixing all issues that spawn from the same core problem.
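The "missing building block" idea can be made concrete. The sketch below, my illustration rather than anything from the article, uses Python's `sqlite3` as a stand-in for the team's real database layer and contrasts the string-built query that caused the finding with the parameterized variant a wiki instruction or ORM helper would standardize:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

def find_user_vulnerable(name):
    # Anti-pattern: user input is concatenated into the SQL string.
    # A name like "' OR '1'='1" changes the meaning of the query.
    return conn.execute(
        "SELECT name FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver passes the value separately,
    # so the payload is treated as data, never as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # returns every row: the injection works
print(find_user_safe(payload))        # returns []: no user has that literal name
```

If the safe variant is the only building block the team ever reaches for, the same fix lands on all 20 pages at once.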
At the same time, it should be obvious that if you never fix the problem itself, just the incidents, you'll keep getting new incidents. It is a self-sustaining market. It's like going to the dentist and finding out you have a cavity, but after having it filled, changing nothing about your brushing technique or general dental hygiene.
We need to change our idea of why we use these tests. And while we are at it, let's see if we can improve them. Any assessment or test that aims to gain assurance in the security of an IT system involves time-consuming knowledge transfer between the parties. At the start, the pentesters need to construct a model of the intended functionality, the actual implementation, and the configuration. And at the end, the development team needs to understand what the issues are and how to resolve them, but more importantly, why they were there in the first place, to prevent future incidents.
This waste can be reduced, and the quality of the outcome raised, by pairing the penetration testers with the DevOps team members during the penetration test. During the test, the penetration testers and members of the development team work on the system together, paired as driver and navigator, while the DevOps engineers are on hand to explain the way of working, known issues, and observed behavior. Every hour the pentesters spend researching on their own is an hour not spent on actual testing, so minimize that effort and maximize the outcome.
A friend of mine always told an anecdote to describe the difference between a black-box test and a white-box test. In the first, you are asked how many windows a building has, but you need to figure this out in the dark, from behind a gate, by throwing rocks. This has some drawbacks, of course: as you are aiming blindly, you might hit something else or break a window. In the second scenario, you are given the same objective, but now it is a clear day, you have the keys to the building so you can walk anywhere, and you have the blueprint in your hand. Now you can give a complete overview of where you have been and what you have tried, what should be there versus what is actually there, and in the end you can give an exact number of windows. With a back-to-back test, we take this advantage to a whole new level: while walking through the building, you are accompanied by the owner, the architect, the carpenter, etc., and you can challenge them to explain why a certain type of glass was used or the reason for a certain layout.
Besides saving time on understanding and figuring things out, the team can adopt the pentester's mindset, approach, and means (such as tools and techniques). Any findings can be explained on the spot against a real-life system the team can relate to, namely their own. The findings can also be reported in a format the DevOps team is familiar with, for example as tickets in Jira, and possible remediation efforts can go straight onto the team's backlog. This saves time writing the report, which can stay at 'management summary' level, leaving more time for effective testing during the attack window. It also keeps the findings up to date as they are discovered, instead of delivering them in one go.
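As a sketch of what reporting into the team's own tooling might look like, the snippet below maps a finding onto the payload shape of Jira's REST API v2 create-issue endpoint. The finding fields, project key, and label scheme are my illustrative assumptions, not something the team or the article prescribes:

```python
def finding_to_issue(finding, project_key="DEV"):
    """Map a pentest finding onto a Jira create-issue payload.

    Follows the field layout of Jira's REST API v2 create-issue
    format; the finding dictionary itself is a made-up example.
    """
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Bug"},
            "summary": f"[pentest] {finding['title']}",
            "description": (
                f"Location: {finding['location']}\n"
                f"Impact: {finding['impact']}\n"
                f"Reproduction: {finding['repro']}"
            ),
            "labels": ["pentest", finding["severity"]],
        }
    }

finding = {
    "title": "SQL injection in search form",
    "location": "/search?q=",
    "impact": "Read access to the users table",
    "repro": "Submit ' OR '1'='1 as the search term",
    "severity": "high",
}
payload = finding_to_issue(finding)
# The payload could then be POSTed to /rest/api/2/issue on the team's
# Jira instance; authentication and error handling are omitted here.
print(payload["fields"]["summary"])
```

Because the finding lands directly in the team's backlog format, nothing needs to be translated out of a PDF report afterwards.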
And for each remediation, the pentesters can coach the DevOps engineers on how to identify such issues proactively, how to write tests that reproduce the outcomes, and how to deal with the findings within their Agile workflow. This gives the team the capability to automate some of the actions the pentester performed, with the penetration testers and the DevOps engineers collaborating to make that happen.
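One way such automation can look, as a minimal sketch: the pentester's manual payload list becomes a regression test that runs on every build. The `validate_username` filter and the payload list below are hypothetical stand-ins for whatever input handling and findings the team actually has:

```python
import re

def validate_username(value):
    # Hypothetical allow-list filter the team might own:
    # letters, digits, dots, hyphens, 3-32 characters.
    return bool(re.fullmatch(r"[A-Za-z0-9.-]{3,32}", value))

# Mirrors what the pentester tried by hand during the attack window.
INJECTION_PAYLOADS = [
    "' OR '1'='1",
    "admin'--",
    "1; DROP TABLE users",
]

def test_rejects_known_payloads():
    # Runs in CI on every build, so the finding cannot silently return.
    for payload in INJECTION_PAYLOADS:
        assert not validate_username(payload), payload

test_rejects_known_payloads()
print("all payloads rejected")
```

The point is not this particular filter but the habit: every finding the pentester demonstrates leaves behind an automated check the team keeps running.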
In short, we can improve our pentesting by:
- Fix the problem, not the issue
- Collaborate in the effort, not just support
- Adopt any lessons learned into the way of working