Hello! My name is David Charlton and I just joined the Synack team as the Director of Product Strategy. As I launch into the next phase of my career here, I first want to share my perspective on the industry as I've experienced it firsthand, along with some of the reasons I'm so excited about what Synack is doing.
After four great years working in financial services, I've decided it's time for a change and a new personal challenge. I've worked in the penetration testing industry for the best part of 15 years, both as a provider of penetration testing professional services to large enterprises and in the end-user environment, where I was accountable for security assurance through red teaming and penetration testing. Over this period, I've been fortunate to witness first-hand the changes and trends in this industry: the benefits, the challenges, and how technology innovation in one quarter drives change in another. As my time at Synack begins, I'd like to share why I chose to join this company and why I believe crowdsourced penetration testing, delivered through Synack's unique model, is set to trigger a significant shake-up in the industry.
First, before I dive in, let me provide a bit of context. A technology transformation is underway in large financial services firms: most are now setting a strategy for large-scale adoption of public cloud services to deliver cost savings and increased agility. Their development practices are evolving with Agile and DevOps methodologies, and there is an expectation to push new features into production with ever-increasing frequency. This change is, in turn, challenging security departments to keep up: to automate more, to streamline processes so they are faster and more efficient, and to enable the transformation.
Given the background in penetration testing I've already mentioned, I may be biased, but I love penetration testing. I love the way it cuts through all the BS. It doesn't rely on a paper-based view of risks, threats or controls, nor on the assumptions or expectations of the builders, buyers or sellers of these systems and applications. A pen test demonstrates with empirical evidence how robust the security posture of a system, application or piece of infrastructure really is. It is a highly effective, proven method for determining whether an asset has security weaknesses.
It’s probably worth providing a definition of penetration testing, due to the sometimes confusing use of terminology. When I say penetration testing I am talking about pitting the expertise and ingenuity of a skilled human assessor against a target. Some tooling is involved to automate certain tasks, but the value and benefit of the testing comes from the assessor’s understanding of how the target is expected to behave, determining specific misuse cases and executing the testing accordingly. The primary value is linked to the manual aspect of the security analysis.
Companies increasingly rely on technology to support their business goals, and ever-expanding attack surfaces combined with more sophisticated, more frequent cybersecurity threats have led security leaders to pursue automated solutions that allow them to scale cost-effectively. To date, no entirely automated vulnerability scanning method has been shown to be as effective or thorough as an expert security analysis of a complex target system. That has not stopped vendors declaring "the death of penetration testing", or at least its commoditization, by claiming that entirely automated assessment tools are a valid replacement. These include network vulnerability scanning products, automated web application scanning tools, and static (code) analysis tools, to name a few.
Now don't get me wrong: all of these tools can provide a lot of value; they solve part of the puzzle of helping security keep up with development. They can operate at scale and at a much lower cost (considering the number of assessments, applications, hosts, etc.), whilst providing normalized, prioritized outputs with enterprise reporting.
Further, some of their major benefits come from being deployable early in the SDLC or within the deployment toolchain. I fully support the view that we should find and fix security bugs earlier in the life cycle, and that we should automate wherever practical. However, in my experience, penetration testing consistently surfaces security flaws at the end of the SDLC that upstream security tools have failed, for whatever reason, to catch. Penetration testing remains an essential safety net that prevents critical security flaws from being exposed through applications and systems, with potentially significant financial, regulatory and reputational consequences.
Now with that said, I should acknowledge the challenges that traditional penetration testing introduces, because the process can be difficult for security executives, security operations teams and application and system owners subject to assessment.
First, the inherently manual nature of pen testing means it's not easy to scale. Getting access to a sufficient pool of expert testers is a huge challenge, and on top of that it takes a lot of time and effort to engage and schedule them. It's typical for the planning and resource-allocation phase to take weeks, sometimes leading to deployment delays or escalations, as well as cancellations when application owners aren't ready in time.
Traditional penetration testing is also expensive relative to entirely automated tools and processes. The costs involved aren't only the T&M day rates charged by consulting firms; they can also include cancellation charges for last-minute changes, the cost of test workstations and servers, specialized software for scanning and assessment, remote access infrastructure, the vetting of staff, and the management of all of the above by the firm's service delivery personnel.
Lastly, there are questions around how you ensure the quality of the technical assessment work being delivered. It's difficult to obtain adequate coverage during an assessment whilst also driving testers to find the highest-impact vulnerabilities. Additionally, how do you know what methods testers are following and how much useful work is actually being performed? Finally, do you know what testers are doing on the systems you give them access to, and can you trust them? These are all difficult things to manage and guarantee when you use traditional pen testing services.
The points I've shared bring me back to my motivation for joining Synack. I'm excited about this company because Synack offers the potential to overcome the inherent limitations of the traditional pen test whilst maintaining its core value proposition. It addresses scheduling, resourcing and scalability issues by leveraging a crowdsourced researcher pool and a SaaS platform; it addresses operational security concerns by vetting all researchers more rigorously than most internal teams or consultancies and by monitoring and recording all tester network traffic to targets; and it improves upon traditional penetration testing quality by using an incentive-driven approach that focuses a highly diverse researcher pool on high-impact vulnerabilities, whilst implementing clearly defined baseline checks based on industry-standard checklists.
However, I'm most excited about Synack's potential to solve for the DevOps-integrated penetration testing use case that is now a reality at many of the world's large financial institutions. Just as it has taken time for FIs and regulators to become comfortable with the adoption of public cloud computing, I expect crowdsourced penetration testing, delivered through the rigorous security model Synack has developed, to follow the same route and provide significant benefits to clients.