
What’s Wrong with Bug Bounty Programs?


What Is a Bug Bounty Program? 

The concept of a bug bounty program is simple. You allow a group of security researchers, also known as ethical hackers, to access your systems and applications so they can probe for security vulnerabilities – bugs in your code – and you pay them a bounty for each bug they find. The more bugs a researcher finds, the more money they make.

Assessing the value or success of a bug bounty program can be difficult because there is no single methodology for implementing and managing one. A program could employ a couple of hackers or several hundred. It could be run internally or with a bug bounty partner. How much should the customer pay for the program, and what reward should each hacker receive?

While many organizations have jumped on the bug bounty bandwagon over the last decade or so, the results have been disappointing for some. Many of those companies have talked with Synack, and their concerns fall into three major categories: researcher vetting and standards, quality of results, and program control and management.

Researcher Vetting and Standards

When you implement a bug bounty program, you are relying on ethical hackers – security researchers who have the skills and expertise to break into your systems and root around for security vulnerabilities. Someone has to vet those hackers to ensure they can do the job and have the depth and diversity of experience required to provide a thorough vulnerability assessment. But how do you know that someone signing up for the program has the right skills and is trustworthy? There are no standards to go by, and some bug bounty programs are open to just about anyone.

Quality of Results

Bug bounty programs are notorious for producing quantity over quality. After all, more bugs found means more rewards paid. So security managers often find themselves wading through piles of low-quality, low-severity vulnerabilities that divert attention and resources from serious, exploitable ones. For example, an organization with internal service-level agreements (SLAs) for vulnerability remediation may be forced to spend time on low-priority patching just to keep its metrics looking good. This isn't always the best path to minimizing risk in the organization.
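To illustrate the triage burden, here is a minimal sketch of how a security team might sort incoming reports so that exploitable, high-severity issues surface first. The finding records and the 7.0 CVSS threshold are hypothetical assumptions for illustration, not taken from any particular program; in practice the cutoff would come from the organization's own SLAs.

```python
from dataclasses import dataclass

# Hypothetical finding record; real bug bounty platforms expose richer fields.
@dataclass
class Finding:
    title: str
    cvss: float        # CVSS base score, 0.0-10.0
    exploitable: bool  # whether a working proof of concept was provided

def triage(findings: list[Finding], min_cvss: float = 7.0) -> list[Finding]:
    """Return only high-severity findings, most actionable first."""
    actionable = [f for f in findings if f.cvss >= min_cvss]
    return sorted(actionable, key=lambda f: (f.exploitable, f.cvss), reverse=True)

reports = [
    Finding("Verbose error message", 3.1, False),
    Finding("SQL injection in login form", 9.8, True),
    Finding("Missing security header", 4.3, False),
]

for f in triage(reports):
    print(f"{f.cvss:>4}  {f.title}")
```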

Results also depend heavily on the group assigned to do the hacking. Small groups – we have seen programs with only a handful of researchers – suffer from a lack of diversity and vision. Large groups usually cast a wider net but are more difficult to manage and control. How much your researchers get paid matters, too: if a company pays “average” compared with other targets on a bug bounty platform, it will not attract above-average researchers. Published reports from bug bounty companies state that only 6-20% of reported vulnerabilities have a CVSS (Common Vulnerability Scoring System) score of 7.0 or greater – well below the typical customer experience at Synack.

Program Control and Management

By far the biggest drawback to bug bounty programs is the lack of program control and management. Turning a team of hackers loose to find security bugs is only the first step. Did the researchers demonstrably put in effort, whether measured in hours spent or breadth of coverage? What happens after bugs are found? How are the results reported? Who follows up with triage or remediation? Who verifies resolution?

The short answer is… it depends. Every program has its own processes and procedures. The longer answer is that most bug bounty programs don’t put much effort into this area. Hackers are left to go off on their own with little monitoring, and they don’t see analytics that help them choose where to hack efficiently. Internal security teams may need to wade through the resulting reports, triage the found bugs, remediate each issue, and verify that every bug has been appropriately addressed.
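As a rough illustration of the follow-up work that lands on internal teams, here is a minimal sketch of a vulnerability-handling lifecycle. The states and transitions are a generic assumption, not any specific program's workflow.

```python
from enum import Enum, auto

class Status(Enum):
    REPORTED = auto()
    TRIAGED = auto()
    REMEDIATED = auto()
    VERIFIED = auto()
    REJECTED = auto()

# Allowed transitions in this simplified workflow; real programs vary widely.
TRANSITIONS = {
    Status.REPORTED: {Status.TRIAGED, Status.REJECTED},
    Status.TRIAGED: {Status.REMEDIATED, Status.REJECTED},
    Status.REMEDIATED: {Status.VERIFIED, Status.TRIAGED},  # failed retest goes back to triage
    Status.VERIFIED: set(),
    Status.REJECTED: set(),
}

def advance(current: Status, target: Status) -> Status:
    """Move a finding to the next state, enforcing the workflow."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"cannot move from {current.name} to {target.name}")
    return target

# Example: one report making it all the way through.
state = Status.REPORTED
for step in (Status.TRIAGED, Status.REMEDIATED, Status.VERIFIED):
    state = advance(state, step)
    print(state.name)
```

Without someone explicitly owning each of these transitions, findings tend to stall somewhere between “reported” and “verified” – exactly the gap many bug bounty programs leave to the customer.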

An Integrated Approach to Vulnerability Testing

These are just a few of the problems associated with bug bounty programs. But even without these issues, attacking vulnerabilities with a bug bounty program is not a panacea to test your cybersecurity posture. Finding high-criticality vulnerabilities is fine, but you need to consider context when assessing vulnerabilities. You need to take an integrated approach to vulnerability testing.

Synack provides high-quality vulnerability testing through its community of 1,500+ vetted security researchers, the Synack Red Team (SRT). Not tied to a bug bounty concept, Synack manages the SRT and provides a secure platform so researchers can communicate and perform testing over VPN. Through the platform, Synack can monitor all researcher traffic directly and analyze, log, throttle or halt it.

Synack researchers are all highly skilled, and bug reports typically have signal-to-noise ratios approaching 100%, with high- and critical-severity vulnerabilities typically making up approximately 40% or more of reports. Beyond simply finding bugs, researchers consider context and exploitability and recommend remediation steps. They can retest to confirm resolution or help customers find a more airtight patch.

Furthermore, unlike bug bounty programs, Synack tests are sold to organizations on a “flat-fee” model, which means researchers are paid based on their vulnerability findings while the cost to you remains fixed. This removes the burden of setting aside a large budget for vulnerability payouts, which can get costly under a bug bounty model.
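As a back-of-the-envelope comparison, the budgeting difference looks roughly like this. Every figure below is invented for illustration and does not reflect Synack's pricing or any bug bounty platform's actual payout schedule.

```python
# Hypothetical per-severity payouts under a pay-per-bug bounty model.
BOUNTY_PAYOUTS = {"low": 250, "medium": 1_000, "high": 5_000, "critical": 15_000}

def bounty_cost(findings_by_severity: dict[str, int]) -> int:
    """Total payout cost when every accepted finding is paid individually."""
    return sum(BOUNTY_PAYOUTS[sev] * count for sev, count in findings_by_severity.items())

# Illustrative engagement: the bounty cost varies with what is found,
# while a flat-fee engagement is known before testing starts.
findings = {"low": 40, "medium": 12, "high": 5, "critical": 2}
flat_fee = 60_000  # assumed fixed engagement price

print(f"pay-per-bug cost: ${bounty_cost(findings):,}")  # scales with results
print(f"flat-fee cost:    ${flat_fee:,}")               # fixed regardless of findings
```

The point is not the specific numbers but the budgeting profile: one cost scales with what is found, the other is known up front.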

So when you consider your next offensive security testing program, know what you’re getting with bug bounty programs. Think comprehensive pentesting with a company that can help you locate vulnerabilities that matter and address them, now and in the future.