The security research community is one of GitHub’s greatest assets. Every year, researchers from around the world help us find and fix vulnerabilities, making the platform safer for over 180 million developers. Our bug bounty program exists because we believe that collaboration with external researchers is one of the most effective ways to improve security, and we remain deeply committed to it.

But like every bug bounty program, we’re adapting to a changing landscape. We want to share what we’re seeing, what we’re doing about it, and how we think about the security boundaries of a platform like GitHub.

## The volume problem

Over the past year, submission volume across the industry has grown significantly. New tools, including AI, have lowered the barrier to entry for security research, which in many ways is a positive development. More people exploring attack surfaces means more opportunities to find real issues.

However, alongside the growth in legitimate reports, we’ve seen a sharp increase in submissions that don’t demonstrate real security impact. These include reports without a proof of concept, theoretical attack scenarios that don’t hold up under scrutiny, and findings that are already covered by our published ineligible list. This isn’t unique to GitHub. Programs across the industry are grappling with the same challenge, and some have shut down entirely.

We don’t want to go that direction. Instead, we want to invest in making our program better.

## What makes a strong submission

We’re raising the bar on what we consider a complete submission. Going forward, reports will be evaluated more strictly against these criteria:

- **A working proof of concept with demonstrated security impact.** Show us the impact, don’t just describe it. What could an attacker actually achieve? We need a working proof of concept that demonstrates real exploitation and concrete security impact. Show us the boundary that can be crossed, not just that one theoretically exists. If your report says “this could lead to…” but doesn’t show that it does, it’s incomplete.
- **Awareness of scope and ineligible findings.** Before submitting, review our scope and ineligible findings list. Reports covering known ineligible categories (DMARC/SPF/DKIM configuration, user enumeration, missing security headers without a demonstrated attack path, and others) will be closed as Not Applicable, which may impact your HackerOne Signal and reputation.
- **Validation before submission.** No matter what tools you use (scanners, static analysis, AI assistants), you need to validate the output before submitting. A false positive that’s been manually reviewed is caught before it wastes anyone’s time. One that hasn’t is just noise.

## We welcome AI in security research

We want to be explicit about this: we have no problem with researchers using AI tools. AI is a force multiplier, and we expect it to play an increasing role in security research. We use AI across our own internal security programs, and we’re seeing the best external researchers do the same. We welcome it.

What we need is the same standard we’ve always expected: validation. An AI-assisted finding that’s been verified, reproduced, and submitted with a working proof of concept is a great submission. An unvalidated output submitted as-is, without reproduction or demonstrated impact, is not. This isn’t a new standard. It’s the same standard we apply to scanner output, static analysis, or any other tool. The human researcher is accountable for the accuracy of the submission.
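To make “verified, reproduced, and submitted with a working proof of concept” concrete, here is one minimal sketch of the kind of reproduction script that demonstrates impact rather than describing it. The endpoint, token, and field names are hypothetical placeholders, not a real GitHub API or vulnerability; the point is that the script shows the boundary actually being crossed.

```python
"""Hypothetical PoC reproduction script for an authorization bypass.

Every URL, token, and field name below is a placeholder for illustration only;
this does not describe a real GitHub endpoint or vulnerability.
"""
import requests

VICTIM_RESOURCE = "https://api.example.test/v1/private-reports/12345"  # owned by the victim account
ATTACKER_TOKEN = "token-for-a-low-privilege-attacker-account"          # placeholder credential


def reproduce() -> None:
    # Request the victim's resource using only the attacker's credentials.
    resp = requests.get(
        VICTIM_RESOURCE,
        headers={"Authorization": f"Bearer {ATTACKER_TOKEN}"},
        timeout=10,
    )

    # A complete PoC shows the crossed boundary, not a theory about it:
    # the unauthorized request succeeds and returns protected data.
    assert resp.status_code == 200, f"Not reproduced: HTTP {resp.status_code}"
    assert "owner" in resp.json(), "Response did not include the protected fields"
    print("Reproduced: attacker credentials can read the victim's resource")


if __name__ == "__main__":
    reproduce()
```

Output from a run like this, alongside the raw HTTP requests and responses, is the kind of evidence that lets triage confirm a finding quickly.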
We’d also ask researchers to keep reports concise and structured. A strong report has three things: a short summary of the issue, clear steps to reproduce with supporting evidence (screenshots, HTTP requests, terminal output), and an impact statement explaining what an attacker can actually achieve. That’s it. Verbose reports, such as multi-page theoretical narratives, restated background context, or AI-generated filler, slow down triage because the actual finding gets buried. The clearer and more direct your report, the faster we can act on it.

The tools don’t matter. The quality of the work does.

## Understanding GitHub’s security model: Shared responsibility

One pattern we see frequently deserves its own discussion. Many reports describe scenarios where a user interacts with attacker-controlled content (a malicious repository, a crafted issue, untrusted code) and experiences an undesirable outcome. These reports are often well-written and technically accurate in their observations, but they misunderstand where the security boundary lies.

We invest heavily in systems and teams dedicated to detecting and handling malicious content across the platform, from automated scanning to manual review. That said, GitHub operates on a shared responsibility model. Users are responsible for:

- **Choosing which repositories, issues, and code they trust.** GitHub hosts over 600 million repositories. Not all of them are benign. Users are expected to exercise judgment about what they interact with.
- **Reviewing content before executing or interacting with it.** This applies to code, scripts, workflows, and any other executable content.
- **Understanding that cloning a repository means choosing to trust that code.** Git hooks, build scripts, and other repository-level automation execute because the user chose to check out that repository, as illustrated below.
- **Configuring their own environment securely.** Token management, credential storage, and local security settings are the user’s responsibility.
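As one minimal, hypothetical illustration of that third point: a repository’s build tooling is code, and it runs with the user’s privileges as soon as the user builds or installs the project they cloned. Nothing below is GitHub-specific behavior or a real package; it is simply how source builds work.

```python
# setup.py in a hypothetical repository the user has just cloned.
# Building or installing the project (for example, `pip install .`) executes
# this file with the user's privileges, before any of the project's "real"
# code is ever imported.
import subprocess

from setuptools import setup

# Anything placed here runs at build/install time. The harmless echo below
# stands in for whatever command the repository author chose to ship.
subprocess.run(["echo", "build scripts run as the user who invoked the build"], check=True)

setup(name="example-project", version="0.1.0", py_modules=[])
```

The same is true of Makefiles, npm lifecycle scripts, and similar build automation; the specific tool is incidental.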
When an “attack” requires the victim to actively seek out and engage with attacker-controlled content (cloning a malicious repo, asking an AI tool to analyze untrusted code, opening a crafted file), the security boundary is the user’s decision to trust that content. These scenarios generally don’t represent a bypass of GitHub’s security controls.

### Common examples

To help researchers calibrate, here are patterns we see regularly that fall under shared responsibility:

| Scenario | Why it’s shared responsibility |
| --- | --- |
| Prompt injection via content the user chose to feed to an AI tool | The user decided to trust that content |
| Git hooks or filters executing code in a repo the user checked out | This is how Git works by design |
| Malicious content in a repository the user cloned | Cloning is an act of trust |
| LLM producing unexpected output when processing untrusted input | The user chose to provide that input |

Research in these areas is still extremely valuable. If you think you’ve found a blind spot in our defenses (a way to bypass an actual security control that affects users without requiring them to actively trust malicious content), that’s exactly what we want to hear about. Those findings are some of the most impactful submissions we receive. And if you come across content that violates our Terms of Service, please report it.

## What this means for researchers

If you’re already submitting quality research, thank you. Nothing changes for you except faster response times as we reduce queue noise.

If you’re newer to bug bounty, welcome! Take a few minutes to read our scope, review the ineligible list, and invest in a working proof of concept before submitting. Quality submissions from new researchers are always valued and appreciated.

If you’ve been prioritizing volume, we’d encourage a shift toward depth. One well-researched, validated finding is worth more than 10 speculative ones, both in bounty payout and reputation. The researchers who earn the most from our program are the ones who go deep.

## Changes to how we reward low-risk findings

Not every valid submission represents a meaningful security risk. Some reports identify hardening opportunities or documentation gaps that, while not exploitable, still lead to improvements we choose to make. We appreciate that work.

Going forward, we’re updating how we handle these cases. Submissions that don’t demonstrate significant security impact but do result in a code or documentation fix will be recognized with GitHub swag rather than a bounty payout. This lets us acknowledge the contribution while focusing our bounty resources on the findings that have the greatest impact on platform security.

We’d rather see researchers invest their time in deeper, high-impact research and be compensated accordingly than optimize for volume on low-risk findings.

## Looking ahead

We’re committed to making GitHub’s bug bounty program one of the best in the industry, for researchers and for the security of the platform. That means faster triage, clearer communication, and ensuring that valid findings get the attention and compensation they deserve. Raising quality standards is part of that investment.

Security researchers make GitHub safer for every developer who depends on it. That work matters, and we don’t take it for granted.

Happy hacking! 🚀