Findings API
Retrieve and manage findings from your scans. Findings are the individual issues discovered by our 150 analysis modules.
🎯 Not a Developer? Start Here
You don't need to write code yourself. Copy the prompts below and paste them into Claude, ChatGPT, Cursor, or any AI coding assistant. Your AI will read the docs and build what you need.
💡 Why This Matters
Findings are the individual vulnerabilities and issues in your code. This API lets you build custom workflows around them: automatically create tickets, block deployments, or track resolution over time.
- Auto-create tickets: When a critical finding is detected, automatically create a GitHub issue or Jira ticket
- Block risky deploys: Check for critical findings before deploy and fail the pipeline if any exist
- Track fix velocity: Monitor how quickly your team resolves findings over time
Quick Start Prompts
Common tasks for working with security findings.
🎫 Auto-Create GitHub Issues for Critical Findings
Automatically create GitHub issues when critical security issues are found.
Read the Bugrit Findings API at https://bugrit.com/docs/api-reference/tests
Create a script that auto-creates GitHub issues for critical findings:
1. Fetch GET /api/v1/findings?scanId={scanId}&severity=critical
2. For each finding in response.findings array:
- Check if GitHub issue already exists (search by title)
- If not, create issue with:
- Title: "[Security] {finding.title}"
- Body: finding.description, finding.file, finding.line, finding.suggestion
- Labels: ["security", "critical", "bugrit"]
3. After creating, PATCH /api/v1/findings/{findingId} with status: "open"
4. Log summary: "Created X issues"
Use GITHUB_TOKEN and BUGRIT_API_KEY from environment.
My stack: [YOUR_STACK]
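For reference, here is a minimal sketch of the workflow this prompt describes, assuming Node 18+ (built-in fetch), BUGRIT_API_KEY and GITHUB_TOKEN in the environment, and a hypothetical GITHUB_REPO variable in "owner/repo" form; the duplicate-issue check is omitted and left to your implementation.

```typescript
// Sketch only: create a GitHub issue for each critical finding in a scan.
// Assumes BUGRIT_API_KEY, GITHUB_TOKEN, and GITHUB_REPO ("owner/repo") env vars.
const BUGRIT_API = "https://bugrit.com/api/v1";

async function createIssuesForCriticalFindings(scanId: string): Promise<void> {
  // 1. Fetch critical findings for the scan
  const res = await fetch(`${BUGRIT_API}/findings?scanId=${scanId}&severity=critical`, {
    headers: { Authorization: `Bearer ${process.env.BUGRIT_API_KEY}` },
  });
  if (!res.ok) throw new Error(`Bugrit API error: ${res.status}`);
  const { findings } = await res.json();

  let created = 0;
  for (const f of findings) {
    // 2. Create a GitHub issue (the "already exists" search is omitted in this sketch)
    const issue = await fetch(`https://api.github.com/repos/${process.env.GITHUB_REPO}/issues`, {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
        Accept: "application/vnd.github+json",
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        title: `[Security] ${f.title}`,
        body: `${f.description}\n\n**File:** ${f.file}:${f.line}\n**Suggestion:** ${f.suggestion}`,
        labels: ["security", "critical", "bugrit"],
      }),
    });
    if (issue.ok) created++;
  }

  // 3. Log summary
  console.log(`Created ${created} issues`);
}
```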
👩‍💻 Technical Details (for developers)

Findings Response Structure
```json
{
  "findings": [
    {
      "id": "fnd-001",
      "scanId": "scn-xyz789",
      "title": "SQL Injection vulnerability",
      "description": "User input passed directly to SQL query",
      "severity": "critical",
      "category": "security",
      "tool": "semgrep",
      "file": "src/api/users.ts",
      "line": 45,
      "code": "db.query(`SELECT * FROM users WHERE id = ${userId}`)",
      "suggestion": "Use parameterized queries",
      "cwe": "CWE-89"
    }
  ],
  "pagination": { "total": 2, "limit": 50, "offset": 0 }
}
```
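If you want to type this response in TypeScript, the shape above maps to interfaces roughly like the following sketch (these are not official SDK types; the optional fields reflect the example in the Technical Reference below):

```typescript
// Rough types mirroring the findings response shown above (a sketch, not an SDK).
type Severity = "critical" | "high" | "medium" | "low" | "info";
type Category = "security" | "quality" | "performance" | "accessibility" | "standards";

interface Finding {
  id: string;
  scanId: string;
  title: string;
  description: string;
  severity: Severity;
  category: Category;
  tool: string;              // source tool, e.g. "semgrep" or "eslint"
  file: string;
  line: number;
  code: string;              // offending code snippet
  suggestion: string;
  cwe?: string;              // e.g. "CWE-89" on security findings
  deduplicated?: boolean;    // present when multiple tools reported the same issue
  duplicateCount?: number;
}

interface FindingsResponse {
  findings: Finding[];
  pagination: { total: number; limit: number; offset: number };
}
```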
📋 Findings List Component with Filters

Build a filterable list of security findings for your dashboard.
Read the Bugrit Findings API at https://bugrit.com/docs/api-reference/tests
Create a findings list component:
1. Accept scanId as a prop
2. Fetch GET /api/v1/findings?scanId={scanId}
3. Add filter dropdowns for:
- severity: critical, high, medium, low, info
- category: security, quality, performance, accessibility
4. Display findings as cards showing:
- Severity badge (red/orange/yellow/blue)
- finding.title and finding.file:finding.line
- Expandable section with finding.description and finding.suggestion
5. Add action buttons to update status:
- "Mark Fixed" β PATCH with status: "resolved"
- "False Positive" β PATCH with status: "false_positive"
6. Use pagination from response (total, limit, offset)
Use my existing component library.
My stack: [YOUR_STACK]
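The status buttons in step 5 come down to a single PATCH call. A minimal sketch, assuming a fetch-capable runtime and an API key available to the caller (how you store the key is up to your app):

```typescript
// Sketch: update a finding's status ("Mark Fixed", "False Positive", "Accept Risk").
type FindingStatus = "open" | "resolved" | "false_positive" | "accepted";

async function updateFindingStatus(
  findingId: string,
  status: FindingStatus,
  note?: string,
): Promise<void> {
  const res = await fetch(`https://bugrit.com/api/v1/findings/${findingId}`, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${process.env.BUGRIT_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ status, note }),
  });
  if (!res.ok) throw new Error(`Failed to update finding ${findingId}: ${res.status}`);
}
```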
🤖 AI-Powered Auto-Fix

Let your AI assistant fix the vulnerabilities in your code.
Read the Bugrit Findings API at https://bugrit.com/docs/api-reference/tests
Look at the findings from my Bugrit scan and fix them:
1. Fetch GET /api/v1/findings?scanId={scanId}&severity=critical
2. For each finding:
- Read the file at finding.file
- Go to finding.line and review finding.code snippet
- Understand the vulnerability from finding.description and finding.cwe
- Apply the fix from finding.suggestion
3. After fixing, PATCH /api/v1/findings/{findingId} with:
- status: "resolved"
- note: "Fixed by AI assistant"
4. Summarize what was fixed
Prioritize security issues first. Don't introduce new issues.
The scan ID is: [PASTE_SCAN_ID_HERE]

🚫 Block Deploy if Critical Issues Exist
Add a pre-deploy check that fails if unresolved critical issues exist.
Read the Bugrit Findings API at https://bugrit.com/docs/api-reference/tests
Add a pre-deploy check to my CI/CD pipeline:
1. Get the latest scan: GET /api/v1/scans?limit=1
2. Fetch critical findings: GET /api/v1/findings?scanId={scanId}&severity=critical
3. Filter out resolved/false_positive (only count status: "open")
4. If any unresolved critical findings exist:
- Print list of findings with file:line locations
- Exit with error code 1 (fail the build)
5. If all clear, continue with deploy
Add this as a script or GitHub Action step.
My stack: [YOUR_STACK]
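As a rough sketch, the pre-deploy gate could look like this (assumes Node 18+ in CI, BUGRIT_API_KEY in the environment, a `status` field on findings, and a `scans` array in the /scans response; verify these against your actual payloads):

```typescript
// Sketch: fail the build if the latest scan has unresolved critical findings.
const BUGRIT_API = "https://bugrit.com/api/v1";
const headers = { Authorization: `Bearer ${process.env.BUGRIT_API_KEY}` };

async function gate(): Promise<void> {
  // 1. Get the latest scan (response shape assumed: { scans: [{ id, ... }] })
  const latest = await (await fetch(`${BUGRIT_API}/scans?limit=1`, { headers })).json();
  const scanId = latest.scans?.[0]?.id;
  if (!scanId) throw new Error("No scans found");

  // 2. Fetch critical findings and keep only unresolved ones
  const { findings } = await (
    await fetch(`${BUGRIT_API}/findings?scanId=${scanId}&severity=critical`, { headers })
  ).json();
  const open = findings.filter((f: { status?: string }) => f.status === "open");

  // 3. Fail the pipeline if anything is still open
  if (open.length > 0) {
    for (const f of open) console.error(`CRITICAL: ${f.title} (${f.file}:${f.line})`);
    process.exit(1);
  }
  console.log("No unresolved critical findings - continuing with deploy");
}

gate().catch((err) => {
  console.error(err);
  process.exit(1);
});
```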
List Findings

Get All Findings for a Scan
Retrieve all security and quality findings from a scan.
Read the Bugrit Findings API at https://bugrit.com/docs/api-reference/tests
Build a function to fetch and display findings:
1. Call GET /api/v1/findings?scanId={scanId}
2. Accept optional filters: severity, category, tool
3. Return the findings array with pagination info
4. Group findings by severity for display
5. Calculate totals for dashboard summary
Handle errors and empty results gracefully.
My stack: [YOUR_STACK]
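The grouping in steps 4 and 5 is plain data wrangling. A small sketch, assuming the Finding shape shown in the Technical Reference below:

```typescript
// Sketch: group findings by severity and derive counts for a dashboard summary.
type Severity = "critical" | "high" | "medium" | "low" | "info";

function groupBySeverity<T extends { severity: Severity }>(findings: T[]): Record<Severity, T[]> {
  const groups = { critical: [], high: [], medium: [], low: [], info: [] } as Record<Severity, T[]>;
  for (const f of findings) {
    groups[f.severity].push(f);
  }
  return groups;
}

// Usage: counts per severity for a summary card
// const counts = Object.fromEntries(
//   Object.entries(groupBySeverity(findings)).map(([sev, list]) => [sev, list.length])
// );
```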
👩‍💻 Technical Reference

GET /api/v1/findings

Query Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| scanId | string | Yes | Scan ID to get findings for |
| severity | string | No | critical, high, medium, low, info |
| category | string | No | security, quality, performance, accessibility |
| tool | string | No | Filter by source tool (e.g., semgrep, eslint) |
| limit | integer | No | Max results (default: 50) |
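For example, a filtered request can be assembled from these parameters like so (a sketch; the helper name and env var are placeholders):

```typescript
// Sketch: build a filtered GET /api/v1/findings request from the parameters above.
async function fetchFindings(
  scanId: string,
  filters: { severity?: string; category?: string; tool?: string; limit?: number } = {},
) {
  const params = new URLSearchParams({ scanId });
  for (const [key, value] of Object.entries(filters)) {
    if (value !== undefined) params.set(key, String(value));
  }
  const res = await fetch(`https://bugrit.com/api/v1/findings?${params}`, {
    headers: { Authorization: `Bearer ${process.env.BUGRIT_API_KEY}` },
  });
  if (!res.ok) throw new Error(`Bugrit API error: ${res.status}`);
  return res.json(); // { findings: [...], pagination: { total, limit, offset } }
}
```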
Example Response
```json
{
  "findings": [
    {
      "id": "fnd-001",
      "scanId": "scn-xyz789",
      "title": "SQL Injection vulnerability",
      "description": "User input is passed directly to SQL query without sanitization",
      "severity": "critical",
      "category": "security",
      "tool": "semgrep",
      "file": "src/api/users.ts",
      "line": 45,
      "code": "db.query(`SELECT * FROM users WHERE id = ${userId}`)",
      "suggestion": "Use parameterized queries to prevent SQL injection",
      "cwe": "CWE-89",
      "deduplicated": true,
      "duplicateCount": 2
    }
  ],
  "pagination": {
    "total": 2,
    "limit": 50,
    "offset": 0
  }
}
```

Get Finding Details
GET /api/v1/findings/:findingId

Get detailed information about a specific finding, including an AI-generated explanation and remediation steps.
Response includes
- Full finding details and context
- AI-generated plain English explanation
- Step-by-step remediation guidance
- Related findings from other tools (if deduplicated)
- Code snippet with highlighted issue
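A minimal sketch of fetching a single finding by ID (the detail-only fields listed above, such as the AI explanation, depend on the live response and are not modeled here):

```typescript
// Sketch: fetch the full details for one finding.
async function getFinding(findingId: string) {
  const res = await fetch(`https://bugrit.com/api/v1/findings/${findingId}`, {
    headers: { Authorization: `Bearer ${process.env.BUGRIT_API_KEY}` },
  });
  if (!res.ok) throw new Error(`Failed to fetch finding ${findingId}: ${res.status}`);
  return res.json();
}
```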
Update Finding Status
Mark Finding as Resolved or False Positive
Update the status of findings as you fix them.
Read the Bugrit Findings API at https://bugrit.com/docs/api-reference/tests
Add status update buttons to my findings view:
1. "Mark Fixed" button calls:
PATCH /api/v1/findings/{findingId}
Body: { "status": "resolved", "note": "Fixed in commit abc123" }
2. "False Positive" button calls:
PATCH /api/v1/findings/{findingId}
Body: { "status": "false_positive", "note": "Test data, not real" }
3. "Accept Risk" button calls:
PATCH /api/v1/findings/{findingId}
Body: { "status": "accepted", "note": "Risk accepted by team" }
Update the UI optimistically, refresh findings list on success.
My stack: [YOUR_STACK]

👩‍💻 Technical Details (for developers)
PATCH /api/v1/findings/:findingId

Request Body
| Field | Type | Description |
|---|---|---|
| status | string | open, resolved, false_positive, accepted |
| note | string | Optional note explaining the status change |
Example Request
```bash
curl -X PATCH https://bugrit.com/api/v1/findings/fnd-001 \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "status": "false_positive",
    "note": "This is test data, not a real vulnerability"
  }'
```

Severity Levels
| Severity | Color | Description |
|---|---|---|
| critical | Red | Immediate action required. Security vulnerabilities, data exposure risks. |
| high | Orange | Should be addressed soon. Significant security or quality issues. |
| medium | Yellow | Plan to address. Code quality, performance, or minor security issues. |
| low | Blue | Nice to fix. Style issues, minor improvements. |
| info | Gray | Informational. Best practice suggestions. |
Finding Categories
| Category | What Gets Checked |
|---|---|
| security | SQL injection, XSS, hardcoded secrets, vulnerable dependencies |
| quality | Code complexity, unused code, type safety, best practices |
| performance | Page load speed, bundle size, render blocking, memory leaks |
| accessibility | WCAG compliance, screen reader support, keyboard navigation |
| standards | Code formatting, naming conventions, documentation |