Reviewing Findings
After scanning your MCP server, you'll receive a list of security findings. This guide helps you understand, prioritize, and fix them effectively.
The Findings List
Your scan results include a grouped list of findings, organized by severity:
- Critical - Shown first, most urgent
- High - Second priority
- Medium - Plan to address
- Low - Good to fix when possible
- Info - Review and decide
Summary Badges
At the top of the findings list, you'll see badges showing:
- Total count per severity level
- Quick visual of severity distribution
- Click a badge to filter to that severity
Anatomy of a Finding
Each finding card contains:
1. Severity Indicator
A colored left border indicates severity:
- 🔴 Critical (red)
- 🟠 High (orange)
- 🟡 Medium (yellow)
- 🔵 Low (blue)
- ⚪ Info (gray)
2. Check Name
The type of vulnerability detected, such as:
- "eval() Usage"
- "Hardcoded API Key"
- "Missing Input Validation"
3. File Location
Where the issue was found:
```
src/tools/execute.ts:42
```

Format: `file_path:line_number`
4. Description
What was detected and why it's a concern:
"Direct use of eval() can execute arbitrary code and is a critical security risk."
5. Code Snippet
The actual code that triggered the finding:
```ts
const result = eval(userInput);
```

This shows approximately 100 characters around the issue.
6. Remediation Guidance
How to fix the issue:
"Replace eval() with a safer alternative like JSON.parse() for data parsing, or use a sandboxed environment for code execution."
Prioritizing Findings
Not all findings need immediate attention. Here's how to prioritize:
Fix Immediately
Critical severity findings:
- Remote code execution risks
- Exposed credentials or secrets
- Known CVEs with exploits
These are actively exploitable and should be fixed before any deployment.
Fix Before Production
High severity findings:
- Path traversal vulnerabilities
- Disabled security features (TLS)
- SQL/Command injection risks
These represent significant risk and need to be addressed before going live.
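As an illustration of the disabled-TLS item above: a common Node.js anti-pattern is switching off certificate verification to silence connection errors, which permits man-in-the-middle attacks. A minimal sketch of the anti-pattern and the fix, assuming Node's built-in `https` module:

```typescript
import https from 'node:https';

// Anti-pattern: accepting any certificate disables TLS protection entirely.
// const insecureAgent = new https.Agent({ rejectUnauthorized: false });

// Fix: keep certificate verification enabled (this is also the default).
const agent = new https.Agent({ rejectUnauthorized: true });
```
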
Plan to Fix
Medium severity findings:
- Missing input validation
- Excessive dependencies
- Unbounded operations
Schedule these in your backlog and fix in upcoming sprints.
Fix When Convenient
Low and Info findings:
- Missing tool descriptions
- Code style concerns
- Hardcoded URLs (when intentional)
Address during code cleanup or refactoring sessions.
Common Findings and Fixes
eval() Usage (Critical)
Problem:
```ts
const result = eval(userInput);
```

Fix:

```ts
// For JSON parsing
const result = JSON.parse(userInput);

// For math expressions, use a safe parser
import { evaluate } from 'mathjs';
const result = evaluate(expression);
```

Hardcoded API Key (Critical)
Problem:
```ts
const apiKey = "sk-ant-abc123...";
```

Fix:

```ts
const apiKey = process.env.ANTHROPIC_API_KEY;
```

And add to .env:

```
ANTHROPIC_API_KEY=sk-ant-abc123...
```
child_process Usage (Critical)
Problem:
```ts
exec(`ls ${userDir}`, callback);
```

Fix:

```ts
// Use spawn with array arguments (no shell)
import { spawn } from 'child_process';
const ls = spawn('ls', [sanitizedDir]);

// Or validate/sanitize input
import path from 'path';
const safeDir = path.resolve('/allowed/base', userDir);
```

Path Traversal (High)
Problem:
```ts
const content = fs.readFileSync(userPath);
```

Fix:

```ts
import fs from 'fs';
import path from 'path';

const basePath = '/allowed/directory';
const safePath = path.resolve(basePath, userPath);

// Ensure the resolved path is inside the allowed directory.
// Appending path.sep prevents prefix tricks like '/allowed/directory-evil'.
if (!safePath.startsWith(basePath + path.sep)) {
  throw new Error('Invalid path');
}
const content = fs.readFileSync(safePath);
```

Missing Input Validation (Medium)
Problem:
```ts
server.tool('process', async ({ input }) => {
  // Directly using input without validation
  return processData(input);
});
```

Fix:

```ts
import { z } from 'zod';

const inputSchema = z.object({
  data: z.string().max(1000),
  type: z.enum(['json', 'text']),
});

server.tool('process', async ({ input }) => {
  const validated = inputSchema.parse(input);
  return processData(validated);
});
```

Unbounded Fetch (Medium)
Problem:
```ts
const response = await fetch(url);
```

Fix:

```ts
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 10000);
try {
  const response = await fetch(url, { signal: controller.signal });
  // ... handle response
} finally {
  clearTimeout(timeout);
}
```

False Positives
Sometimes findings are intentional or safe in context. Common cases:
Test Files
Code in test files might use patterns like eval() intentionally. Consider:
- Excluding test directories from scans
- Documenting why the code is necessary
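If your scanner supports path exclusions, an ignore file keeps test fixtures out of the results. The file name and glob syntax below are hypothetical; check your tool's documentation for the actual convention:

```
# Hypothetical ignore file; adjust the name and syntax to your scanner
tests/
**/*.test.ts
examples/
```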
Example/Documentation Code
Sample code in documentation may show vulnerable patterns for educational purposes:
- Add comments explaining it's an example
- Keep examples separate from production code
Safe Wrappers
Your code might use a dangerous function but with proper safeguards:
- Document the safety measures
- Consider if the pattern can be detected as safe
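For example, a wrapper that allowlists commands and passes arguments as an array (no shell) makes the safeguards concrete and easy to document. A minimal sketch, with an illustrative allowlist:

```typescript
import { spawn } from 'node:child_process';

// Only these commands may ever be executed by this wrapper.
const ALLOWED_COMMANDS = new Set(['ls', 'cat', 'git']);

// Rejects anything outside the allowlist, then spawns with array
// arguments so no shell interpretation of the input can occur.
function runAllowed(command: string, args: string[]) {
  if (!ALLOWED_COMMANDS.has(command)) {
    throw new Error(`Command not allowed: ${command}`);
  }
  return spawn(command, args, { shell: false });
}
```
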
Exporting Findings
PDF Report
Click Download PDF to get a report with:
- Executive summary
- Score breakdown
- All findings with remediation
- AI analysis (if available)
Use this for:
- Security reviews
- Compliance documentation
- Stakeholder presentations
Re-scanning
After fixing issues:
- Commit your changes
- Click Rescan to bypass the cache
- Compare new findings to original
- Verify fixes are working
Tracking Progress
To track security improvements over time:
- Initial Scan - Establish baseline
- Fix Critical/High - Address urgent issues
- Rescan - Verify fixes
- Fix Medium - Continue improvement
- Set up CI/CD - Prevent regression
See API Reference for automating scans in your pipeline.
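The last step above can be sketched as a CI job that fails the build when serious findings appear. Everything in this snippet is a placeholder — the workflow shape, the command name, and the `--fail-on` flag are hypothetical; adapt them to your tool's actual CLI or API:

```yaml
# Hypothetical GitHub Actions workflow; command and flags are placeholders.
name: security-scan
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Scan MCP server
        run: |
          # Exit non-zero when Critical/High findings are present
          npx mcp-scan --fail-on high .
```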
Next Steps
- Security Checks - Detailed check documentation
- Understanding Scores - Score calculation
- API Reference - Automate security scanning
- CI/CD Integration - Add to your pipeline