Clone and adapt pentagi #186
Status: Closed. DevOpsMadDog wants to merge 33 commits into main-clone from cursor/clone-and-adapt-pentagi-gemini-3-pro-preview-961e.
Commits (33)
1350020  Make setup wizard fully automated for docker (DevOpsMadDog)
0280bf9  Merge pull request #182 from DevOpsMadDog/codex/update-setup-script-f… (DevOpsMadDog)
9ed25ac  Relax networkx requirement for older Python (DevOpsMadDog)
07d094c  Merge pull request #183 from DevOpsMadDog/codex/update-setup-script-f… (DevOpsMadDog)
cc8d082  feat: Add enterprise micro pen testing service (cursoragent)
33454ca  Refactor: Fix and enhance vulnerability management docs (cursoragent)
30e21df  Checkpoint before follow-up message (cursoragent)
3593f2f  feat: Integrate PentAGI with FixOps for enhanced security (cursoragent)
80089ce  feat: Implement advanced Pentagi integration (cursoragent)
56aa530  Checkpoint before follow-up message (cursoragent)
94210ad  Refactor: Improve doc validation script and add status report (cursoragent)
2cb901c  feat: Complete PentAGI-FixOps integration (cursoragent)
b70fb7f  feat: Add signature verification and policy gate endpoints (cursoragent)
f054765  Merge pull request #190 from DevOpsMadDog/cursor/review-and-improve-p… (DevOpsMadDog)
3cb078c  Merge pull request #193 from DevOpsMadDog/cursor/review-and-improve-p… (DevOpsMadDog)
381acea  feat: Add micro penetration testing feature (cursoragent)
c95d79a  Merge pull request #194 from DevOpsMadDog/cursor/advance-pentagi-with… (DevOpsMadDog)
614be56  Merge branch 'main' into cursor/advance-pentagi-with-ai-gemini-3-pro-… (DevOpsMadDog)
513b4f3  Merge pull request #195 from DevOpsMadDog/cursor/advance-pentagi-with… (DevOpsMadDog)
3115ebc  Merge pull request #196 from DevOpsMadDog/cursor/advance-pentagi-with… (DevOpsMadDog)
201521c  feat: Consolidate PR #191 and #192 - Fix PR #185 issues with improved… (cursoragent)
a672bf9  docs: Add PR creation summary and verification steps (cursoragent)
a3ba246  fix: Format test files to pass CI pre-merge checks (cursoragent)
b049b77  fix: Format test files to pass CI pre-merge checks (cursoragent)
bd129b0  fix: Resolve merge conflicts and fix CI check failures (cursoragent)
a025ba9  fix: Format agent_framework.py with black (cursoragent)
82d3c50  feat: Consolidate PR #191 and #192 - Fix PR #185 issues with improved… (cursoragent)
62518b2  feat: Consolidate PR #191 and #192 - Fix PR #185 issues (cursoragent)
1e73f45  Merge pull request #197 from DevOpsMadDog/cursor/pr193-consolidate-fixes (DevOpsMadDog)
108933e  Update wiki.json (devin-ai-integration[bot])
836664b  Merge pull request #198 from DevOpsMadDog/devin/wiki-update-1765279189 (DevOpsMadDog)
85245e4  style: Apply black formatting and remove unused imports to fix CI qua… (devin-ai-integration[bot])
2f1aaf1  Merge origin/main into PR #186 branch (devin-ai-integration[bot])
First changed file (new file, 303 lines added):

```json
{
  "repo_notes": [{ "content": "" }],
  "pages": [
    {
      "title": "Overview:",
      "purpose": "Introduce the FixOps platform, its purpose as a DevSecOps Decision & Verification Engine, and high-level architecture overview",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "Key Concepts",
      "purpose": "Define core terminology: CVE, KEV, EPSS, SARIF, SBOM, VEX, SSVC, Multi-LLM Consensus, and FixOps-specific concepts",
      "parent": "Overview",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "System Architecture",
      "purpose": "Present the overall system architecture with major components and their interactions, including data flow diagrams",
      "parent": "Overview",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "Quickstart and Demo",
      "purpose": "Guide users through initial setup using setup-wizard.sh and running demo mode",
      "parent": "Overview",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "Vulnerability Intelligence System",
      "purpose": "Document the system for ingesting, processing, and enriching vulnerability data from external feeds",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "KEV and EPSS Feeds",
      "purpose": "Explain how CISA KEV catalog and FIRST EPSS scores are fetched, cached, and integrated into the system",
      "parent": "Vulnerability Intelligence System",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "Threat Intelligence Orchestration",
      "purpose": "Document the ThreatIntelligenceOrchestrator and integration with multiple vulnerability feeds (NVD, OSV, GitHub, ExploitDB, ecosystem feeds)",
      "parent": "Vulnerability Intelligence System",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "Severity Promotion Engine",
      "purpose": "Explain how CVE severities are dynamically escalated based on KEV listings and high EPSS scores",
      "parent": "Vulnerability Intelligence System",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "Exploit Signal Detection",
      "purpose": "Detail the ExploitSignalEvaluator and how exploit signals (KEV, EPSS, ExploitDB) are processed",
      "parent": "Vulnerability Intelligence System",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "Data Ingestion Layer",
      "purpose": "Describe the FastAPI application that receives security artifacts and normalizes them for processing",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "FastAPI Application Structure",
      "purpose": "Document the create_app factory pattern, router organization, middleware stack, and state management",
      "parent": "Data Ingestion Layer",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "Upload Endpoints",
      "purpose": "Detail the /inputs/* endpoints for uploading design, SBOM, SARIF, CVE, VEX, and CNAPP data",
      "parent": "Data Ingestion Layer",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "Chunked Upload System",
      "purpose": "Explain the ChunkUploadManager for handling large file uploads with resumability",
      "parent": "Data Ingestion Layer",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "Input Normalization",
      "purpose": "Document the InputNormalizer class and parsers for SBOM (CycloneDX, SPDX, Syft), SARIF, CVE feeds, VEX, CNAPP, and business context",
      "parent": "Data Ingestion Layer",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "Crosswalk Correlation Engine",
      "purpose": "Explain how design context, SBOM components, SARIF findings, and CVE records are correlated into unified crosswalk entries",
      "parent": "Data Ingestion Layer",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "Decision Engine",
      "purpose": "Document the core decision-making system that produces Allow/Review/Block verdicts with confidence scores",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "Multi-LLM Consensus Engine",
      "purpose": "Explain how multiple LLM providers (OpenAI, Anthropic, Gemini, Sentinel) are queried and weighted consensus is achieved",
      "parent": "Decision Engine",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "Decision Policy Engine",
      "purpose": "Detail the policy override rules that can block or escalate decisions based on critical vulnerability combinations",
      "parent": "Decision Engine",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "Risk-Based Profiling",
      "purpose": "Explain risk score computation using EPSS, Bayesian priors, Markov projections, and exposure multipliers",
      "parent": "Decision Engine",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "Enhanced Decision Service",
      "purpose": "Document the enhanced decision API endpoints that provide LLM analysis, consensus results, and MITRE TTP mappings",
      "parent": "Decision Engine",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "Deterministic Fallback Mode",
      "purpose": "Explain how the system operates without LLM providers using risk-based heuristics",
      "parent": "Decision Engine",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "Processing Layer",
      "purpose": "Document the advanced analytics engine that applies probabilistic models and graph analysis to vulnerability data",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "Bayesian and Markov Models",
      "purpose": "Explain the Bayesian network inference and Markov chain state projection implementations",
      "parent": "Processing Layer",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "BN-LR Hybrid Risk Model",
      "purpose": "Detail the Bayesian Network + Logistic Regression hybrid model for exploitation probability prediction",
      "parent": "Processing Layer",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "Processing Layer Internals",
      "purpose": "Document the ProcessingLayer.evaluate method and integration with 166 vulnerability data sources",
      "parent": "Processing Layer",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "Knowledge Graph Construction",
      "purpose": "Explain how components, vulnerabilities, and dependencies are modeled as a graph using NetworkX",
      "parent": "Processing Layer",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "Probabilistic Forecasting",
      "purpose": "Document the probabilistic forecasting models and confidence metrics for risk predictions",
      "parent": "Processing Layer",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "Pipeline Orchestration",
      "purpose": "Document the PipelineOrchestrator that coordinates all processing stages from input to output",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "Overlay Configuration System",
      "purpose": "Explain the fixops.overlay.yml structure, profiles (demo/enterprise), and configuration hierarchies",
      "parent": "Pipeline Orchestration",
      "page_notes": [{ "content": "" }]
    },
    {
      "title": "Pipeline Orchestrator",
      "purpose": "Detail the PipelineOrchestrator.run method and the sequential processing stages",
      "parent": "Pipeline Orchestration",
      "page_notes": [{ "content": "" }]
    }
  ]
}
```
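The hierarchy above is driven entirely by matching each page's `"parent"` string against another page's `"title"`. A minimal sketch of how such a tree might be assembled, assuming exact string matching (the `build_tree` helper and the trimmed sample data are illustrative assumptions, not the repository's actual wiki generator):

```python
import json
from collections import defaultdict


def build_tree(pages):
    """Group pages under their parent title using exact string matching."""
    titles = {page["title"] for page in pages}
    children = defaultdict(list)
    roots, orphans = [], []
    for page in pages:
        parent = page.get("parent")
        if parent is None:
            roots.append(page["title"])       # top-level page
        elif parent in titles:
            children[parent].append(page["title"])
        else:
            orphans.append(page["title"])     # parent title not found verbatim
    return roots, dict(children), orphans


# A trimmed slice of the structure above:
pages = json.loads("""
[
  {"title": "Overview:", "purpose": "Introduce the FixOps platform"},
  {"title": "Key Concepts", "purpose": "Define core terminology", "parent": "Overview"},
  {"title": "Decision Engine", "purpose": "Core decision-making system"},
  {"title": "Multi-LLM Consensus Engine", "purpose": "Weighted consensus", "parent": "Decision Engine"}
]
""")

roots, children, orphans = build_tree(pages)
print(roots)    # -> ['Overview:', 'Decision Engine']
print(children) # -> {'Decision Engine': ['Multi-LLM Consensus Engine']}
print(orphans)  # -> ['Key Concepts']
```

Note that under exact matching, "Key Concepts" ends up orphaned: its parent reference "Overview" does not match the title "Overview:".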
Second changed file (one line added; the patterns suggest an ignore file, though the filename is not shown):

```diff
@@ -102,3 +102,4 @@ coverage.xml
 data/data/
 real_cve_*.json
 terraform.tfvars
+.coverage
```
P2: Parent reference mismatch: child pages reference `"parent": "Overview"`, but the actual page title is `"Overview:"` (with a trailing colon). This inconsistency may break the wiki hierarchy if the system uses exact string matching.
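The mismatch flagged in this comment can be caught mechanically. A sketch of such a lint pass (`find_parent_mismatches` is a hypothetical helper written for illustration, not part of the repository):

```python
def find_parent_mismatches(pages):
    """Report parent references with no exact title match, along with the
    closest title after stripping trailing colons."""
    titles = {p["title"] for p in pages}
    # Map a colon-stripped form of each title back to the original title.
    normalized = {t.rstrip(":").strip(): t for t in titles}
    mismatches = []
    for p in pages:
        parent = p.get("parent")
        if parent and parent not in titles:
            mismatches.append((p["title"], parent, normalized.get(parent)))
    return mismatches


pages = [
    {"title": "Overview:"},
    {"title": "Key Concepts", "parent": "Overview"},
    {"title": "System Architecture", "parent": "Overview"},
]

for child, parent, candidate in find_parent_mismatches(pages):
    print(f"{child!r}: parent {parent!r} not found; did you mean {candidate!r}?")
# -> 'Key Concepts': parent 'Overview' not found; did you mean 'Overview:'?
# -> 'System Architecture': parent 'Overview' not found; did you mean 'Overview:'?
```

Running a check like this in CI would catch the "Overview" vs "Overview:" drift before the wiki hierarchy silently breaks.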