
feat: add comprehensive pdd setup prompts for all LLM providers (#480) #483

Draft
niti-go wants to merge 8 commits into promptdriven:main from niti-go:change/issue-480

Conversation


niti-go commented Feb 10, 2026

Summary

This PR adds 7 new prompt files to transform pdd setup into a comprehensive system bootstrapper that supports dynamic provider discovery, smart API key management, local LLM configuration, and interactive model selection with cost transparency across all LLM providers.

Closes #480

Changes Made

New Prompts Created

  • pdd/prompts/setup_tool_python.prompt - Main orchestrator for the enhanced setup flow with menu-driven configuration
  • pdd/prompts/api_key_scanner_python.prompt - Dynamic provider discovery from llm_model.csv, scanning all sources (.env, shell, api-env files)
  • pdd/prompts/api_key_validator_python.prompt - API key validation using llm_invoke() instead of hardcoded HTTP requests
  • pdd/prompts/provider_manager_python.prompt - Add/remove providers with atomic CSV updates and smart key storage
  • pdd/prompts/model_selector_python.prompt - Interactive model selection with cost transparency and tier guidance
  • pdd/prompts/local_llm_configurator_python.prompt - Configure Ollama, LM Studio, and custom providers
  • pdd/prompts/pddrc_initializer_python.prompt - Initialize .pddrc with sensible project defaults

Example Code Files

  • context/api_key_scanner_example.py - Example implementation showing multi-source key discovery
  • context/api_key_validator_example.py - Example showing llm_invoke-based validation
  • context/provider_manager_example.py - Example of CSV management and key storage logic
  • context/model_selector_example.py - Example of tier-based model selection UI
  • context/local_llm_configurator_example.py - Example of Ollama/LM Studio auto-detection
  • context/pddrc_initializer_example.py - Example .pddrc generation

Documentation Updated

  • README.md - Updated setup command documentation to reflect new capabilities
  • SETUP_WITH_GEMINI.md - Updated to reference new multi-provider setup flow
  • docs/ONBOARDING.md - Updated setup instructions for new user experience
  • docs/SETUP_GUIDE.md - New comprehensive setup guide documenting all menu options and workflows
  • pdd/docs/prompting_guide.md - Added guidelines for setup-related prompts

Key Features

Dynamic Provider Support

  • Reads all providers from llm_model.csv instead of hardcoded list
  • Supports Anthropic, OpenAI, Google, Fireworks, Groq, Vertex AI, and any future CSV additions
  • No code changes needed when new providers are added to CSV
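
A minimal sketch of the CSV-driven provider discovery described above, assuming a `provider` column header and the data/llm_model.csv path (the generated scanner may use different names):

    import csv
    from pathlib import Path

    def discover_providers(csv_path: Path = Path("data/llm_model.csv")) -> set[str]:
        """Collect the distinct provider names listed in llm_model.csv."""
        with csv_path.open(newline="") as f:
            # Each row describes one model; the provider column groups them.
            return {row["provider"].strip() for row in csv.DictReader(f) if row.get("provider")}

Adding a provider then only requires new CSV rows; the setup code itself never needs to change.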

Smart API Key Management

  • Scans keys from all sources: shell environment, .env files, ~/.pdd/api-env.* files
  • Shows key source transparency (e.g., "✓ Valid (shell environment)")
  • Smart storage: only saves to api-env.* if key was entered during setup (avoids duplicating keys from Infisical, .env, etc.)
  • Safe removal: comments out keys instead of deleting (preserves ability to recover)
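
A hedged sketch of the multi-source scan above; the file names, precedence order, and helper name are illustrative rather than the exact generated code:

    import os
    from pathlib import Path

    def find_api_key(var_name: str) -> tuple[str, str] | None:
        """Return (value, source) for an API key, checking sources in priority order."""
        # 1. Shell environment
        if value := os.environ.get(var_name):
            return value, "shell environment"
        # 2. Project .env file (simple KEY=VALUE lines)
        env_file = Path(".env")
        if env_file.exists():
            for line in env_file.read_text().splitlines():
                if line.strip().startswith(f"{var_name}="):
                    return line.split("=", 1)[1].strip().strip('"'), ".env file"
        # 3. ~/.pdd/api-env.* files (export KEY=value lines written by setup)
        for api_env in sorted(Path.home().glob(".pdd/api-env.*")):
            for line in api_env.read_text().splitlines():
                if line.strip().startswith(f"export {var_name}="):
                    return line.split("=", 1)[1].strip().strip('"'), str(api_env)
        return None

Because the scan reports the source alongside the value, setup can show lines like "✓ Valid (shell environment)" and knows not to re-save keys that already live elsewhere.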

Interactive Model Selection

  • Groups models by tier (Fast/Cheap, Balanced, Most Capable)
  • Shows cost information ($1-5 vs $15-75 per million tokens)
  • Smart defaults with opt-in for premium tiers
  • Explains --strength parameter for runtime model selection
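
A sketch of how tier grouping could be derived from per-model pricing; the dollar thresholds here are invented for illustration, not taken from the prompt:

    def assign_tier(output_cost_per_mtok: float) -> str:
        """Bucket a model into a display tier by its output price per million tokens."""
        if output_cost_per_mtok <= 2:
            return "Fast/Cheap"
        if output_cost_per_mtok <= 20:
            return "Balanced"
        return "Most Capable"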

Local LLM Support

  • Auto-detects Ollama and LM Studio installations
  • Queries Ollama API for installed models
  • Supports custom base URLs for other local providers
  • Guides through adding models without API keys
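
A minimal sketch of the Ollama auto-detection, using Ollama's /api/tags endpoint on its default port; LM Studio would be probed the same way via its OpenAI-compatible endpoint (commonly http://localhost:1234/v1/models). Details of the generated configurator may differ:

    import json
    import urllib.request

    def detect_ollama(base_url: str = "http://localhost:11434") -> list[str]:
        """Return locally installed Ollama model names, or [] if Ollama isn't running."""
        try:
            with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
                data = json.load(resp)
            return [m["name"] for m in data.get("models", [])]
        except OSError:
            return []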

Menu-Driven Flow

What would you like to do?
  1. Add or fix API keys
  2. Add a local LLM (Ollama, LM Studio)
  3. Add a custom provider
  4. Remove a provider
  5. Continue →

Review Checklist

  • Prompt syntax is valid (all prompts follow PDD conventions)
  • PDD conventions followed (includes, requirements, examples)
  • Documentation is up to date (README, SETUP_GUIDE, ONBOARDING)
  • Example code demonstrates key functionality
  • All 7 prompts have corresponding example implementations

Next Steps After Merge

  1. Regenerate code from prompts in dependency order:

    # No sync order script needed - these are new modules
    pdd gen pdd/prompts/api_key_scanner_python.prompt
    pdd gen pdd/prompts/api_key_validator_python.prompt
    pdd gen pdd/prompts/provider_manager_python.prompt
    pdd gen pdd/prompts/model_selector_python.prompt
    pdd gen pdd/prompts/local_llm_configurator_python.prompt
    pdd gen pdd/prompts/pddrc_initializer_python.prompt
    pdd gen pdd/prompts/setup_tool_python.prompt
  2. Run tests to verify functionality:

    pytest tests/test_setup_tool.py
  3. Test the new setup flow:

    pdd setup
  4. Update llm_model.csv if needed to ensure all providers have required metadata (cost fields)

Risk Mitigation

  • Atomic CSV updates: All CSV modifications use temp files + atomic rename
  • Smart key storage: Never overwrites existing keys in environment
  • Safe deletion: Comments out keys instead of deleting permanently
  • Source transparency: Clear indication of where each key comes from
  • Backward compatibility: Existing ~/.pdd/ configurations continue working
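
The atomic-update pattern referenced above, sketched in plain Python (the generated provider_manager may wrap this differently):

    import os
    import tempfile
    from pathlib import Path

    def atomic_write(path: Path, contents: str) -> None:
        """Write via a temp file plus atomic rename so a crash never leaves a truncated CSV."""
        fd, tmp = tempfile.mkstemp(dir=path.parent, suffix=".tmp")
        try:
            with os.fdopen(fd, "w") as f:
                f.write(contents)
            os.replace(tmp, path)  # atomic on both POSIX and Windows
        except BaseException:
            if os.path.exists(tmp):
                os.unlink(tmp)
            raise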

Created by pdd change workflow (Step 13/13)

…ptdriven#480)

Create 7 new prompts to transform pdd setup into a comprehensive system bootstrapper
that supports dynamic provider discovery, API key management, and interactive model
selection across all LLM providers.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
niti-go and others added 6 commits February 10, 2026 22:50
- Delete model_selector_python.prompt and its example: interactive tier
  selection removed; adding a provider now auto-loads all its models
- Delete api_key_validator_python.prompt and its example: replaced by
  model_tester which tests individual models via litellm.completion()
  directly instead of steering llm_invoke with PDD_MODEL_DEFAULT
- Create model_tester_python.prompt and example: new "Test a model" menu
  option using litellm.completion() with a direct api_key param, showing
  diagnostics, timing, and cost per call (see the sketch after this list)
- Create cli_detector_python.prompt and example: new "Detect CLI tools"
  menu option leveraging get_available_agents() from agentic_common,
  cross-referencing API keys with installed CLIs
- Rewrite setup_tool_python.prompt: new 6-option menu (Add provider,
  Remove models, Test model, Detect CLI, Init .pddrc, Done) with
  sub-menus, replacing the old 5-option flow that ended in model selection
- Rewrite provider_manager_python.prompt: new functions add_api_key
  (auto-loads all models), remove_models_by_provider (comments out keys),
  and remove_individual_models, replacing add_or_fix_keys/remove_provider
- Trim all prompts per prompting guide: remove implementation patterns,
  keep behavioral requirements, target 10-30% prompt-to-code ratio
- Store files in /pdd/setup, rather than main directory /pdd
- Modify utils.py command to run pdd setup at the new correct file path
- Still needs refinement of the prompts and code to further improve the interface, but is functional
- Still needs example files and test files
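
A rough sketch of the model_tester behavior described above, built on litellm.completion() and litellm.completion_cost(); the generated module's diagnostics and error handling will be more thorough:

    import time
    import litellm

    def test_model(model: str, api_key: str | None = None) -> None:
        """Send a tiny probe request and report the reply, latency, and estimated cost."""
        start = time.monotonic()
        response = litellm.completion(
            model=model,
            messages=[{"role": "user", "content": "Reply with the single word: ok"}],
            api_key=api_key,   # passed directly instead of read from the environment
            max_tokens=5,
        )
        elapsed = time.monotonic() - start
        cost = litellm.completion_cost(completion_response=response)
        print(f"{model}: {response.choices[0].message.content!r} in {elapsed:.2f}s, ~${cost:.6f}")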

niti-go commented Feb 15, 2026

I'm going to make another change on top of Issue 480's description.

The current flow is bottlenecked by data/llm_model.csv, a hand-maintained list of ~20 models across only 5 providers. Everything flows from it: the scanner only knows about the API keys listed there, and adding a key bulk-imports every model listed for that provider. It's a tiny static window onto a massive ecosystem of LLMs.

Instead of maintaining data/llm_model.csv as the source of truth for what models exist, we can directly use LiteLLM's up-to-date directory of ~86 providers.

Here's what it will now look like when a user wants to add a provider:

Step 1: Choose provider

  Add a provider:

  Search for a provider (or press Enter to browse): anth

  Matching providers:
    1. anthropic          (29 models — claude-opus-4-5, claude-sonnet-4-5, ...)
    2. bedrock            (218 models — anthropic.claude-3-5-sonnet, ...)

  Or:
    c. Custom / Local LLM
    
  Choice: 1

Step 2: Choose models

  Models available for anthropic:

    #  Model                                  Context    $/1M in  $/1M out
    1. claude-opus-4-5-20251101               200K       $5.00    $25.00
    2. claude-sonnet-4-5-20250929             200K       $3.00    $15.00
    3. claude-haiku-4-5-20251001              200K       $1.00    $5.00
    ... (29 total)

  Select models (comma-separated, 'all', or 'recommended'): 1,2,3

Step 3: Enter API Key

  anthropic models require: ANTHROPIC_API_KEY

  ANTHROPIC_API_KEY: — Not found in environment

  Enter API key value (or press Enter if already configured elsewhere): sk-ant-...
  ✓ Saved to ~/.pdd/api-env.zsh
  ✓ Added 3 model(s) to ~/.pdd/llm_model.csv
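
The provider search and per-provider model listing above could be driven by LiteLLM's bundled registry; a sketch assuming the litellm.models_by_provider mapping and the litellm.model_cost pricing table (verify both against the installed LiteLLM version):

    import litellm

    def search_providers(query: str) -> dict[str, list[str]]:
        """Return providers whose name contains `query`, mapped to their model names."""
        q = query.lower()
        return {p: models for p, models in litellm.models_by_provider.items() if q in p.lower()}

    def model_pricing(model: str) -> tuple[float | None, float | None]:
        """Look up $/1M input and output token prices, if LiteLLM knows the model."""
        info = litellm.model_cost.get(model, {})
        def per_mtok(v):
            return round(v * 1_000_000, 2) if v is not None else None
        return per_mtok(info.get("input_cost_per_token")), per_mtok(info.get("output_cost_per_token"))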

gltanaka marked this pull request as ready for review February 15, 2026 21:48
gltanaka marked this pull request as draft February 15, 2026 21:48
- Show LiteLLM registry of providers and models when adding models rather than data/llm_model.csv
- Add search feature to search for a provider
- Ask user for API keys based on the models they want to add
