PromptForge Python Backend

FastAPI service that executes prompts against multiple LLM providers and evaluates the resulting outputs.

Setup

  1. Create a virtual environment:

    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
  2. Install dependencies:

    pip install -r requirements.txt
  3. Set up environment variables:

    cp .env.example .env
    # Edit .env with your API keys and configuration
  4. Start Redis (required for Celery):

    docker-compose up -d redis
    # Or use local Redis installation
  5. Start the FastAPI server:

    uvicorn main:app --reload
  6. Start Celery worker (in a separate terminal):

    celery -A app.celery_app worker --loglevel=info

API Endpoints

  • GET /health - Health check
  • POST /api/execute - Execute a prompt (queues to Celery)
  • GET /api/task/{task_id} - Get task status
  • POST /api/evaluate - Evaluate LLM output
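Because POST /api/execute only queues the work, a client typically submits the prompt and then polls GET /api/task/{task_id} until the task settles. A minimal polling helper might look like the sketch below — the response fields (status, result) and the status values are assumptions borrowed from Celery's defaults, not this service's documented schema:

```python
import time

def poll_task(fetch_status, task_id, interval=1.0, max_tries=30):
    """Poll a task-status endpoint until the task finishes.

    fetch_status: callable taking a task id and returning a dict such as
    {"status": "PENDING"} or {"status": "SUCCESS", "result": ...}.
    """
    for _ in range(max_tries):
        data = fetch_status(task_id)
        # "SUCCESS"/"FAILURE" are Celery's terminal states.
        if data.get("status") in ("SUCCESS", "FAILURE"):
            return data
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish in time")
```

In practice fetch_status would wrap an HTTP call, e.g. `lambda tid: requests.get(f"http://localhost:8000/api/task/{tid}").json()`.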

LLM Providers

Supported providers:

  • OpenAI (GPT-4, GPT-3.5)
  • Anthropic (Claude)
  • Mistral AI
  • Google Gemini

Configure API keys in .env:

  • OPENAI_API_KEY
  • ANTHROPIC_API_KEY
  • MISTRAL_API_KEY
  • GOOGLE_API_KEY
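One simple way to dispatch on provider is to map provider names to these environment variables and treat a provider as available only when its key is set. This is a sketch of that idea; it assumes nothing about the project's actual configuration layer:

```python
import os

# Maps provider names to the env vars listed above.
PROVIDER_ENV_KEYS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "mistral": "MISTRAL_API_KEY",
    "google": "GOOGLE_API_KEY",
}

def configured_providers(env=None):
    """Return the providers whose API key is present and non-empty."""
    env = os.environ if env is None else env
    return [name for name, key in PROVIDER_ENV_KEYS.items() if env.get(key)]
```

Passing a plain dict instead of os.environ keeps the function easy to test.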

Celery

Celery is used for async task processing. Tasks are queued in Redis and processed by Celery workers.

Starting Celery Worker

celery -A app.celery_app worker --loglevel=info

Monitoring Tasks

celery -A app.celery_app flower  # Web-based monitoring (requires the flower package)

Scoring Engine

The scoring engine evaluates LLM outputs using:

  • Semantic similarity (sentence transformers)
  • Quality metrics (length, structure, vocabulary)
  • Coherence analysis
  • Completeness scoring
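The length/vocabulary side of these metrics needs no model at all. The sketch below computes crude heuristics and uses token overlap as a stand-in for the sentence-transformer similarity; the function name, weights, and returned fields are invented for illustration:

```python
def heuristic_scores(output: str, reference: str) -> dict:
    """Crude quality heuristics; the real engine adds embedding-based similarity."""
    tokens = output.lower().split()
    ref_tokens = set(reference.lower().split())
    # Vocabulary richness: unique tokens over total tokens.
    vocab_ratio = len(set(tokens)) / max(len(tokens), 1)
    # Lexical overlap with the reference, a rough proxy for completeness.
    overlap = len(set(tokens) & ref_tokens) / max(len(ref_tokens), 1)
    return {
        "length": len(tokens),
        "vocab_ratio": round(vocab_ratio, 3),
        "overlap": round(overlap, 3),
    }
```

An embedding model (e.g. from the sentence-transformers library) would replace the overlap term with cosine similarity between output and reference embeddings.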