MagnivOrg/prompt-layer-library

🍰 PromptLayer

Version, test, and monitor every prompt and agent with robust evals, tracing, and regression sets.


This library provides convenient access to the PromptLayer API from applications written in Python.

Installation

```bash
pip install promptlayer
```

Optional extras:

```bash
pip install "promptlayer[openai-agents]"
pip install "promptlayer[claude-agents]"
```

Quick Start

To follow along, you need a PromptLayer API key. Once logged in, go to Settings to generate a key.

Create a client and fetch a prompt template from PromptLayer:

```python
from promptlayer import PromptLayer

pl = PromptLayer(api_key="pl_xxxxx")

prompt = pl.templates.get(
    "support-reply",
    {
        "input_variables": {
            "customer_name": "Ada",
            "question": "How do I reset my password?",
        }
    },
)

print(prompt["prompt_template"])
```

Async client:

```python
import asyncio

from promptlayer import AsyncPromptLayer


async def main():
    pl = AsyncPromptLayer(api_key="pl_xxxxx")

    prompt = await pl.templates.get(
        "support-reply",
        {
            "input_variables": {
                "customer_name": "Ada",
                "question": "How do I reset my password?",
            }
        },
    )

    print(prompt["prompt_template"])


asyncio.run(main())
```

Every method has an async version.

You can also use the client as a proxy around supported provider SDKs:

```python
from promptlayer import PromptLayer

pl = PromptLayer(api_key="pl_xxxxx")
openai = pl.openai

response = openai.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
    pl_tags=["proxy-example"],
)
```

Configuration

Client Options

PromptLayer(...) and AsyncPromptLayer(...) accept these parameters:

  • api_key: str | None = None: Your PromptLayer API key. If omitted, the SDK looks for PROMPTLAYER_API_KEY.
  • enable_tracing: bool = False: Enables OpenTelemetry tracing export to PromptLayer.
  • base_url: str | None = None: Overrides the PromptLayer API base URL. If omitted, the SDK uses PROMPTLAYER_BASE_URL or the default API URL.
  • throw_on_error: bool = True: Controls whether SDK methods raise PromptLayer exceptions or return None for many API errors.
  • cache_ttl_seconds: int = 0: Enables in-memory prompt-template caching when greater than 0.
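For example, the options above can be combined when constructing a client. The values below are illustrative, and the import guard is only there so the snippet runs even when the package is not installed:

```python
# Illustrative client construction using the options listed above.
try:
    from promptlayer import PromptLayer
except ImportError:
    PromptLayer = None  # promptlayer not installed in this environment

if PromptLayer is not None:
    pl = PromptLayer(
        api_key="pl_xxxxx",     # or omit and set PROMPTLAYER_API_KEY
        enable_tracing=False,
        throw_on_error=False,   # return None on API errors instead of raising
        cache_ttl_seconds=300,  # cache fetched templates for 5 minutes
    )
```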

Environment Variables

The SDK relies on the following environment variables:

| Variable | Required | Description |
| --- | --- | --- |
| `PROMPTLAYER_API_KEY` | Yes, unless passed as `api_key=` | API key used to authenticate requests to PromptLayer. |
| `PROMPTLAYER_BASE_URL` | No | Overrides the PromptLayer API base URL. Defaults to `https://api.promptlayer.com`. |
| `PROMPTLAYER_OTLP_TRACES_ENDPOINT` | No | Overrides the OTLP trace endpoint used by the OpenAI Agents tracing integration. |
| `PROMPTLAYER_TRACEPARENT` | No | Optional trace context passed through the Claude Agents integration. |
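The precedence between `api_key=` and `PROMPTLAYER_API_KEY` can be sketched as follows. This is a simplified illustration of the behavior described above, not the SDK's actual code:

```python
import os

def resolve_api_key(explicit=None):
    # An explicit api_key= argument wins; otherwise fall back to the
    # PROMPTLAYER_API_KEY environment variable.
    key = explicit or os.environ.get("PROMPTLAYER_API_KEY")
    if key is None:
        raise ValueError("Set PROMPTLAYER_API_KEY or pass api_key=")
    return key

os.environ["PROMPTLAYER_API_KEY"] = "pl_from_env"
print(resolve_api_key())               # falls back to the environment
print(resolve_api_key("pl_explicit"))  # explicit argument wins
```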

Client Resources

The main resources surfaced by PromptLayer and AsyncPromptLayer are:

| Resource | Description |
| --- | --- |
| `client.templates` | Prompt template retrieval, listing, publishing, and cache invalidation. |
| `client.run()` and `client.run_workflow()` | Helpers for running prompts and workflows. |
| `client.log_request()` | Manual request logging. |
| `client.track` | Request annotation utilities for metadata, prompt linkage, scores, and groups. |
| `client.group` | Group creation for organizing related requests. |
| `client.traceable()` | Decorator for tracing your own functions and sending those spans to PromptLayer when tracing is enabled. |
| `client.skills` | Skill collection pull, create, publish, and update operations. |
| `client.openai` and `client.anthropic` | Provider proxies that wrap those SDKs and log requests to PromptLayer. |

Note: When tracing is enabled, spans are exported to PromptLayer using OpenTelemetry.
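As a sketch of how the annotation resources fit together: the method names below follow the resource table above, but the exact signatures are assumptions to verify against the PromptLayer docs, and `request_id` is a placeholder for the ID returned when a request is logged. The import guard keeps the snippet runnable without the SDK installed:

```python
try:
    from promptlayer import PromptLayer
except ImportError:
    PromptLayer = None  # promptlayer not installed in this environment

if PromptLayer is not None:
    pl = PromptLayer(api_key="pl_xxxxx")

    # Group related requests, then annotate one of them.
    group_id = pl.group.create()
    request_id = 12345  # placeholder: returned when a request is logged
    pl.track.group(request_id=request_id, group_id=group_id)
    pl.track.metadata(request_id=request_id, metadata={"user": "ada"})
    pl.track.score(request_id=request_id, score=95)
```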

Integration Modules

Optional modules that are imported directly rather than accessed through the client:

| Module | Description |
| --- | --- |
| `promptlayer.integrations.openai_agents` | Tracing utilities for the openai-agents SDK that instrument agent runs and export their traces to PromptLayer. |
| `promptlayer.integrations.claude_agents` | Configuration utilities for the claude-agent-sdk package that load the PromptLayer plugin and the required environment settings so Claude agent runs send traces to PromptLayer. |

Error Handling

The SDK raises PromptLayerError as the base exception for SDK failures, with more specific subclasses for common API and validation cases.

| Error type | Description |
| --- | --- |
| `PromptLayerValidationError` | Invalid input passed to the SDK before or during a request. |
| `PromptLayerAPIConnectionError` | The SDK could not connect to PromptLayer. |
| `PromptLayerAPITimeoutError` | A PromptLayer request or workflow run timed out. |
| `PromptLayerAuthenticationError` | Authentication failed, usually because the API key is missing or invalid. |
| `PromptLayerPermissionDeniedError` | The API key does not have permission for the requested operation. |
| `PromptLayerNotFoundError` | The requested resource, such as a prompt or workflow, was not found. |
| `PromptLayerBadRequestError` | The request was malformed or used invalid parameters. |
| `PromptLayerConflictError` | The request conflicts with the current state of a resource. |
| `PromptLayerUnprocessableEntityError` | The request was well-formed but semantically invalid. |
| `PromptLayerRateLimitError` | PromptLayer rejected the request because of rate limiting. |
| `PromptLayerInternalServerError` | PromptLayer returned a 5xx server error. |
| `PromptLayerAPIStatusError` | Other non-success API responses that do not map to a more specific error type. |

By default, the clients raise these exceptions. If you initialize PromptLayer or AsyncPromptLayer with throw_on_error=False, many resource methods return None instead of raising on PromptLayer API errors.
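A typical pattern is to catch the specific subclasses first and fall back to the base class. This sketch assumes, as is conventional, that the exception classes are importable from the top-level package; the guard keeps the snippet runnable without the SDK installed:

```python
try:
    from promptlayer import (
        PromptLayer,
        PromptLayerError,
        PromptLayerNotFoundError,
        PromptLayerRateLimitError,
    )
except ImportError:
    PromptLayer = None  # promptlayer not installed in this environment

def fetch_template(pl, name):
    try:
        return pl.templates.get(name)
    except PromptLayerNotFoundError:
        return None   # treat a missing prompt as "no template"
    except PromptLayerRateLimitError:
        raise         # let the caller back off and retry
    except PromptLayerError as exc:
        raise RuntimeError(f"PromptLayer call failed: {exc}") from exc

if PromptLayer is not None:
    pl = PromptLayer(api_key="pl_xxxxx")
    template = fetch_template(pl, "support-reply")
```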

Caching

When enabled, the SDK caches fetched prompt templates in memory for faster repeat reads, locally re-renders them with new variables, and falls back to stale cache on temporary API failures.

  • Caching is disabled by default and is enabled by setting cache_ttl_seconds when creating PromptLayer or AsyncPromptLayer.
  • The cache applies to prompt templates fetched through client.templates.get(...).
  • Cached entries are stored in memory and keyed by prompt name, version, label, provider, and model.
  • Requests that include metadata_filters or model_parameter_overrides bypass the cache.
  • Templates that require server-side rendering behavior, such as placeholder messages or tool-variable expansion, are not cached for local rendering.
  • If a cached template is stale and PromptLayer returns a transient error, the SDK can serve the stale cached version as a fallback.
  • You can clear cached entries with client.invalidate(...) or client.templates.invalidate(...).
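The behavior in the list above can be sketched as a small TTL cache with stale fallback. This is an illustrative model, not the SDK's implementation:

```python
import time

class TemplateCache:
    """Illustrative TTL cache with stale fallback, modeled on the behavior above."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.entries = {}  # cache key -> (fetched_at, template)

    def get(self, key, fetch):
        now = time.time()
        hit = self.entries.get(key)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]  # fresh hit: no API call
        try:
            template = fetch()  # e.g. an API call to PromptLayer
            self.entries[key] = (now, template)
            return template
        except ConnectionError:
            if hit is not None:
                return hit[1]  # transient failure: serve the stale entry
            raise

    def invalidate(self, key):
        self.entries.pop(key, None)

cache = TemplateCache(ttl_seconds=60)
tmpl = cache.get(("support-reply", None), lambda: {"prompt_template": "..."})
```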
