feat(telemetry): emit llm.prompt_template.* from tracing plugin (post-#1045) #1067

@planetf1

Description

Problem

llm.prompt_template.template and llm.prompt_template.variables (OpenInference convention) are not emitted on backend spans today.
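For concreteness, the missing attributes would look roughly like this on a backend span (OpenInference represents the variables as a JSON string; the template and variable values below are invented placeholders):

```python
# Illustrative target attributes; values are made-up placeholders.
{
    "llm.prompt_template.template": "Summarize the following text: {text}",
    "llm.prompt_template.variables": '{"text": "<user-provided document>"}',
}
```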

A component-level approach was prototyped in PR #1036 but withdrawn following review from @jakelorocco: capturing template state on Instruction and GenerativeStub is the wrong layer — it only covers two component types, captures pre-substitution values, and puts telemetry concerns inside domain objects.

The correct extraction point is the formatter render path (TemplateFormatter._render_representation), which is reached for every component type and holds the actual template and variables used at generation time.
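A minimal sketch of that capture point (TemplateFormatter._render_representation is the real method name; RenderCapture, last_render, and the _resolve_* helpers are hypothetical stand-ins for however the formatter actually obtains the template and variables):

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class RenderCapture:
    template: str              # template text actually rendered
    variables: dict[str, Any]  # post-substitution variables


class TemplateFormatter:
    last_render: RenderCapture | None = None

    def _resolve_template(self, action: Any) -> str:
        return getattr(action, "template", "")         # placeholder

    def _resolve_variables(self, action: Any) -> dict[str, Any]:
        return dict(getattr(action, "variables", {}))  # placeholder

    def _render_representation(self, action: Any) -> str:
        template_str = self._resolve_template(action)
        variables_dict = self._resolve_variables(action)
        # One place, post-substitution, reached by every component type.
        self.last_render = RenderCapture(template_str, variables_dict)
        return template_str.format(**variables_dict)
```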

Approach

After #1045 lands and BackendTracingPlugin exists:

  1. Capture (template_str, variables_dict) at the point where _render_representation renders an action: one place that covers every component type (see the capture sketch above).
  2. In BackendTracingPlugin, emit (see the emission sketch after this list):
    • llm.prompt_template.template: always (template text is not user content)
    • llm.prompt_template.variables: only when is_content_tracing_enabled() returns true (variables may contain user data)
  3. No new gating is needed: is_content_tracing_enabled() and the MELLEA_TRACE_CONTENT / OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT flags are already wired in mellea/telemetry/tracing.py by PR #1036 (feat(telemetry): close five OTel GenAI semantic convention emission gaps, #1035).
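A matching emission sketch for steps 2 and 3 (the attribute names, is_content_tracing_enabled(), and the mellea/telemetry/tracing.py location are from this issue and #1036; the helper below and the way the plugin obtains the span are assumptions):

```python
import json

from opentelemetry import trace

# Per #1036 this flag helper lives in mellea/telemetry/tracing.py; the exact
# import path is assumed from that file location.
from mellea.telemetry.tracing import is_content_tracing_enabled


def emit_prompt_template(span: trace.Span, template: str, variables: dict) -> None:
    """Hypothetical helper a BackendTracingPlugin span hook could call."""
    # Template text is not user content, so it is always emitted.
    span.set_attribute("llm.prompt_template.template", template)
    # Variables may contain user data: emit only when content tracing is
    # opted in via MELLEA_TRACE_CONTENT /
    # OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT.
    if is_content_tracing_enabled():
        span.set_attribute(
            "llm.prompt_template.variables", json.dumps(variables, default=str)
        )
```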

Prerequisites

#1045 must land first so that BackendTracingPlugin exists.

Related

Epic #444 Phase 2 — alongside the existing #1035 entry.
PR #1036 (where the component-level prototype was withdrawn).
