This repository was archived by the owner on Aug 20, 2025. It is now read-only.

LLMEvaluator that evaluates a model's output with an LLM #268

Description

@deep-diver

This is a custom TFX component project idea.
I hope to get some feedback from @rcrowe-google, @hanneshapke, @sayakpaul, and @casassg.

Temporary name of the component: LLMEvaluator

Behaviour
: LLMEvaluator evaluates a trained model's performance via a designated LLM service (e.g. PaLM, Gemini, ChatGPT, ...) by comparing the model's outputs against the labels provided by ExampleGen.
: LLMEvaluator takes an instruction parameter that lets you specify the prompt sent to the LLM. Since each LLM service may interpret the same prompt differently, the instruction should be tailored per service and per task (a rough interface sketch follows below).
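
For concreteness, here is a minimal sketch of what the component's interface could look like, built on TFX's Python-function component API. Everything beyond the inputs named in this issue is an assumption: the `llm_service` parameter, the `ModelEvaluation` output type, and the executor steps are placeholders for discussion, not a settled design.

```python
# Minimal interface sketch (assumptions flagged in comments), not a final design.
from tfx.dsl.component.experimental.decorators import component
from tfx.dsl.component.experimental.annotations import (
    InputArtifact,
    OutputArtifact,
    Parameter,
)
from tfx.types.standard_artifacts import Examples, Model, ModelEvaluation


@component
def LLMEvaluator(
    examples: InputArtifact[Examples],          # labeled examples from ExampleGen
    model: InputArtifact[Model],                # trained model under evaluation
    evaluation: OutputArtifact[ModelEvaluation],
    instruction: Parameter[str],                # task-specific prompt for the LLM judge
    llm_service: Parameter[str] = 'palm',       # assumed knob: 'palm' | 'gemini' | 'chatgpt'
):
    """Scores the model's outputs against labels via a designated LLM service."""
    # Executor outline (pseudocode):
    # 1. Load the eval split of `examples` and run `model` to get predictions.
    # 2. For each (prediction, label) pair, render `instruction` into a prompt
    #    and ask the chosen LLM service to judge the prediction.
    # 3. Aggregate the judgments into metrics and write them to `evaluation`.
    ...
```

Wired into a pipeline, it would sit where the standard Evaluator does, e.g. `LLMEvaluator(examples=example_gen.outputs['examples'], model=trainer.outputs['model'], instruction='Rate the answer against the reference label ...')`.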

Why
: Leveraging an LLM service to evaluate a model has become common practice these days (especially when fine-tuning an open-source LLM such as LLaMA).
