[AI Projects] Fix code-based evaluator catalog sample: add missing data_mapping #46824

Draft

slister1001 wants to merge 1 commit into
Conversation
The sample at `sample_eval_catalog_code_based_evaluators.py` failed with HTTP 400 `MissingRequiredDataMapping: Data mapping for required field item is missing` because the testing criterion had no `data_mapping` for the evaluator's required `item` field. Also correct the `pass_threshold` init-parameter schema from `"string"` to `"number"` to match the float value (`0.5`) actually passed at runtime.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Fix code-based evaluator catalog sample
The sample `sample_eval_catalog_code_based_evaluators.py` currently fails at `client.evals.create(...)` with HTTP 400: `MissingRequiredDataMapping: Data mapping for required field item is missing`.

**Root cause**
The evaluator's `data_schema` declares `"required": ["item"]` (the `grade(sample, item)` function expects the whole item object), but the `testing_criteria` entry omitted `data_mapping` entirely. The server validates that every field in `data_schema.required` has a corresponding mapping in the testing criterion.

For comparison, the working `sample_eval_catalog_prompt_based_evaluators.py` declares each field individually in `data_schema` and maps each one in `data_mapping`. The code-based sample needs to map the single required `item` field to the whole data-source item.

**Fixes**
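The server-side check described above can be sketched in plain Python. This is a hypothetical helper for illustration; the actual Azure AI Foundry validation logic is not public:

```python
def find_missing_mappings(data_schema: dict, data_mapping: dict) -> list[str]:
    """Return the required data_schema fields that have no entry in data_mapping."""
    required = data_schema.get("required", [])
    return [field for field in required if field not in data_mapping]


# The failing sample: "item" is required but nothing is mapped.
print(find_missing_mappings({"required": ["item"]}, {}))  # ['item']

# After the fix: the required field is mapped, so validation passes.
print(find_missing_mappings({"required": ["item"]}, {"item": "{{item}}"}))  # []
```

Any non-empty result corresponds to the `MissingRequiredDataMapping` error the sample was hitting.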
- Add `data_mapping={"item": "{{item}}"}` to the testing criterion, passing the entire data-source item to the evaluator's `item` parameter.
- Change the `pass_threshold` type in `init_parameters.properties` from `"string"` to `"number"`: the runtime value `0.5` is a float, not a string. After the data-mapping fix is in place, this would be the next validation failure.

**Sample diff**
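A minimal sketch of the two corrected fragments as plain Python data, assuming the field names described in this PR (the surrounding keyword arguments of the real sample are not reproduced here; only `data_mapping` and the `pass_threshold` schema come from the fix):

```python
# Illustrative shape only; not the actual diff from this PR.

# Fix 1: map the evaluator's single required "item" field to the
# whole data-source item via the {{item}} template reference.
testing_criterion_fragment = {
    "data_mapping": {"item": "{{item}}"},
}

# Fix 2: declare pass_threshold as a JSON-schema "number" so it
# matches the float 0.5 passed at runtime (it was "string" before).
init_parameters_schema = {
    "type": "object",
    "properties": {"pass_threshold": {"type": "number"}},
}
```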
**Reproduction**
Without this fix, running the sample with `azure-ai-projects==2.1.0` against an Azure AI Foundry project produces the 400 error shown above. With this fix, evaluation creation succeeds.

**Scope**
Sample-only change. No SDK source code, generated code, public surface, or tests are affected.