
Add extra_headers support to OpenAIModelConfiguration #45687

Draft
jingjingjia-ms wants to merge 2 commits into Azure:main from jingjingjia-ms:feature/eval-openai-default-headers

Conversation


@jingjingjia-ms jingjingjia-ms commented Mar 13, 2026

Summary

Adds an extra_headers field to OpenAIModelConfiguration TypedDict, enabling prompty-based evaluators to send custom HTTP headers on every LLM call. This unblocks integration with LLM endpoints that require custom headers (e.g., M365 LLM API requires X-ModelType).
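As a sketch, the proposed shape looks like this (the surrounding fields shown here are illustrative, not the exact field list of the real TypedDict in `_model_configurations.py`; all values are placeholders):

```python
from typing import Dict

try:  # NotRequired moved into typing in Python 3.11
    from typing import NotRequired, TypedDict
except ImportError:
    from typing_extensions import NotRequired, TypedDict


class OpenAIModelConfiguration(TypedDict):
    """Illustrative sketch; the real class lives in _model_configurations.py."""
    api_key: str
    model: str
    base_url: NotRequired[str]
    # New in this PR: custom HTTP headers sent on every LLM call.
    extra_headers: NotRequired[Dict[str, str]]


config: OpenAIModelConfiguration = {
    "api_key": "<api-key>",
    "model": "<deployment-or-model-name>",
    "extra_headers": {"X-ModelType": "<model-type>"},  # e.g. required by M365 LLM API
}
```

Because `extra_headers` is `NotRequired`, existing configurations that omit it remain valid.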

Changes

SDK (2 lines):

  • _model_configurations.py: Add extra_headers: NotRequired[Dict[str, str]] to OpenAIModelConfiguration
  • _common/utils.py: Merge extra_headers from model config into per-call extra_headers in construct_prompty_model_config()

Samples:

  • test_m365_llmapi_e2e.py: E2E test demonstrating RelevanceEvaluator calling M365 LLM API
  • m365-llm-api-investigation.md: Investigation doc for M365 LLM API integration (M365-Copilot-Agent-Evals#138)
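A minimal sketch of the utils.py change (the function name and the precedence rule here are assumptions for illustration; the PR only states that config-level extra_headers are merged into the per-call extra_headers inside construct_prompty_model_config()):

```python
from typing import Any, Dict


def merge_extra_headers(model_config: Dict[str, Any],
                        parameters: Dict[str, Any]) -> Dict[str, Any]:
    """Illustrative helper (hypothetical name): fold headers from the model
    configuration into the per-call ``extra_headers`` dict that is passed to
    the OpenAI client. Per-call entries winning over config-level entries is
    an assumption, not confirmed by the PR description."""
    config_headers = model_config.get("extra_headers") or {}
    if config_headers:
        merged = dict(config_headers)                      # config-level base
        merged.update(parameters.get("extra_headers") or {})  # per-call on top
        parameters["extra_headers"] = merged
    return parameters


params = merge_extra_headers(
    {"extra_headers": {"X-ModelType": "<model-type>"}},
    {"extra_headers": {"X-Request-Id": "123"}},
)
```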

How it works

Customer-supplied headers flow through the OpenAI SDK's extra_headers parameter on chat.completions.create(); the SDK applies them after its default headers, so any header can be overridden, including Authorization.
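That merge order can be illustrated with a plain dict update (header values are placeholders):

```python
# Simulates the header-merge order used by the OpenAI Python SDK: per-request
# extra_headers are applied after the client's default headers, so a
# request-level entry replaces a default entry with the same name.
default_headers = {
    "Authorization": "Bearer <client-level-token>",
    "User-Agent": "openai-python",
}
extra_headers = {
    "Authorization": "Bearer <per-call-token>",  # overrides the client default
    "X-ModelType": "<model-type>",               # new header, passed through as-is
}

# Later keys win in a dict merge, mirroring "extra_headers applied last".
final_headers = {**default_headers, **extra_headers}
```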

Validated

E2E test confirmed: RelevanceEvaluator -> M365 LLM API -> score 4.0/5

…I integration

Add extra_headers field to OpenAIModelConfiguration TypedDict and merge it
into per-call extra_headers in construct_prompty_model_config. This enables
prompty-based evaluators to send custom HTTP headers (e.g., X-ModelType)
required by the M365 LLM API.

Includes an E2E test script and investigation doc for calling M365 LLM API
from the Azure AI Evaluation SDK.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@github-actions github-actions bot added the Community Contribution, customer-reported, and Evaluation labels on Mar 13, 2026
@github-actions
Copy link

Thank you for your contribution @jingjingjia-ms! We will review the pull request and get back to you soon.

