Verify and expand tracing instrumentation to cover all LLM backends consistently #477

@psschwei

Description

Verify and expand tracing instrumentation to cover all LLM backends consistently.

Detailed Requirements:

  1. Audit current backend instrumentation:
    • OpenAI: Well instrumented (reference implementation)
    • Ollama: Verify/add instrumentation
    • HuggingFace: Verify/add instrumentation
    • WatsonX: Verify/add instrumentation
    • LiteLLM: Verify/add instrumentation
    • vLLM: Verify/add instrumentation
  2. Ensure each backend provides:
    • instrument_generate_from_context() for chat-style generation
    • instrument_generate_from_raw() for raw completion
    • Token usage recording
    • Error handling with semantic error types
  3. Add backend-specific span attributes where relevant
  4. Create an instrumentation checklist and matching tests
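The shared shape of requirement 2 can be sketched as a single wrapper that every backend reuses. This is a hypothetical illustration, not mellea's actual API: the function names come from this issue, but the decorator, the `TRACE_LOG` exporter stand-in, and the `usage` field layout are assumptions.

```python
# Hypothetical sketch of a shared instrumentation helper (not mellea's real API).
# It wraps a backend call and records the three checklist items from above:
# the operation traced, token usage, and a semantic error type on failure.
import functools
import time

TRACE_LOG: list[dict] = []  # stand-in for a real span exporter


def instrument_generate(backend_name: str, operation: str):
    """Decorator recording duration, token usage, and error type for one call."""

    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            span = {"backend": backend_name, "operation": operation}
            start = time.monotonic()
            try:
                result = fn(*args, **kwargs)
                # Token usage recording: backends that report usage attach it here.
                usage = result.get("usage", {}) if isinstance(result, dict) else {}
                span["prompt_tokens"] = usage.get("prompt_tokens")
                span["completion_tokens"] = usage.get("completion_tokens")
                return result
            except Exception as exc:
                # Semantic error type instead of a bare message string.
                span["error.type"] = type(exc).__name__
                raise
            finally:
                span["duration_s"] = time.monotonic() - start
                TRACE_LOG.append(span)

        return wrapper

    return decorator


@instrument_generate("ollama", "generate_from_context")
def fake_generate(prompt: str) -> dict:
    # Stand-in for a backend call; a real backend would hit its client here.
    return {"text": "ok", "usage": {"prompt_tokens": 3, "completion_tokens": 1}}
```

Applying the same decorator to each backend's context and raw entry points would make the checklist columns below trivially auditable.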

Files to Modify:

  • mellea/backends/ollama.py
  • mellea/backends/huggingface.py
  • mellea/backends/watsonx.py
  • mellea/backends/litellm.py
  • mellea/backends/vllm.py

Backend Instrumentation Checklist:

| Backend     | generate_from_context | generate_from_raw | Token Recording | Error Handling |
|-------------|-----------------------|-------------------|-----------------|----------------|
| OpenAI      | Yes                   | Yes               | Yes             | Yes            |
| Ollama      | ?                     | ?                 | ?               | ?              |
| HuggingFace | ?                     | ?                 | Limited         | ?              |
| WatsonX     | ?                     | ?                 | Yes             | ?              |
| LiteLLM     | ?                     | ?                 | Yes             | ?              |
| vLLM       | ?                     | ?                 | ?               | ?              |
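One way to turn the checklist above into a test (requirement 4) is to assert that every backend exposes both instrumented entry points. This is a sketch under assumptions: the method names are taken from this issue, but the fake backend class and the helper are illustrative stand-ins for importing the real `mellea.backends.*` modules.

```python
# Hypothetical checklist test. A real version would iterate over the actual
# backend classes from mellea/backends/ instead of this stand-in.
REQUIRED_METHODS = (
    "instrument_generate_from_context",
    "instrument_generate_from_raw",
)


class FakeOpenAIBackend:
    """Stand-in for a fully instrumented backend (the reference case)."""

    def instrument_generate_from_context(self, *args, **kwargs): ...

    def instrument_generate_from_raw(self, *args, **kwargs): ...


class FakeUninstrumentedBackend:
    """Stand-in for a backend still missing instrumentation."""


def missing_instrumentation(backend) -> list[str]:
    """Return the checklist methods a backend does not implement."""
    return [m for m in REQUIRED_METHODS if not callable(getattr(backend, m, None))]
```

A parametrized test over all six backends would then fail with the exact list of missing methods per backend, keeping the table above honest as instrumentation lands.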

Acceptance Criteria:

  • All backends share a consistent instrumentation pattern
  • Token recording works for every backend that reports usage
  • Errors are handled consistently across backends, with semantic error types
  • Instrumentation patterns are documented
