Description:
Verify and expand tracing instrumentation to cover all LLM backends consistently.
Detailed Requirements:
- Audit current backend instrumentation:
  - OpenAI: Well instrumented (reference implementation)
  - Ollama: Verify/add instrumentation
  - HuggingFace: Verify/add instrumentation
  - WatsonX: Verify/add instrumentation
  - LiteLLM: Verify/add instrumentation
  - vLLM: Verify/add instrumentation
- Ensure each backend has:
  - `instrument_generate_from_context()` for chat
  - `instrument_generate_from_raw()` for completion
  - Token usage recording
  - Error handling with semantic types
- Add backend-specific attributes where relevant
- Create instrumentation checklist/tests
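As a shared reference for what "instrumented" means here, the pattern could look like the sketch below. This is a hypothetical, dependency-free sketch (a real implementation would likely emit spans through a tracing library); the hook names come from this ticket, but the `SPANS` list, the decorator shape, and the attribute keys are illustrative assumptions:

```python
import functools
import time

SPANS = []  # stand-in sink for emitted spans (a real tracer would export these)

def instrument_generate(span_name, record_tokens):
    """Hypothetical decorator showing the three requirements per backend call:
    open a span, record token usage as attributes, and tag errors with a
    semantic type before re-raising."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            span = {"name": span_name, "start": time.monotonic()}
            try:
                result = fn(*args, **kwargs)
                span["attributes"] = record_tokens(result)  # token usage recording
                return result
            except Exception as exc:
                span["error.type"] = type(exc).__name__  # semantic error type
                raise
            finally:
                span["duration_s"] = time.monotonic() - span["start"]
                SPANS.append(span)
        return wrapper
    return decorator

@instrument_generate(
    "ollama.generate_from_context",
    record_tokens=lambda r: {
        "tokens.prompt": r["prompt_tokens"],
        "tokens.completion": r["completion_tokens"],
    },
)
def generate_from_context(prompt):
    # stand-in for a real backend call
    return {"text": "ok", "prompt_tokens": 3, "completion_tokens": 1}
```

Each backend would then get one such wrapper per entry point (`generate_from_context` for chat, `generate_from_raw` for completion), with backend-specific attributes added where relevant.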
Files to Modify:
- mellea/backends/ollama.py
- mellea/backends/huggingface.py
- mellea/backends/watsonx.py
- mellea/backends/litellm.py
- mellea/backends/vllm.py
Backend Instrumentation Checklist:
| Backend     | generate_from_context | generate_from_raw | Token Recording | Error Handling |
|-------------|-----------------------|-------------------|-----------------|----------------|
| OpenAI      | Yes                   | Yes               | Yes             | Yes            |
| Ollama      | ?                     | ?                 | ?               | ?              |
| HuggingFace | ?                     | ?                 | Limited         | ?              |
| WatsonX     | ?                     | ?                 | Yes             | ?              |
| LiteLLM     | ?                     | ?                 | Yes             | ?              |
| vLLM        | ?                     | ?                 | ?               | ?              |
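The "checklist/tests" requirement could start from a small structural check that flags missing hooks per backend. The hook names are the ones this ticket specifies; the helper name, the dummy classes, and the check itself are hypothetical:

```python
# Instrumentation hooks every backend class is expected to expose, per this ticket.
REQUIRED_HOOKS = (
    "instrument_generate_from_context",
    "instrument_generate_from_raw",
)

def missing_hooks(backend_cls):
    """Return the names of required instrumentation hooks absent from a backend class."""
    return [
        name for name in REQUIRED_HOOKS
        if not callable(getattr(backend_cls, name, None))
    ]

# Dummy backends standing in for the real mellea classes:
class FullyInstrumentedBackend:
    def instrument_generate_from_context(self): ...
    def instrument_generate_from_raw(self): ...

class BareBackend:
    pass
```

A parametrized test could then run `missing_hooks` over every backend module listed above and fail with the exact gaps, turning the checklist table into an enforced invariant.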
Acceptance Criteria: