feat(watsonx): Added support for watsonx chat #3731
adharshctr wants to merge 8 commits into traceloop:main
Conversation
Important
Looks good to me! 👍
Reviewed everything up to cc33da2 in 10 seconds. Details:
- Reviewed 158 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 0 draft comments
Note: Reviews paused. This branch appears to be under active development, so to avoid overwhelming you with review comments during an influx of new commits, CodeRabbit has automatically paused this review. This behavior is configurable.
📝 Walkthrough
Adds OpenTelemetry instrumentation for Watsonx `ModelInference.chat`.
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App as Application
    participant Wrapper as OTEL Wrapper
    participant Model as Watsonx ModelInference
    participant Telemetry as Spans & Metrics
    App->>Wrapper: call chat(messages / prompt)
    Wrapper->>Wrapper: _handle_input (extract messages, set GEN_AI_PROMPT attrs)
    Wrapper->>Model: invoke chat(...) (messages / converted prompt)
    Model-->>Wrapper: return chat response
    Wrapper->>Wrapper: _handle_chat_response (record model, per-message content, finish reasons, usage, duration)
    Wrapper->>Telemetry: update histograms/counters, emit MessageEvent/ChoiceEvent (if enabled)
    Wrapper-->>App: return response
```
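The flow in the diagram can be sketched as a minimal, self-contained wrapper. Everything here (`wrapped_chat`, the flat `telemetry` dict, the attribute keys) is an illustrative stand-in for the package's actual span/metric machinery, not its real API:

```python
import time

def wrapped_chat(model_chat, messages, telemetry):
    # _handle_input step: extract messages and record prompt attributes.
    for i, msg in enumerate(messages):
        telemetry[f"gen_ai.prompt.{i}.content"] = msg["content"]
        telemetry[f"gen_ai.prompt.{i}.role"] = msg["role"]
    start = time.time()
    response = model_chat(messages)  # invoke the underlying chat
    telemetry["duration_s"] = time.time() - start
    # _handle_chat_response step: record per-choice completion content.
    for i, choice in enumerate(response.get("choices", [])):
        telemetry[f"gen_ai.completion.{i}.content"] = choice["message"]["content"]
    return response

telemetry = {}
resp = wrapped_chat(
    lambda msgs: {"choices": [{"message": {"content": "hi"}}]},
    [{"role": "user", "content": "hello"}],
    telemetry,
)
```

The key property the review comments below keep circling: the wrapper must see normalized inputs before recording, and must record response-level metrics exactly once.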
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/__init__.py`:
- Around line 665-677: The current flow calls _handle_input before converting a
chat "prompt" into "messages", causing prompt-based chat calls to miss
gen_ai.prompt.* attributes; modify the wrapping logic so the prompt-to-messages
normalization (the block checking to_wrap.get("method") == "chat" and converting
kwargs["prompt"] into kwargs["messages"]) runs before _handle_input is invoked
(or ensure _handle_input is called after that conversion), and keep the
generate_text_stream raw_response handling (to_wrap.get("method") ==
"generate_text_stream") intact; update references in the wrapper around to_wrap,
kwargs, and _handle_input so that prompt-based calls always have messages
present when _handle_input inspects inputs.
- Around line 568-630: _handle_chat_response currently ignores the event_logger
parameter so chat responses never emit ChoiceEvent objects; update
_handle_chat_response to, for each choice in choices, create and emit a
ChoiceEvent via event_logger (when event_logger is truthy and not legacy)
including the choice index, content/text, finish_reason/stop reason, model_id,
and token counts/usage metadata (use prompt_tokens, completion_tokens,
total_tokens and shared_attributes from _metric_shared_attributes) so behavior
matches the generate path; ensure you reference and populate the same attribute
names used elsewhere (e.g., GenAIAttributes.GEN_AI_RESPONSE_MODEL,
SpanAttributes.LLM_RESPONSE_STOP_REASON, GenAIAttributes.GEN_AI_TOKEN_TYPE) and
call the event_logger API the same way the generate flow does.
- Around line 503-517: The chat branch can call _emit_input_events with empty
args causing an args[0] access error; when detecting chat (messages variable)
ensure you pass messages as the first positional argument to _emit_input_events
(or otherwise populate args) before calling it. Concretely, update the call site
guarded by should_emit_events and event_logger so that if not args and messages
is set you call _emit_input_events((messages,), kwargs, event_logger) (or inject
messages into kwargs in the shape _emit_input_events expects), referencing the
existing symbols _emit_input_events, messages, args, kwargs, should_emit_events
and event_logger.
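The ordering bug in the first comment can be reproduced and fixed in isolation. This `_handle_input` and `wrap_chat` are simplified stand-ins for the wrapper described above (the real handler sets span attributes, not dict entries), showing only why the prompt-to-messages conversion must run first:

```python
def _handle_input(recorded, kwargs):
    # Stand-in for the real handler: it only looks at "messages",
    # which is why an unconverted "prompt" kwarg is invisible to it.
    for i, msg in enumerate(kwargs.get("messages") or []):
        recorded[f"gen_ai.prompt.{i}.content"] = msg.get("content")
        recorded[f"gen_ai.prompt.{i}.role"] = msg.get("role")

def wrap_chat(kwargs):
    recorded = {}
    # Normalization must happen BEFORE _handle_input, or prompt-based
    # chat calls record no gen_ai.prompt.* attributes at all.
    if "prompt" in kwargs and "messages" not in kwargs:
        prompt = kwargs.pop("prompt")
        kwargs["messages"] = [{"role": "user", "content": prompt}]
    _handle_input(recorded, kwargs)
    return recorded

attrs = wrap_chat({"prompt": "hello"})
```

Moving the same two statements below `_handle_input` would leave `attrs` empty, which is exactly the missing-attributes symptom the comment describes.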
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/__init__.py
Force-pushed 08d761d to b75a4a6
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/__init__.py`:
- Around line 591-649: The _handle_chat_response function assumes response is a
dict and inserts possibly None attributes into metrics; validate and coerce
inputs: ensure response = response or {} and type-check it (treat non-dict as
empty dict), coerce model_id = response.get("model_id") or response.get("model")
or "unknown", ensure choices = response.get("choices", []) is a list before
iterating, and normalize finish_reason = choice.get("finish_reason") or
"unknown" before using; when updating counters/histograms (response_counter,
token_histogram, duration_histogram, duration_histogram.record,
response_counter.add) only call them if the metric object exists and pass
attributes with no None values (e.g., build attributes via
{GenAIAttributes.GEN_AI_RESPONSE_MODEL: model_id,
SpanAttributes.LLM_RESPONSE_STOP_REASON: finish_reason} after normalizing), and
coerce usage token fields to ints with defaults (prompt_tokens =
int(usage.get("prompt_tokens", 0)) etc.); keep using _set_span_attribute and
_metric_shared_attributes but ensure they receive sanitized values.
- Around line 503-527: The chat input handling in _handle_input only inspects
kwargs["messages"], missing positional messages in args and risking malformed
events when entire dicts are emitted as content; update the logic to also check
positional args for a messages list (e.g., detect if args contains a list/dict
representing chat messages), iterate that list to set prompt span attributes via
_set_span_attribute (using GenAIAttributes.GEN_AI_PROMPT and span) for each
message index and role, and when emitting events (emit_event with MessageEvent)
ensure you only pass string content and an explicit role by validating each
message is a mapping (dict-like), extracting msg.get("content") and
msg.get("role", "user"), and skipping or normalizing non-dict items instead of
forwarding whole objects; fall back to _emit_input_events only when no valid
messages list is found.
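The positional-args and validation behavior requested above can be sketched as a standalone helper. `resolve_messages` is hypothetical (the real code sets span attributes and emits events rather than returning a list), but the detection and skip-non-dict logic mirrors the comment:

```python
def resolve_messages(args, kwargs):
    # Prefer kwargs["messages"], but fall back to a positional list.
    messages = kwargs.get("messages")
    if messages is None and args and isinstance(args[0], list):
        messages = args[0]
    cleaned = []
    for msg in messages or []:
        if not isinstance(msg, dict):
            continue  # skip non-mapping items instead of emitting raw objects
        content = msg.get("content")
        if isinstance(content, str):
            cleaned.append({"role": msg.get("role", "user"), "content": content})
    return cleaned

# Messages passed positionally, with one malformed entry mixed in:
msgs = resolve_messages(([{"role": "system", "content": "be brief"}, "junk"],), {})
```

Only valid dict messages with string content survive, so downstream event emission never forwards whole objects as content.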
Force-pushed db0447f to f3bc46c
Actionable comments posted: 1
🧹 Nitpick comments (1)
packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/__init__.py (1)
504-519: Chat path does not record model parameters on span.
The `generate` path calls `set_model_input_attributes(span, instance)` to record model configuration (temperature, top_k, decoding_method, etc.), but the `chat` path skips this. This results in inconsistent telemetry between generate and chat calls.
Proposed fix:

```diff
 if "chat" in name:
+    set_model_input_attributes(span, instance)
     messages = kwargs.get("messages")
     if messages is None and args and isinstance(args[0], list):
         messages = args[0]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/__init__.py` around lines 504 - 519, The chat branch is missing setting model input attributes like the generate path does; add a call to set_model_input_attributes(span, instance) in the "chat" branch (e.g., right after resolving messages and before iterating messages) so model configuration (temperature, top_k, decoding_method, etc.) is recorded on the span; ensure the call uses the same span and instance variables used elsewhere and remains guarded by span.is_recording() if appropriate.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/__init__.py`:
- Around line 625-630: The response counter is being incremented inside the
choices loop (so a response with multiple choices increments multiple times);
move the response_counter.add(1, attributes=...) call out of the choices loop so
it runs once per response, keeping the same attributes
(GenAIAttributes.GEN_AI_RESPONSE_MODEL: model_id and
SpanAttributes.LLM_RESPONSE_STOP_REASON: finish_reason) and leaving the
per-choice logic unchanged; update code around response_counter, the choices
iteration, and any local variables used to compute finish_reason/model_id so
they are available when you call response_counter.add.
---
Nitpick comments:
In
`@packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/__init__.py`:
- Around line 504-519: The chat branch is missing setting model input attributes
like the generate path does; add a call to set_model_input_attributes(span,
instance) in the "chat" branch (e.g., right after resolving messages and before
iterating messages) so model configuration (temperature, top_k, decoding_method,
etc.) is recorded on the span; ensure the call uses the same span and instance
variables used elsewhere and remains guarded by span.is_recording() if
appropriate.
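The per-choice overcounting from the actionable comment above is easy to demonstrate with a stand-in counter (`Counter` and `record_response` are illustrative; the real code uses the OpenTelemetry metrics API and the semantic-convention attribute constants):

```python
class Counter:
    """Trivial stand-in for an OTel counter that records every add() call."""
    def __init__(self):
        self.calls = []

    def add(self, value, attributes=None):
        self.calls.append((value, attributes))

def record_response(response, counter):
    model_id = response.get("model_id") or "unknown"
    last_finish = "unknown"
    for choice in response.get("choices") or []:
        last_finish = choice.get("finish_reason") or "unknown"
        # ... per-choice span attributes would be set here ...
    # The increment lives OUTSIDE the loop: one add() per response,
    # regardless of how many choices the response carries.
    counter.add(1, attributes={
        "gen_ai.response.model": model_id,
        "llm.response.stop_reason": last_finish,
    })

counter = Counter()
record_response(
    {"model_id": "m", "choices": [{"finish_reason": "stop"}] * 3}, counter
)
```

With the `add()` inside the loop, the same three-choice response would bump the counter three times, which is the metric inflation the comment flags.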
Force-pushed 96ef9a1 to e9d8d27
♻️ Duplicate comments (5)
packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/__init__.py (5)
578-581: ⚠️ Potential issue | 🟡 Minor
Add type validation for the `response` parameter.
The function assumes `response` is a dict but doesn't validate this. If `response` is not a dict, `response.get()` will raise an `AttributeError` that's silently suppressed by `@dont_throw`, causing telemetry to be lost without warning.
Proposed fix:

```diff
-    if not span.is_recording():
+    if not span.is_recording() or not isinstance(response, dict):
         return
-    model_id = response.get("model_id") or response.get("model")
+    model_id = response.get("model_id") or response.get("model") or "unknown"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/__init__.py` around lines 578 - 581, The code assumes response is a dict and calls response.get(...) which can raise AttributeError if response is not a mapping; update the code around the span.is_recording() check to validate response (e.g., isinstance(response, dict) or isinstance(response, collections.abc.Mapping) ) before calling response.get, and if it is not a mapping set model_id = None (or extract safely using getattr/try/except) so no AttributeError is raised (note this code path is inside the function using span.is_recording()); ensure the safe-lookup replaces the current model_id = response.get("model_id") or response.get("model") expression and preserves existing behavior when response is a mapping.
661-674: ⚠️ Potential issue | 🟠 Major
Normalize chat prompt to messages before `_handle_input`.
`_handle_input` at line 661 runs before the prompt-to-messages conversion at lines 669-674. For chat calls using the `prompt` parameter, `_handle_input` will find `kwargs.get("messages")` is `None` and skip setting `gen_ai.prompt.*` attributes.
Move the conversion block before `_handle_input` is called.
Proposed fix:

```diff
+        if "chat" in name:
+            if to_wrap.get("method") == "chat":
+                if "prompt" in kwargs and "messages" not in kwargs:
+                    prompt = kwargs.pop("prompt")
+                    kwargs["messages"] = [
+                        {"role": "user", "content": prompt}
+                    ]
         _handle_input(span, event_logger, name, instance, args, kwargs)

         if "generate" in name or "chat" in name:
             if to_wrap.get("method") == "generate_text_stream":
                 if (raw_flag := kwargs.get("raw_response", None)) is None:
                     kwargs = {**kwargs, "raw_response": True}
                 elif raw_flag is False:
                     kwargs["raw_response"] = True
-            if to_wrap.get("method") == "chat":
-                if "prompt" in kwargs and "messages" not in kwargs:
-                    prompt = kwargs.pop("prompt")
-                    kwargs["messages"] = [
-                        {"role": "user", "content": prompt}
-                    ]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/__init__.py` around lines 661 - 674, The prompt-to-messages normalization currently runs after calling _handle_input, so _handle_input cannot see converted chat messages; move the block that checks to_wrap.get("method") == "chat" and converts kwargs["prompt"] into kwargs["messages"] (pop "prompt" and set messages = [{"role":"user","content":prompt}]) so it executes before the call to _handle_input(span, event_logger, name, instance, args, kwargs); keep the existing generate_text_stream raw_response handling where it is but ensure the chat conversion is applied earlier so _handle_input can record gen_ai.prompt.* attributes.
586-595: ⚠️ Potential issue | 🟠 Major
`_handle_chat_response` does not emit `ChoiceEvent` when event logging is enabled.
The `event_logger` parameter is passed but never used. In non-legacy mode, chat completions won't emit `ChoiceEvent`s, unlike the generate path which calls `_emit_response_events`.
Proposed fix - emit ChoiceEvent for each choice:

```diff
     for index, choice in enumerate(choices):
         message = choice.get("message", {})
         content = message.get("content")
         finish_reason = choice.get("finish_reason") or "unknown"
-        if content and should_send_prompts():
+
+        if should_emit_events() and event_logger:
+            emit_event(
+                ChoiceEvent(
+                    index=index,
+                    message=message,
+                    finish_reason=finish_reason,
+                ),
+                event_logger,
+            )
+        elif content and should_send_prompts():
             _set_span_attribute(
                 span,
                 f"{GenAIAttributes.GEN_AI_COMPLETION}.{index}.content",
                 content,
             )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/__init__.py` around lines 586 - 595, The _handle_chat_response function currently ignores the event_logger parameter so no ChoiceEvent is emitted for chat completions; update _handle_chat_response to, inside the choices loop (the for index, choice in enumerate(choices) block), detect when event_logger is provided and not in legacy mode and call the same event-emission logic used by the generate path (e.g., invoke _emit_response_events or the equivalent ChoiceEvent creation) for each choice with the correct index, content, finish_reason and span/context so that ChoiceEvent is emitted per choice; use the existing symbols event_logger, _emit_response_events, ChoiceEvent, and GenAIAttributes.GEN_AI_COMPLETION to locate where to add the call.
597-602: ⚠️ Potential issue | 🟠 Major
Response counter incremented per choice instead of per response.
The `response_counter.add(1)` call is inside the choices loop, so a response with N choices increments the counter N times. This inflates metrics and is inconsistent with the generate path. Additionally, `model_id` and `finish_reason` can be `None`, which may cause issues with metric attributes.
Proposed fix - move counter outside the loop and add defaults:

```diff
     choices = response.get("choices", [])
+    last_finish_reason = "unknown"
     for index, choice in enumerate(choices):
         message = choice.get("message", {})
         content = message.get("content")
-        finish_reason = choice.get("finish_reason")
+        finish_reason = choice.get("finish_reason") or "unknown"
+        last_finish_reason = finish_reason
         if content and should_send_prompts():
             _set_span_attribute(
                 span,
                 f"{GenAIAttributes.GEN_AI_COMPLETION}.{index}.content",
                 content,
             )
-        if response_counter:
-            attributes = {
-                GenAIAttributes.GEN_AI_RESPONSE_MODEL: model_id,
-                SpanAttributes.LLM_RESPONSE_STOP_REASON: finish_reason,
-            }
-            response_counter.add(1, attributes=attributes)
+    if response_counter:
+        attributes = {
+            GenAIAttributes.GEN_AI_RESPONSE_MODEL: model_id or "unknown",
+            SpanAttributes.LLM_RESPONSE_STOP_REASON: last_finish_reason,
+        }
+        response_counter.add(1, attributes=attributes)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/__init__.py` around lines 597 - 602, The response counter is being incremented inside the choices loop (so N choices => N increments) and uses possibly None attributes; move the response_counter.add(1, ...) call out of the choices iteration so it's executed once per response (locate the choices loop and the response_counter reference), and ensure the attributes passed use safe defaults for GenAIAttributes.GEN_AI_RESPONSE_MODEL and SpanAttributes.LLM_RESPONSE_STOP_REASON (e.g., fallback to "unknown" or empty string when model_id or finish_reason is None) before calling response_counter.add; keep the attribute keys the same (GenAIAttributes.GEN_AI_RESPONSE_MODEL, SpanAttributes.LLM_RESPONSE_STOP_REASON).
516-517: ⚠️ Potential issue | 🟠 Major
Chat input events won't be emitted correctly.
`_emit_input_events` expects `prompt` in kwargs or `args[0]`, but for chat calls the input is in `kwargs["messages"]`. This will either fail to find the prompt (returning `None` and causing issues) or emit incorrect events.
The chat branch should emit `MessageEvent` for each message in `messages` instead of calling `_emit_input_events`.
Proposed fix:

```diff
     if should_emit_events() and event_logger:
-        _emit_input_events(args, kwargs, event_logger)
+        if "chat" in name and isinstance(kwargs.get("messages"), list):
+            for msg in kwargs["messages"]:
+                emit_event(
+                    MessageEvent(
+                        content=msg.get("content"),
+                        role=msg.get("role", "user"),
+                    ),
+                    event_logger,
+                )
+        else:
+            _emit_input_events(args, kwargs, event_logger)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/__init__.py` around lines 516 - 517, The current event emission calls _emit_input_events regardless of call type, but chat calls place input in kwargs["messages"]; change the block guarded by should_emit_events() and event_logger to detect chat inputs (e.g., if "messages" in kwargs or kwargs.get("messages") is not None) and, for that path, iterate over kwargs["messages"] and emit a MessageEvent (using event_logger.emit(MessageEvent(...)) or the existing event emission API) for each message; otherwise fall back to calling _emit_input_events(args, kwargs, event_logger). Ensure you reference and use the existing symbols _emit_input_events, should_emit_events, event_logger, MessageEvent and the "messages" kwarg.
This reverts commit e9d8d27.
Force-pushed 2da9f73 to 8c9ec6e
♻️ Duplicate comments (1)
packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/__init__.py (1)
625-630: ⚠️ Potential issue | 🟠 Major
Move response counter increment outside the choices loop.
`response_counter` is currently incremented per choice, which overcounts multi-choice chat responses. This should be recorded once per response.
Proposed fix:

```diff
     choices = response.get("choices") or []
+    last_finish_reason = "unknown"
     for index, choice in enumerate(choices):
         message = choice.get("message", {})
         content = message.get("content")
         finish_reason = choice.get("finish_reason") or "unknown"
+        last_finish_reason = finish_reason
@@
-        if response_counter:
-            attributes = {
-                GenAIAttributes.GEN_AI_RESPONSE_MODEL: model_id,
-                SpanAttributes.LLM_RESPONSE_STOP_REASON: finish_reason,
-            }
-            response_counter.add(1, attributes=attributes)
+    if response_counter:
+        attributes = {
+            GenAIAttributes.GEN_AI_RESPONSE_MODEL: model_id,
+            SpanAttributes.LLM_RESPONSE_STOP_REASON: last_finish_reason,
+        }
+        response_counter.add(1, attributes=attributes)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/__init__.py` around lines 625 - 630, The response_counter.add call is inside the per-choice loop and thus increments for each choice; move the response_counter.add(1, attributes=...) out of the choices loop so it is executed once per response. Keep using the same model_id and finish_reason values you already compute (e.g., the response-level or first-choice finish_reason) and call response_counter.add after the loop (or after choices are assembled) with attributes {GenAIAttributes.GEN_AI_RESPONSE_MODEL: model_id, SpanAttributes.LLM_RESPONSE_STOP_REASON: finish_reason}.
Title convention: `feat(instrumentation): ...` or `fix(instrumentation): ...`
Adds support for Watsonx
chatmethod in instrumentation, handling input/output attributes and metrics.chatmethod inModelInferencein__init__.py._handle_input()to processchatmessages and set span attributes._handle_chat_response()to handle chat responses, setting model ID, content, and token usage attributes.chatmethod toWRAPPED_METHODS_WATSON_AI_VERSION_1for instrumentation._wrap()to handlechatmethod, including prompt conversion to messages.chatresponses in histograms and counters.This description was created by
for cc33da2. You can customize this summary. It will automatically update as commits are pushed.