LCORE-1348: Regenerated OpenAPI doc #1486

Open
tisnik wants to merge 1 commit into lightspeed-core:main from tisnik:lcore-1348-regenerated-openapi-doc

Conversation

@tisnik
Contributor

@tisnik tisnik commented Apr 12, 2026

Description

LCORE-1348: Regenerated OpenAPI doc

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement
  • Benchmarks improvement

Tools used to create PR

  • Assisted-by: N/A
  • Generated by: N/A

Related Tickets & Documents

  • Related Issue #LCORE-1348

Summary by CodeRabbit

  • Documentation
    • Added 404 error response documentation for the inference endpoint
    • Enhanced function tool output format to support structured content (text, images, files) alongside plain text
    • Documented new configuration options for RHEL Lightspeed Command Line Assistant

@coderabbitai
Contributor

coderabbitai bot commented Apr 12, 2026

Walkthrough

Documentation updates to the OpenAPI specification and markdown files. Changes include adding 404 error handling to the /v1/infer endpoint, restructuring the output schema to support typed content arrays in addition to strings, moving allow_verbose_infer configuration to a new RlsapiV1Configuration component, and adding documentation for the RHEL Lightspeed Command Line Assistant integration.

Changes

| Cohort / File(s) | Summary |
|------------------|---------|
| **OpenAPI Schema Updates**<br>docs/openapi.json | Added 404 response definition to /v1/infer endpoint. Modified OpenAIResponseInputFunctionToolCallOutput.output schema from string to anyOf (string \| array of discriminated content items: text, image, file). |
| **Documentation & Configuration**<br>docs/openapi.md | Updated root endpoint description. Added new RlsapiV1Configuration component with allow_verbose_infer and quota_subject fields. Removed allow_verbose_infer from Customization. Added rlsapi_v1 global configuration entry for RHEL Lightspeed CLA. |
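
For orientation, a service configuration excerpt matching the description above might look like the following sketch. The file layout, key nesting, and default values shown here are assumptions for illustration, not confirmed contents of the repository:

```yaml
# Hypothetical lightspeed-stack config excerpt (sketch only)
rlsapi_v1:
  # Moved here from customization.allow_verbose_infer; gates extended
  # /v1/infer metadata (tool_calls, rag_chunks, token_usage)
  allow_verbose_infer: false
  # Identity field used as the quota subject; requires quota_handlers,
  # and org_id/system_id require rh-identity authentication
  quota_subject: org_id
```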

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
|------------|--------|-------------|
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title 'LCORE-1348: Regenerated OpenAPI doc' is directly related to the changes, which involve regenerating and updating the OpenAPI documentation in docs/openapi.json and docs/openapi.md. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/openapi.json`:
- Around line 4773-4792: The OpenAPI response for the /v1/infer path currently
uses a 404 (NotFoundResponse) with an example about "Model not configured", but
RlsapiV1InferRequest has no model/provider field so this is a server-side
configuration/backend error; change the status code from 404 to an appropriate
5xx (e.g., 500 or 503), update the response description to reflect a server
configuration/backend error, and point the schema reference to the existing 5xx
error schema (instead of NotFoundResponse); also update the example payload
key/value to match that 5xx schema so clients will treat it as a server error
rather than a missing-client resource.
- Around line 9180-9208: The discriminator on the output array is unsafe because
the mapped schemas OpenAIResponseInputMessageContentText,
OpenAIResponseInputMessageContentImage, and
OpenAIResponseInputMessageContentFile do not require the "type" property; update
each of those three schema definitions to include "type" in their "required"
array so validators and clients can reliably discriminate, and (optional but
recommended) add "additionalProperties": false to
OpenAIResponseInputMessageContentImage and OpenAIResponseInputMessageContentFile
to prevent unexpected fields.

In `@docs/openapi.md`:
- Line 69: Remove the duplicated sentence "Handle GET requests to the root
(\"/\") endpoint and returns the static HTML index page." in the docs/openapi.md
file (the repeated root-endpoint description) so only the original occurrence
remains; search for the exact sentence text and delete the redundant copy to
avoid duplicate documentation entries.
- Line 5471: Restore the explicit union type for
OpenAIResponseInputFunctionToolCallOutput.output instead of leaving it blank:
update the OpenAPI docs so OpenAIResponseInputFunctionToolCallOutput.output is
documented as "string | StructuredContentItem[]" (or the project's existing
structured-content item type name, e.g., OpenAIResponseStructuredContentItem[]),
describing it accepts either a plain string or an array of structured content
items; ensure the union and the referenced structured item schema are linked so
clients can see the contract.
- Line 6678: Update the OpenAPI docs entry for quota_subject to reflect the
implementation's strict validation rather than a permissive fallback: state that
quota_subject is validated against the current auth mode and that missing
required RH-Identity data will cause validation to fail (preventing quota
enforcement setup) instead of silently falling back to user_id; reference the
enforcement/validation logic in the Config handling of quota_subject and the
/v1/infer request auth checks in the RLS API handler so operators know the
setting may be rejected when RH-Identity info is absent.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: fc2fe1dc-c803-4219-b282-f27d4375961a

📥 Commits

Reviewing files that changed from the base of the PR and between 07ca1b1 and 61bdc56.

📒 Files selected for processing (2)
  • docs/openapi.json
  • docs/openapi.md
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Konflux kflux-prd-rh02 / lightspeed-stack-on-pull-request
  • GitHub Check: E2E: library mode / ci
  • GitHub Check: E2E: server mode / ci
  • GitHub Check: E2E Tests for Lightspeed Evaluation job
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: major
Repo: lightspeed-core/lightspeed-stack PR: 1469
File: src/models/config.py:1928-1933
Timestamp: 2026-04-07T14:44:42.022Z
Learning: In lightspeed-core/lightspeed-stack, `allow_verbose_infer` (previously `customization.allow_verbose_infer`, now `rlsapi_v1.allow_verbose_infer`) is only used internally by the `rlsapi_v1` `/infer` endpoint and has a single known consumer (the PR author). Backward compatibility for this config field relocation is intentionally not required and should not be flagged in future reviews.
📚 Learning: 2026-03-02T16:38:30.287Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-02T16:38:30.287Z
Learning: Applies to AGENTS.md : Document agent implementations and their configurations in AGENTS.md

Applied to files:

  • docs/openapi.md
🔇 Additional comments (1)
docs/openapi.md (1)

4467-4467: Good addition of dedicated rlsapi_v1 configuration documentation.

This aligns the API docs with the dedicated CLA /v1/infer configuration model and keeps endpoint-specific controls scoped correctly.

Based on learnings: relocation of allow_verbose_infer to rlsapi_v1 is intentional and backward compatibility is not required in this repository.

Also applies to: 6665-6679

Comment on lines +4773 to +4792
"404": {
"description": "Resource not found",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/NotFoundResponse"
},
"examples": {
"model": {
"value": {
"detail": {
"cause": "Model with ID gpt-4-turbo is not configured",
"response": "Model not found"
}
}
}
}
}
}
},

⚠️ Potential issue | 🟠 Major

Use a 5xx here instead of 404.

RlsapiV1InferRequest has no model or provider field, so "model not configured" on /v1/infer can only come from server configuration/backend state, not from a client-addressable missing resource. Documenting it as 404 will send clients down the wrong error-handling path; keep this under the existing 5xx responses unless the endpoint starts accepting explicit model IDs.
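
For illustration, the suggested 5xx documentation could look like the sketch below. The `ServiceUnavailableResponse` schema name and the example payload are assumptions for the sketch, not names confirmed to exist in this spec:

```json
"503": {
  "description": "Service unavailable: model backend is not configured",
  "content": {
    "application/json": {
      "schema": {
        "$ref": "#/components/schemas/ServiceUnavailableResponse"
      },
      "examples": {
        "model": {
          "value": {
            "detail": {
              "cause": "Model with ID gpt-4-turbo is not configured",
              "response": "Service unavailable"
            }
          }
        }
      }
    }
  }
}
```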


Comment on lines +9180 to +9208
"anyOf": [
{
"type": "string"
},
{
"items": {
"oneOf": [
{
"$ref": "#/components/schemas/OpenAIResponseInputMessageContentText"
},
{
"$ref": "#/components/schemas/OpenAIResponseInputMessageContentImage"
},
{
"$ref": "#/components/schemas/OpenAIResponseInputMessageContentFile"
}
],
"discriminator": {
"propertyName": "type",
"mapping": {
"input_file": "#/components/schemas/OpenAIResponseInputMessageContentFile",
"input_image": "#/components/schemas/OpenAIResponseInputMessageContentImage",
"input_text": "#/components/schemas/OpenAIResponseInputMessageContentText"
}
}
},
"type": "array"
}
],

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Expected: each mapped schema should require `type` if it is used behind a discriminator.
python - <<'PY'
import json
from pathlib import Path

spec = json.loads(Path("docs/openapi.json").read_text())
schemas = spec["components"]["schemas"]

for name in [
    "OpenAIResponseInputMessageContentText",
    "OpenAIResponseInputMessageContentImage",
    "OpenAIResponseInputMessageContentFile",
]:
    schema = schemas[name]
    print(f"\n## {name}")
    print("required =", schema.get("required"))
    print("additionalProperties =", schema.get("additionalProperties", "<unspecified>"))
    print("type.const =", schema.get("properties", {}).get("type", {}).get("const"))
PY

Repository: lightspeed-core/lightspeed-stack

Length of output: 438


The discriminator-based oneOf is unreliable because type is not required on any mapped schema.

The new output array branch uses a discriminator on type but references OpenAIResponseInputMessageContentText, OpenAIResponseInputMessageContentImage, and OpenAIResponseInputMessageContentFile—none of which list type in their required fields. While all three define type as a constant, the constant constraint does not enforce presence. Without requiring type, clients and validators cannot reliably distinguish between branches; a payload lacking type could ambiguously match multiple schemas.

Add "type" to the required array in each of these three schemas. Optionally, set "additionalProperties": false on Image and File to prevent unexpected fields.
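
A sketch of the suggested fix applied to one of the three schemas; the property list is abbreviated and the exact field set is an assumption, so only the `required` change is the point here:

```json
"OpenAIResponseInputMessageContentText": {
  "type": "object",
  "properties": {
    "type": { "type": "string", "const": "input_text" },
    "text": { "type": "string" }
  },
  "required": ["type", "text"]
}
```

With `type` required (and optionally `additionalProperties: false` on the Image and File schemas), a payload lacking `type` fails validation outright instead of ambiguously matching multiple `oneOf` branches.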


HTMLResponse: The HTML content of the index page, including a heading,
embedded image with the service icon, and links to the API documentation
via Swagger UI and ReDoc.
Handle GET requests to the root ("/") endpoint and returns the static HTML index page.

⚠️ Potential issue | 🟡 Minor

Remove duplicated root-endpoint description.

Line 69 repeats the exact sentence already documented earlier for the same section, which adds noise to generated docs.


| Field | Type | Description |
|-------|------|-------------|
| call_id | string | |
| output | string | |
| output | | |

⚠️ Potential issue | 🟠 Major

Restore explicit type information for OpenAIResponseInputFunctionToolCallOutput.output.

Line 5471 now shows an empty type, which hides the contract for clients. Please document the union explicitly (string or structured content array) instead of leaving it blank.
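
One way the generated table row could document the union; the exact type name to reference is an assumption about this project's naming and would need to match the actual structured-content schema:

```markdown
| Field | Type | Description |
|-------|------|-------------|
| output | string \| OpenAIResponseInputMessageContent[] | Plain text, or an array of structured content items (text, image, file) |
```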


| Field | Type | Description |
|-------|------|-------------|
| allow_verbose_infer | boolean | Allow /v1/infer to return extended metadata (tool_calls, rag_chunks, token_usage) when the client sends "include_metadata": true. Should NOT be enabled in production. If production use is needed, consider RBAC-based access control via an Action.RLSAPI_V1_INFER authorization rule. |
| quota_subject | | Identity field used as the quota subject for /v1/infer. When set, token quota enforcement is enabled for this endpoint. Requires quota_handlers to be configured. "org_id" and "system_id" require rh-identity authentication; falls back to user_id when rh-identity data is unavailable. |

⚠️ Potential issue | 🟠 Major

Fix quota_subject behavior description to match implementation.

Line 6678 states it “falls back to user_id when rh-identity data is unavailable,” but the implementation validates quota_subject against auth mode and can fail fast when required identity data is missing to prevent quota bypass. This doc text is misleading for operators configuring enforcement.

Based on learnings and provided snippets: src/models/config.py:2018-2060 and src/app/endpoints/rlsapi_v1.py:491-526 show strict validation/fail-fast behavior rather than a permissive fallback.
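
The fail-fast behavior described above can be sketched as follows. This is an illustrative sketch only: the function and type names (`validate_quota_subject`, `AuthContext`) are hypothetical and do not correspond to the actual identifiers in src/models/config.py or the RLS API handler:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AuthContext:
    """Hypothetical view of the authenticated request identity."""
    user_id: str
    org_id: Optional[str] = None      # present only with rh-identity auth
    system_id: Optional[str] = None   # present only with rh-identity auth


def validate_quota_subject(quota_subject: str, auth: AuthContext) -> str:
    """Resolve the quota subject, failing fast instead of falling back.

    Missing rh-identity data raises an error rather than silently
    substituting user_id, so quota enforcement cannot be bypassed.
    """
    if quota_subject == "user_id":
        return auth.user_id
    if quota_subject == "org_id":
        if auth.org_id is None:
            raise ValueError("quota_subject=org_id requires rh-identity auth")
        return auth.org_id
    if quota_subject == "system_id":
        if auth.system_id is None:
            raise ValueError("quota_subject=system_id requires rh-identity auth")
        return auth.system_id
    raise ValueError(f"unknown quota_subject: {quota_subject}")
```

The point of the sketch is the contrast with the current doc text: when rh-identity data is absent, the setting is rejected, not downgraded to user_id.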

