# LCORE-1348: Regenerated OpenAPI doc #1486
```diff
@@ -4770,6 +4770,26 @@
               }
             }
           },
+          "404": {
+            "description": "Resource not found",
+            "content": {
+              "application/json": {
+                "schema": {
+                  "$ref": "#/components/schemas/NotFoundResponse"
+                },
+                "examples": {
+                  "model": {
+                    "value": {
+                      "detail": {
+                        "cause": "Model with ID gpt-4-turbo is not configured",
+                        "response": "Model not found"
+                      }
+                    }
+                  }
+                }
+              }
+            }
+          },
           "413": {
             "description": "Prompt is too long",
             "content": {
```
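A client can surface the more specific `cause` field from the new 404 body. A minimal sketch of such handling (the parsing helper is hypothetical, not part of this PR):

```python
import json

def describe_not_found(body: str) -> str:
    """Pull the most specific message out of a NotFoundResponse payload."""
    detail = json.loads(body).get("detail", {})
    # Prefer the detailed cause; fall back to the generic response text.
    return detail.get("cause") or detail.get("response", "resource not found")

# Payload taken from the example added in the diff above.
body = json.dumps({
    "detail": {
        "cause": "Model with ID gpt-4-turbo is not configured",
        "response": "Model not found",
    }
})
print(describe_not_found(body))  # -> Model with ID gpt-4-turbo is not configured
```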
```diff
@@ -9157,7 +9177,35 @@
             "title": "Call Id"
           },
           "output": {
-            "type": "string",
+            "anyOf": [
+              {
+                "type": "string"
+              },
+              {
+                "items": {
+                  "oneOf": [
+                    {
+                      "$ref": "#/components/schemas/OpenAIResponseInputMessageContentText"
+                    },
+                    {
+                      "$ref": "#/components/schemas/OpenAIResponseInputMessageContentImage"
+                    },
+                    {
+                      "$ref": "#/components/schemas/OpenAIResponseInputMessageContentFile"
+                    }
+                  ],
+                  "discriminator": {
+                    "propertyName": "type",
+                    "mapping": {
+                      "input_file": "#/components/schemas/OpenAIResponseInputMessageContentFile",
+                      "input_image": "#/components/schemas/OpenAIResponseInputMessageContentImage",
+                      "input_text": "#/components/schemas/OpenAIResponseInputMessageContentText"
+                    }
+                  }
+                },
+                "type": "array"
+              }
+            ],
```
**Contributor** (comment on lines +9180 to +9208)

🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
# Expected: each mapped schema should require `type` if it is used behind a discriminator.
python - <<'PY'
import json
from pathlib import Path
spec = json.loads(Path("docs/openapi.json").read_text())
schemas = spec["components"]["schemas"]
for name in [
    "OpenAIResponseInputMessageContentText",
    "OpenAIResponseInputMessageContentImage",
    "OpenAIResponseInputMessageContentFile",
]:
    schema = schemas[name]
    print(f"\n## {name}")
    print("required =", schema.get("required"))
    print("additionalProperties =", schema.get("additionalProperties", "<unspecified>"))
    print("type.const =", schema.get("properties", {}).get("type", {}).get("const"))
PY
```

Length of output: 438

The discriminator-based union relies on the `type` property to select a mapped schema, but the new mapped content schemas do not list `type` under `required`. Add `type` to each mapped schema's `required` list so the discriminator mapping is unambiguous.
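If the mapped schemas indeed omit `type` from `required`, the fix the comment asks for can be sketched as a small patch over the schema dictionaries (the helper name `require_type` is illustrative, not from the repository):

```python
MAPPED_SCHEMAS = [
    "OpenAIResponseInputMessageContentText",
    "OpenAIResponseInputMessageContentImage",
    "OpenAIResponseInputMessageContentFile",
]

def require_type(schemas: dict) -> dict:
    """Ensure each discriminator-mapped schema lists 'type' as required."""
    for name in MAPPED_SCHEMAS:
        required = schemas.setdefault(name, {}).setdefault("required", [])
        if "type" not in required:
            required.append("type")
    return schemas

# Minimal demonstration on stand-in schema stubs.
demo = {name: {"required": ["text"]} for name in MAPPED_SCHEMAS}
fixed = require_type(demo)
assert all("type" in fixed[name]["required"] for name in MAPPED_SCHEMAS)
```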
```diff
             "title": "Output"
           },
           "type": {
```
```diff
@@ -66,6 +66,7 @@ Returns:
         HTMLResponse: The HTML content of the index page, including a heading,
         embedded image with the service icon, and links to the API documentation
         via Swagger UI and ReDoc.
+    Handle GET requests to the root ("/") endpoint and returns the static HTML index page.
```
**Contributor**

Remove duplicated root-endpoint description. Line 69 repeats the exact sentence already documented earlier for the same section, which adds noise to generated docs.
```diff
@@ -4463,6 +4464,7 @@ Global service configuration.
 | a2a_state | | Configuration for A2A protocol persistent state storage. |
 | quota_handlers | | Quota handlers configuration |
 | azure_entra_id | | |
+| rlsapi_v1 | | Configuration for the rlsapi v1 /infer endpoint used by the RHEL Lightspeed Command Line Assistant (CLA). |
 | splunk | | Splunk HEC configuration for sending telemetry events. |
 | deployment_environment | string | Deployment environment name (e.g., 'development', 'staging', 'production'). Used in telemetry events. |
 | rag | | Configuration for all RAG strategies (inline and tool-based). |
```
```diff
@@ -4727,7 +4729,6 @@ Service customization.
 | agent_card_path | | |
 | agent_card_config | | |
 | custom_profile | | |
-| allow_verbose_infer | boolean | |
 
 
 ## DatabaseConfiguration
```
```diff
@@ -5467,7 +5468,7 @@ This represents the output of a function call that gets passed back to the model
 | Field | Type | Description |
 |-------|------|-------------|
 | call_id | string | |
-| output | string | |
+| output | | |
```
**Contributor**

Restore explicit type information for `output`. Line 5471 now shows an empty type, which hides the contract for clients. Please document the union explicitly (string or structured content array) instead of leaving it blank.
```diff
 | type | string | |
 | id | | |
 | status | | |
```
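The now-empty type cell corresponds to the `anyOf` union added in the OpenAPI diff: `output` is either a bare string or an array of typed content parts. A hedged validation sketch (the allowed `type` values come from the discriminator mapping; the helper itself is illustrative, not from the repository):

```python
ALLOWED_PART_TYPES = {"input_text", "input_image", "input_file"}

def is_valid_output(output) -> bool:
    """Accept a plain string, or a list of content parts keyed by 'type'."""
    if isinstance(output, str):
        return True
    if isinstance(output, list):
        return all(
            isinstance(part, dict) and part.get("type") in ALLOWED_PART_TYPES
            for part in output
        )
    return False

assert is_valid_output("tool finished")
assert is_valid_output([{"type": "input_text", "text": "hello"}])
assert not is_valid_output([{"text": "missing discriminator"}])
assert not is_valid_output(42)
```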
```diff
@@ -6661,6 +6662,22 @@ Attributes:
 | version | string | Command line assistant version |
 
 
+## RlsapiV1Configuration
+
+
+Configuration for the rlsapi v1 /infer endpoint.
+
+Settings specific to the RHEL Lightspeed Command Line Assistant (CLA)
+stateless inference endpoint. Kept separate from shared configuration
+sections so that CLA-specific options do not affect other endpoints.
+
+
+| Field | Type | Description |
+|-------|------|-------------|
+| allow_verbose_infer | boolean | Allow /v1/infer to return extended metadata (tool_calls, rag_chunks, token_usage) when the client sends "include_metadata": true. Should NOT be enabled in production. If production use is needed, consider RBAC-based access control via an Action.RLSAPI_V1_INFER authorization rule. |
+| quota_subject | | Identity field used as the quota subject for /v1/infer. When set, token quota enforcement is enabled for this endpoint. Requires quota_handlers to be configured. "org_id" and "system_id" require rh-identity authentication; falls back to user_id when rh-identity data is unavailable. |
```
**Contributor**

Fix the `quota_subject` description. Line 6678 states it "falls back to user_id when rh-identity data is unavailable," but the implementation validates the configured subject and rejects requests lacking rh-identity data rather than silently falling back; update the description to match the actual behavior.
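For reference, the two `RlsapiV1Configuration` fields documented above would appear in service configuration roughly as follows (a sketch: the YAML key names mirror the documented field names, but the exact file layout is an assumption, not taken from this PR):

```yaml
rlsapi_v1:
  # Extended metadata (tool_calls, rag_chunks, token_usage) stays disabled,
  # as recommended for production.
  allow_verbose_infer: false
  # Quota enforcement keyed on the org_id identity field; requires
  # quota_handlers to be configured and rh-identity authentication.
  quota_subject: org_id
```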
```diff
 
 
 ## RlsapiV1Context
```
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Use a 5xx here instead of 404.
RlsapiV1InferRequesthas nomodelorproviderfield, so "model not configured" on/v1/infercan only come from server configuration/backend state, not from a client-addressable missing resource. Documenting it as 404 will send clients down the wrong error-handling path; keep this under the existing 5xx responses unless the endpoint starts accepting explicit model IDs.🤖 Prompt for AI Agents