UN-3104 feat: Add Unstract Adapter Extension Skill and Azure AI Foundry Adapter#1721

Open
hari-kuriakose wants to merge 15 commits into main from feat/adater-ops-skill
Conversation

@hari-kuriakose hari-kuriakose commented Dec 23, 2025

What

  • Add new Claude Code skill for automating adapter development workflows (adapter-ops)
  • Add Azure AI Foundry LLM adapter with full configuration schema
  • Update existing adapter JSON schemas (OpenAI, Azure, Bedrock, Mistral, Ollama) with improved parameter specifications

Why

  • Developer Productivity: Reduce time to create new adapters from hours to minutes with automated scaffolding scripts
  • Consistency: Ensure all new adapters follow the same patterns and conventions
  • Azure AI Foundry Support: Enable users to connect to Azure AI Foundry models via LiteLLM
  • Parameter Accuracy: Improve adapter configurations with correct LiteLLM prefixes and parameter types

How

  1. Skill Structure:

    • SKILL.md: Comprehensive documentation with workflows and quick reference
    • scripts/init_llm_adapter.py: Automated LLM adapter creation with logo support
    • scripts/init_embedding_adapter.py: Automated embedding adapter creation
    • scripts/manage_models.py: Add/remove models from existing adapters
    • scripts/check_adapter_updates.py: Compare adapters against LiteLLM features
    • templates/: Jinja2 templates for adapter files
    • references/: Pattern documentation and JSON schema guide
  2. Azure AI Foundry Adapter:

    • New adapter class in llm1/azure_ai_foundry.py
    • JSON schema in llm1/static/azure_ai_foundry.json
    • Parameter class AzureAIFoundryLLMParameters in base1.py
    • Icon in frontend/public/icons/adapter-icons/AzureAIFoundry.png
  3. Existing Adapter Updates:

    • Updated model prefixes for LiteLLM compatibility
    • Added missing parameters (temperature, max_tokens, etc.)
    • Improved JSON schema validation patterns
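
As a rough sketch of the adapter pattern described above (the metadata method names mirror those in base1.py; the provider string, UUID value, and stand-in base class here are illustrative assumptions, not the shipped code):

```python
import uuid

# Stand-in so this sketch runs on its own; in the SDK the base class
# comes from unstract.sdk1.adapters.base1.
class BaseAdapter:
    """Placeholder for the real base class."""

# A fixed UUID string; real adapters hardcode their own UUID4.
ADAPTER_UUID = str(uuid.UUID(int=0))

class AzureAIFoundryLLMAdapter(BaseAdapter):
    """Sketch of the static metadata methods every adapter implements."""

    @staticmethod
    def get_id() -> str:
        # Adapter IDs follow the "provider|uuid" format.
        return f"azure_ai_foundry|{ADAPTER_UUID}"

    @staticmethod
    def get_name() -> str:
        return "Azure AI Foundry"

    @staticmethod
    def get_provider() -> str:
        return "azure_ai_foundry"

    @staticmethod
    def get_icon() -> str:
        return "/icons/adapter-icons/AzureAIFoundry.png"

print(AzureAIFoundryLLMAdapter.get_id())
```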

Can this PR break any existing features? If yes, please list possible items. If no, please explain why. (PS: Admins do not merge the PR without this section filled)

No breaking changes expected:

  • The skill files are additive (new .claude/skills/ directory)
  • Azure AI Foundry is a new adapter and does not modify existing adapter behavior
  • JSON schema updates add new fields but maintain backward compatibility with existing configurations
  • Parameter class updates add validation but use safe defaults

Database Migrations

  • None required

Env Config

  • None required

Relevant Docs

Related Issues or PRs

  • N/A

Dependencies Versions

  • No new dependencies added
  • Skill scripts use standard library (pathlib, json, uuid, argparse)

Notes on Testing

  • Skill scripts tested locally for LLM and embedding adapter creation
  • Azure AI Foundry adapter validated against LiteLLM completion API
  • JSON schema changes validated in Unstract UI

Screenshots

N/A (backend changes only)

Checklist

I have read and understood the Contribution Guidelines.

* feat: add azure ai foundry adapter
* feat: update parameters in various adapters
coderabbitai bot commented Dec 23, 2025

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Summary by CodeRabbit

Release Notes

  • New Features

    • Added Azure AI Foundry LLM adapter support
    • Enabled reasoning capabilities for Mistral models with configurable effort levels
    • Introduced JSON mode for Ollama to constrain output to valid JSON
    • Added dimensions parameter support for OpenAI and Azure embedding models
    • Enabled AWS profile-based authentication for Bedrock adapter
  • Improvements

    • Updated default models across Mistral and OpenAI adapters
    • Enhanced AWS Bedrock with optional profile authentication option
  • Documentation

    • Added comprehensive adapter development guides and templates

Walkthrough

This pull request introduces a comprehensive adapter operations framework for the Unstract SDK, including new LLM and embedding parameter classes with enhanced capabilities, a complete scaffolding and management system for creating/updating adapters, extensive documentation for adapter patterns, and updated dependency constraints across multiple packages.

Changes

  • Parameter and Adapter Classes — unstract/sdk1/src/unstract/sdk1/adapters/base1.py, unstract/sdk1/src/unstract/sdk1/adapters/llm1/azure_ai_foundry.py
    Added fields to multiple parameter classes (AWSBedrockLLMParameters: aws_profile_name, model_id; MistralLLMParameters: reasoning_effort; OllamaLLMParameters: json_mode; OpenAI/AzureOpenAI embedding: dimensions). Introduced the new AzureAIFoundryLLMParameters class with validation and a new AzureAIFoundryLLMAdapter with metadata methods.
  • Adapter JSON Schemas — unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/*, unstract/sdk1/src/unstract/sdk1/adapters/embedding1/static/*
    Updated schemas for the Azure, OpenAI, Bedrock, Mistral, and Ollama adapters. Added new properties (dimensions, json_mode, reasoning_effort, aws_profile_name, model_id), modified defaults (Mistral model default, OpenAI embedding default), and introduced conditional schema logic for Mistral reasoning controls.
  • Adapter Operations Documentation — .claude/skills/adapter-ops/SKILL.md, .claude/skills/adapter-ops/references/*
    Added comprehensive documentation: a SKILL.md overview, adapter_patterns.md with registration flows and code examples, json_schema_guide.md for schema design, and provider_capabilities.md detailing the LiteLLM provider feature matrix.
  • Adapter Template Files — .claude/skills/adapter-ops/assets/templates/*
    Created reusable templates for generating new adapters: llm_adapter.py.template, llm_parameters.py.template, llm_schema.json.template, embedding_adapter.py.template, embedding_parameters.py.template, and embedding_schema.json.template, with placeholder-driven customization.
  • Adapter Management Scripts — .claude/skills/adapter-ops/scripts/*
    Added init_llm_adapter.py and init_embedding_adapter.py for scaffolding new adapters (file generation, logo handling, parameter class stubs), check_adapter_updates.py for analyzing adapters against a LiteLLM capabilities catalog, and manage_models.py for updating model configurations in schemas.
  • Dependency Updates — unstract/sdk1/pyproject.toml, prompt-service/pyproject.toml
    Relaxed version constraints for multiple packages: litellm (1.80.0 → >=1.81.7), llama-index (==0.13.2 → >=0.14.13), and various llama-index vector store integrations, to allow patch and minor version updates.
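
The parameter-class additions can be illustrated with a small stand-in (a plain dataclass; the real classes in base1.py use the SDK's own base types, and the accepted effort values here are an assumption):

```python
from dataclasses import dataclass
from typing import Optional

VALID_EFFORTS = {"low", "medium", "high"}  # assumed set of effort levels

@dataclass
class MistralLLMParameters:
    """Sketch of a parameter class with the new reasoning_effort field."""
    model: str = "mistral-large-latest"
    reasoning_effort: Optional[str] = None  # safe default: feature off

    def __post_init__(self) -> None:
        # Validation in the spirit described above: reject invalid values early.
        if self.reasoning_effort is not None and self.reasoning_effort not in VALID_EFFORTS:
            raise ValueError(f"reasoning_effort must be one of {sorted(VALID_EFFORTS)}")

params = MistralLLMParameters(reasoning_effort="medium")
print(params.model, params.reasoning_effort)
```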

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 71.60%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Title check — ✅ Passed: the title clearly summarizes the main additions: a Claude Code skill for adapter development workflows and a new Azure AI Foundry adapter.
  • Description check — ✅ Passed: the description comprehensively addresses all required template sections: What (new skill and adapter), Why (productivity and consistency), How (detailed implementation), breaking changes (none expected), database/env/dependencies (none), testing notes, and checklist confirmation.



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (13)
.claude/skills/unstract-adapter-extension/assets/templates/llm_schema.json.template (1)

34-40: Consider using "integer" type instead of "number" with multipleOf: 1.

For integer fields like max_tokens, using "type": "integer" is more idiomatic than "type": "number" with "multipleOf": 1. This applies to max_tokens (line 35), max_retries (line 42), and timeout (line 50) as well.

🔎 Proposed refactor
     "max_tokens": {
-      "type": "number",
+      "type": "integer",
       "minimum": 0,
-      "multipleOf": 1,
       "title": "Maximum Output Tokens",
       "description": "Maximum number of output tokens to limit LLM replies."
     },

Apply the same pattern to max_retries and timeout fields.
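
The practical difference: "number" with multipleOf: 1 still admits float-typed values such as 5.0, while "integer" constrains the JSON type itself. A quick illustration of the multipleOf semantics (a hand-rolled check, not a full JSON Schema validator):

```python
def satisfies_multiple_of_one(value) -> bool:
    """Mimic JSON Schema's {"type": "number", "multipleOf": 1} check."""
    # bool is excluded because JSON booleans are not numbers.
    return (
        isinstance(value, (int, float))
        and not isinstance(value, bool)
        and float(value) % 1 == 0
    )

print(satisfies_multiple_of_one(5))    # True
print(satisfies_multiple_of_one(5.0))  # True (a float, yet accepted)
print(satisfies_multiple_of_one(2.5))  # False
```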

.claude/skills/unstract-adapter-extension/scripts/init_embedding_adapter.py (3)

309-342: Consider more specific exception handling for image processing.

Line 339 catches a broad Exception. While the function handles this gracefully with a print statement, consider catching specific PIL exceptions (e.g., PIL.UnidentifiedImageError, OSError) for better error diagnosis.

🔎 Suggested improvement
     except ImportError:
         if image_data[:8] == b"\x89PNG\r\n\x1a\n":
             output_path.write_bytes(image_data)
             return True
         print("  Note: Install Pillow for better image processing: pip install Pillow")
         return False
-    except Exception as e:
+    except (OSError, ValueError) as e:
         print(f"  Error processing image: {e}")
         return False

355-415: LGTM with minor suggestion: Local logo processing mirrors download logic.

The function appropriately handles SVG conversion, raster resizing, and missing dependencies. Same suggestion as download_and_process_logo: consider more specific exception handling at line 413.


478-490: Consider using copy.deepcopy for schema template cloning.

Line 478 uses json.loads(json.dumps(EMBEDDING_SCHEMA_TEMPLATE)) to clone the template dict. While this ensures JSON serializability, copy.deepcopy() would be more idiomatic and efficient.

🔎 Suggested refactor

Add import:

import copy

Then replace:

-    schema = json.loads(json.dumps(EMBEDDING_SCHEMA_TEMPLATE))
+    schema = copy.deepcopy(EMBEDDING_SCHEMA_TEMPLATE)
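
The difference is easy to demonstrate: both approaches produce an independent copy for JSON-compatible data, but deepcopy also preserves non-JSON types (such as tuples) and avoids the serialize/parse round trip:

```python
import copy
import json

template = {"properties": {"model": {"enum": ["a", "b"]}}}

via_json = json.loads(json.dumps(template))
via_deepcopy = copy.deepcopy(template)

# Both are deep copies: mutating the clone leaves the template intact.
via_deepcopy["properties"]["model"]["enum"].append("c")
print(template["properties"]["model"]["enum"])  # ['a', 'b']
print(via_json == template)                     # True

# Unlike the JSON round trip, deepcopy preserves non-JSON types.
print(type(copy.deepcopy((1, 2))))              # <class 'tuple'>
print(type(json.loads(json.dumps((1, 2)))))     # <class 'list'>
```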
unstract/sdk1/src/unstract/sdk1/adapters/embedding1/static/openai.json (1)

33-39: Consider using "integer" type for dimensions field.

Similar to the template file, this field uses "type": "number" with "multipleOf": 1 to represent integers. Using "type": "integer" would be more idiomatic.

🔎 Proposed refactor
     "dimensions": {
-      "type": "number",
+      "type": "integer",
       "minimum": 1,
-      "multipleOf": 1,
       "title": "Dimensions",
       "description": "Output embedding dimensions. Only supported by text-embedding-3-* models. Leave empty for default dimensions (1536 for small, 3072 for large)."
     },
.claude/skills/unstract-adapter-extension/assets/templates/embedding_schema.json.template (1)

35-42: Consider using "integer" type for numeric fields.

Both embed_batch_size and timeout use "type": "number" with "multipleOf": 1. Consider using "type": "integer" for clarity, consistent with the suggestion for the LLM template.

🔎 Proposed refactor
     "embed_batch_size": {
-      "type": "number",
+      "type": "integer",
       "minimum": 1,
-      "multipleOf": 1,
       "title": "Embed Batch Size",
       "default": 10,
       "description": "Number of texts to embed per batch."
     },
     "timeout": {
-      "type": "number",
+      "type": "integer",
       "minimum": 0,
-      "multipleOf": 1,
       "title": "Timeout",
       "default": 240,
       "description": "Request timeout in seconds."
     }
unstract/sdk1/src/unstract/sdk1/adapters/embedding1/static/azure.json (1)

23-23: LGTM: Azure embedding schema updated with dimensions support.

The model description update and new dimensions property align well with the OpenAI schema changes. The note about text-embedding-3-* model support is helpful.

Same optional refactor suggestion applies: consider using "type": "integer" instead of "type": "number" with "multipleOf": 1 for the dimensions field.

Also applies to: 50-56

.claude/skills/unstract-adapter-extension/references/provider_capabilities.md (1)

45-90: Optional: Add language specifiers to fenced code blocks.

The parameter specification blocks (lines 45-90) lack language specifiers in their fenced code blocks. While this doesn't affect functionality, adding a specifier like text or yaml would satisfy linting tools.

Example:

 ### OpenAI LLM
-```
+```text
 api_key, api_base, model, max_tokens, temperature, top_p
 enable_reasoning, reasoning_effort (o1/o3 models)
 ```
.claude/skills/unstract-adapter-extension/references/adapter_patterns.md (1)

7-25: Consider adding a language identifier to the code block.

The fenced code block for the architecture overview lacks a language specification. While this is a text diagram, adding a language identifier improves markdown rendering consistency.

Suggested fix
-```
+```text
 unstract/sdk1/src/unstract/sdk1/adapters/
 ├── base1.py              # Parameter classes & base adapter
.claude/skills/unstract-adapter-extension/scripts/manage_models.py (2)

40-43: Consider adding error handling for malformed JSON.

load_schema doesn't handle json.JSONDecodeError. If a schema file is corrupted, the script will crash with an unhelpful traceback.

🔎 Proposed fix
 def load_schema(schema_path: Path) -> dict:
     """Load and parse a JSON schema file."""
-    with open(schema_path) as f:
-        return json.load(f)
+    try:
+        with open(schema_path) as f:
+            return json.load(f)
+    except json.JSONDecodeError as e:
+        print(f"Error: Invalid JSON in {schema_path}: {e}")
+        sys.exit(1)

115-123: Setting default to a model not in enum could cause UI validation issues.

set_default_model allows setting a default that may not exist in the enum list. If the schema has an enum constraint, setting a default outside that enum could cause validation errors in the UI.

🔎 Proposed improvement
 def set_default_model(schema: dict, model: str) -> dict:
     """Set the default model."""
     if "properties" not in schema:
         schema["properties"] = {}
     if "model" not in schema["properties"]:
         schema["properties"]["model"] = {"type": "string", "title": "Model"}

+    # Warn if setting default to a value not in enum
+    existing_enum = schema["properties"]["model"].get("enum", [])
+    if existing_enum and model not in existing_enum:
+        print(f"Warning: '{model}' is not in the enum list. Adding it.")
+        existing_enum.append(model)
+        schema["properties"]["model"]["enum"] = existing_enum
+
     schema["properties"]["model"]["default"] = model
     return schema
.claude/skills/unstract-adapter-extension/scripts/init_llm_adapter.py (2)

345-347: Avoid catching bare Exception - narrows exception handling.

Catching all exceptions hides specific errors and makes debugging harder. Consider catching specific exceptions like OSError or PIL.UnidentifiedImageError.

🔎 Proposed fix
-    except Exception as e:
-        print(f"  Error processing image: {e}")
-        return False
+    except (OSError, ValueError) as e:
+        print(f"  Error processing image: {e}")
+        return False

464-466: Reconsider default parameter class choice.

Defaulting to AnyscaleLLMParameters when no parameter class is specified seems arbitrary. Consider defaulting to a more generic base class or requiring explicit specification.

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to Reviews > Disable Cache setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 09f62be and 182464b.

⛔ Files ignored due to path filters (1)
  • frontend/public/icons/adapter-icons/AzureAIFoundry.png is excluded by !**/*.png
📒 Files selected for processing (22)
  • .claude/skills/unstract-adapter-extension/SKILL.md
  • .claude/skills/unstract-adapter-extension/assets/templates/embedding_adapter.py.template
  • .claude/skills/unstract-adapter-extension/assets/templates/embedding_parameters.py.template
  • .claude/skills/unstract-adapter-extension/assets/templates/embedding_schema.json.template
  • .claude/skills/unstract-adapter-extension/assets/templates/llm_adapter.py.template
  • .claude/skills/unstract-adapter-extension/assets/templates/llm_parameters.py.template
  • .claude/skills/unstract-adapter-extension/assets/templates/llm_schema.json.template
  • .claude/skills/unstract-adapter-extension/references/adapter_patterns.md
  • .claude/skills/unstract-adapter-extension/references/json_schema_guide.md
  • .claude/skills/unstract-adapter-extension/references/provider_capabilities.md
  • .claude/skills/unstract-adapter-extension/scripts/check_adapter_updates.py
  • .claude/skills/unstract-adapter-extension/scripts/init_embedding_adapter.py
  • .claude/skills/unstract-adapter-extension/scripts/init_llm_adapter.py
  • .claude/skills/unstract-adapter-extension/scripts/manage_models.py
  • unstract/sdk1/src/unstract/sdk1/adapters/base1.py
  • unstract/sdk1/src/unstract/sdk1/adapters/embedding1/static/azure.json
  • unstract/sdk1/src/unstract/sdk1/adapters/embedding1/static/openai.json
  • unstract/sdk1/src/unstract/sdk1/adapters/llm1/azure_ai_foundry.py
  • unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/azure_ai_foundry.json
  • unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/bedrock.json
  • unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/mistral.json
  • unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/ollama.json
🧰 Additional context used
🧬 Code graph analysis (3)
unstract/sdk1/src/unstract/sdk1/adapters/llm1/azure_ai_foundry.py (2)
unstract/sdk1/src/unstract/sdk1/adapters/base1.py (8)
  • AzureAIFoundryLLMParameters (691-715)
  • BaseAdapter (76-117)
  • get_id (81-82)
  • get_name (86-87)
  • get_description (91-92)
  • get_provider (96-97)
  • get_icon (101-102)
  • get_adapter_type (116-117)
unstract/sdk1/src/unstract/sdk1/constants.py (1)
  • AdapterTypes (16-22)
.claude/skills/unstract-adapter-extension/scripts/init_embedding_adapter.py (1)
.claude/skills/unstract-adapter-extension/scripts/init_llm_adapter.py (7)
  • to_class_name (148-166)
  • to_icon_name (169-174)
  • fetch_url (177-187)
  • search_potential_logo_sources (190-246)
  • download_and_process_logo (249-347)
  • copy_logo (350-420)
  • main (542-649)
.claude/skills/unstract-adapter-extension/scripts/init_llm_adapter.py (1)
.claude/skills/unstract-adapter-extension/scripts/init_embedding_adapter.py (1)
  • to_class_name (147-163)
🪛 LanguageTool
.claude/skills/unstract-adapter-extension/SKILL.md

[grammar] ~6-~6: Ensure spelling is correct
Context: ...and self-hosted models (Ollama). --- # Unstract Adapter Extension Skill This skill pro...

(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)

🪛 markdownlint-cli2 (0.18.1)
.claude/skills/unstract-adapter-extension/references/provider_capabilities.md

45-45: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


51-51: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


57-57: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


63-63: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


70-70: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


76-76: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


82-82: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


88-88: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


138-138: Bare URL used

(MD034, no-bare-urls)


139-139: Bare URL used

(MD034, no-bare-urls)


140-140: Bare URL used

(MD034, no-bare-urls)


141-141: Bare URL used

(MD034, no-bare-urls)


142-142: Bare URL used

(MD034, no-bare-urls)


143-143: Bare URL used

(MD034, no-bare-urls)


144-144: Bare URL used

(MD034, no-bare-urls)


145-145: Bare URL used

(MD034, no-bare-urls)

.claude/skills/unstract-adapter-extension/references/adapter_patterns.md

24-24: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


42-42: Tables should be surrounded by blank lines

(MD058, blanks-around-tables)


301-301: Bare URL used

(MD034, no-bare-urls)

🪛 Ruff (0.14.10)
.claude/skills/unstract-adapter-extension/scripts/init_embedding_adapter.py

177-177: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


178-178: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


209-209: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


210-210: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


229-229: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


230-230: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


280-280: subprocess call: check for execution of untrusted input

(S603)


281-293: Starting a process with a partial executable path

(S607)


300-300: Consider moving this statement to an else block

(TRY300)


331-331: Consider moving this statement to an else block

(TRY300)


339-339: Do not catch blind exception: Exception

(BLE001)


360-360: subprocess call: check for execution of untrusted input

(S603)


361-373: Starting a process with a partial executable path

(S607)


380-380: Consider moving this statement to an else block

(TRY300)


406-406: Consider moving this statement to an else block

(TRY300)


413-413: Do not catch blind exception: Exception

(BLE001)

.claude/skills/unstract-adapter-extension/scripts/init_llm_adapter.py

183-183: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


184-184: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


215-215: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


216-216: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


235-235: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


236-236: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


286-286: subprocess call: check for execution of untrusted input

(S603)


287-299: Starting a process with a partial executable path

(S607)


306-306: Consider moving this statement to an else block

(TRY300)


337-337: Consider moving this statement to an else block

(TRY300)


345-345: Do not catch blind exception: Exception

(BLE001)


366-366: subprocess call: check for execution of untrusted input

(S603)


367-379: Starting a process with a partial executable path

(S607)


386-386: Consider moving this statement to an else block

(TRY300)


412-412: Consider moving this statement to an else block

(TRY300)


419-419: Do not catch blind exception: Exception

(BLE001)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build
🔇 Additional comments (30)
.claude/skills/unstract-adapter-extension/scripts/init_embedding_adapter.py (6)

147-169: LGTM: String conversion utilities are well-implemented.

The special-case handling for known providers and the fallback logic are appropriate.


171-182: LGTM: URL fetching is properly secured.

The function includes appropriate error handling, timeout protection, and a User-Agent header. The static analysis warning (S310) is a false positive for this legitimate logo-fetching use case.


184-241: LGTM: Logo search functionality is well-designed.

The separation of search from download, use of HEAD requests, and appropriate error handling are all good practices.


269-308: LGTM: SVG conversion with ImageMagick is properly secured.

The subprocess call uses a list of arguments (not shell=True), preventing command injection. Input paths are controlled, and missing dependency handling is appropriate.


510-522: Excellent design: Auto-logo search doesn't auto-download.

The decision to search for logos but require explicit user confirmation (via --logo-url) before downloading is a good security and user-experience practice. This prevents unintended downloads of incorrect logos.


532-643: LGTM: CLI interface is well-designed.

The argument parsing, dry-run support, error reporting, and next-steps guidance provide a great user experience. Exit code handling is correct.

unstract/sdk1/src/unstract/sdk1/adapters/embedding1/static/openai.json (1)

18-19: LGTM: Default model updated to newer version.

Updating the default from text-embedding-ada-002 to text-embedding-3-small aligns with current OpenAI recommendations.

unstract/sdk1/src/unstract/sdk1/adapters/llm1/azure_ai_foundry.py (1)

1-40: LGTM: Azure AI Foundry adapter correctly implements all required methods.

The adapter class properly:

  • Inherits from both AzureAIFoundryLLMParameters and BaseAdapter
  • Implements all required static methods with appropriate decorators
  • Uses correct ID format (provider|uuid)
  • Returns proper metadata structure
  • Follows naming conventions
.claude/skills/unstract-adapter-extension/references/provider_capabilities.md (1)

1-145: LGTM: Comprehensive provider capabilities reference.

The documentation provides a well-organized reference for provider features, parameters, authentication methods, and model prefixes. The tables are clear and the feature definitions are helpful.

.claude/skills/unstract-adapter-extension/SKILL.md (1)

1-353: LGTM: Comprehensive and well-structured skill documentation.

This documentation provides excellent coverage of:

  • Supported operations with clear command examples
  • Detailed workflows for each operation type
  • Validation checklists to prevent common errors
  • Maintenance procedures for keeping adapters current
  • Troubleshooting guidance

The inclusion of practical tips (like the GitHub raw URL guidance at lines 76-78) and the emphasis on logo verification (auto-search doesn't auto-download) demonstrate thoughtful design.

.claude/skills/unstract-adapter-extension/assets/templates/llm_schema.json.template (1)

29-29: Use standard JSON Schema format "uri" instead of non-standard "url".

The JSON Schema specification defines standard formats for resource identifiers as "uuid, uri, uri-reference, iri, iri-reference". "url" is not a recognized format. Use "uri" for API base URLs to ensure schema compliance.

-      "format": "url",
+      "format": "uri",
unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/azure_ai_foundry.json (1)

1-59: LGTM!

The JSON schema is well-structured and follows the established patterns from other adapter schemas in the codebase. The required fields (adapter_name, api_key, api_base) are appropriate for Azure AI Foundry, and the use of format: "password" for the API key and format: "uri" for the endpoint URL provides proper UI hints. The integer-like fields correctly use "type": "number" with "multipleOf": 1 for consistency with other adapter schemas.

.claude/skills/unstract-adapter-extension/references/json_schema_guide.md (1)

1-549: LGTM!

This is a comprehensive and well-organized reference guide for creating adapter JSON schemas. The documentation covers all essential field types, provides clear examples for conditional fields using allOf/if/then, and includes practical complete examples for various adapter types (simple LLM, cloud provider, Azure-style, self-hosted, embedding). The best practices section offers valuable guidance for schema authors.

.claude/skills/unstract-adapter-extension/references/adapter_patterns.md (1)

36-562: LGTM!

The adapter patterns reference is comprehensive and well-structured. It covers the complete LLM adapter implementation with parameter classes, adapter classes, and JSON schemas. The patterns for reasoning, thinking, conditional fields, and optional credentials are clearly documented with practical code examples. The "Common Mistakes" section is particularly valuable for developers new to adapter development.

.claude/skills/unstract-adapter-extension/assets/templates/llm_parameters.py.template (1)

1-61: LGTM!

The LLM parameters template is well-designed and follows the documented patterns. The validate_model method correctly implements idempotent prefix handling to avoid double-prefixing. The template provides clear placeholder documentation and appropriate comments for customization points. The structure aligns with the patterns documented in adapter_patterns.md.
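
The idempotent prefix handling referred to above can be sketched as follows (the helper name and the "openai/" prefix are illustrative; the template's actual validate_model method belongs to the parameter class):

```python
def ensure_prefix(model: str, prefix: str) -> str:
    """Prepend the LiteLLM provider prefix only when it is not already there."""
    return model if model.startswith(prefix) else f"{prefix}{model}"

# Running the check twice yields the same result, so re-validation is safe.
once = ensure_prefix("gpt-4o", "openai/")
twice = ensure_prefix(once, "openai/")
print(once)   # openai/gpt-4o
print(twice)  # openai/gpt-4o
```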

.claude/skills/unstract-adapter-extension/assets/templates/embedding_parameters.py.template (1)

1-57: LGTM!

The embedding parameters template correctly differentiates from the LLM template by not applying a model prefix (as noted in the docstring, embedding models typically don't need prefixes). The embed_batch_size field with a default of 10 is appropriate for embedding adapters. The template structure is consistent with the LLM parameters template and follows established patterns.

unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/bedrock.json (1)

22-48: LGTM!

The schema updates appropriately support multiple AWS authentication methods. The updated descriptions for aws_access_key_id and aws_secret_access_key clarify that these can be left empty when using AWS Profile or IAM role authentication. The new aws_profile_name field enables AWS SSO authentication, and the model_id field for Application Inference Profile ARN supports cost tracking. This aligns with the patterns documented in the "Optional Credentials Pattern" section of the adapter patterns reference.

.claude/skills/unstract-adapter-extension/assets/templates/embedding_adapter.py.template (1)

1-50: LGTM!

The embedding adapter template is well-structured and follows the documented patterns. The inheritance order (${PARAM_CLASS} first, then BaseAdapter) is correct as noted in the adapter patterns documentation. All required static methods are present with appropriate return types and the metadata structure matches the expected format for adapter registration.

Also applies to: 55-59

unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/mistral.json (1)

16-99: LGTM!

The schema correctly implements support for Magistral Small 2506 and Magistral Medium models, which are Mistral's first reasoning models released in two variants: a 24B parameter open-source version and a more powerful enterprise version. The conditional allOf block properly gates the reasoning_effort field, showing it only when enable_reasoning is true—an appropriate pattern for these models' reasoning capabilities. The updated model default to mistral-large-latest and inclusion of the Magistral model examples are correct.
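
The conditional gating pattern described here looks roughly like this (shown as a Python dict for readability; the property names come from the schema summary, while the specific effort values are assumptions):

```python
import json

# Sketch of a JSON Schema allOf/if/then block that reveals
# reasoning_effort only when enable_reasoning is true.
conditional_block = {
    "allOf": [
        {
            "if": {"properties": {"enable_reasoning": {"const": True}}},
            "then": {
                "properties": {
                    "reasoning_effort": {
                        "type": "string",
                        "enum": ["low", "medium", "high"],
                    }
                }
            },
        }
    ]
}

# The block is plain JSON-serializable data.
print(list(conditional_block))  # ['allOf']
```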

unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/ollama.json (1)

28-64: LGTM!

The new properties (max_tokens, temperature, json_mode) and the context_window title fix are well-structured, properly validated, and align with LiteLLM's Ollama parameters. The json_mode boolean provides a user-friendly abstraction that the OllamaLLMParameters.validate() method correctly maps to response_format.

.claude/skills/unstract-adapter-extension/scripts/manage_models.py (1)

158-164: Empty enum list is handled safely.

At first glance, accessing model_prop["enum"][0] on line 161 looks like it could raise an IndexError when model_prop["enum"] is an empty list []. On closer review, the ternary guard if model_prop["enum"] prevents this: when "default" is absent and the enum is empty, default falls back to "", which is safe.
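
A minimal reproduction of the guarded default selection (function name hypothetical):

```python
def pick_default(model_prop: dict) -> str:
    """Choose a default model: explicit default, else first enum entry,
    else empty string when the enum is missing or empty."""
    if "default" in model_prop:
        return model_prop["default"]
    # Ternary guard: only index the enum when it is non-empty.
    return model_prop["enum"][0] if model_prop.get("enum") else ""
```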

.claude/skills/unstract-adapter-extension/scripts/check_adapter_updates.py (3)

373-378: list_adapters correctly filters Python files.

The logic uses py_file.name.startswith("__") which checks the filename string, not the Path object directly. This is correct behavior.


49-51: The LITELLM_FEATURES dict requires regular maintenance to reflect current model availability. While "o4-mini" and "gpt-5" are established, released models (not speculative), the hardcoded list should be reviewed for outdated or superseded entries like legacy model identifiers to prevent false positives in adapter update suggestions.


65-66: These model names follow Anthropic's standard versioning format. Anthropic recommends using specific model versions like claude-sonnet-4-5-20250929 in production applications, where the final component is the snapshot date (YYYYMMDD). Claude Sonnet 4.5 is available via the API using the model string claude-sonnet-4-5-20250929, released on September 29, 2025. Similarly, Claude Opus 4.1 was released on August 5, 2025, matching the snapshot date. The format is consistent with earlier models like claude-3-5-sonnet-20241022.

Likely an incorrect or invalid review comment.

unstract/sdk1/src/unstract/sdk1/adapters/base1.py (4)

666-679: LGTM - clean json_mode to response_format conversion.

The json_mode boolean is correctly converted to {"type": "json_object"} format expected by LiteLLM. Using pop on the copied metadata ensures the original isn't mutated and json_mode doesn't leak into the final output.
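
The conversion can be sketched as follows (a standalone approximation, not the SDK code itself):

```python
def apply_json_mode(metadata: dict) -> dict:
    """Convert a json_mode flag into LiteLLM's response_format.

    Operates on a shallow copy so the caller's dict is untouched and
    the json_mode key never reaches the outgoing payload.
    """
    result = dict(metadata)
    if result.pop("json_mode", False):
        result["response_format"] = {"type": "json_object"}
    return result
```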


608-650: LGTM - consistent reasoning pattern with OpenAI/Azure.

The Mistral reasoning implementation follows the same pattern established for OpenAI and Azure adapters, maintaining codebase consistency.


444-448: Good additions for AWS SSO and cost tracking.

aws_profile_name enables SSO authentication workflows, and model_id supports Application Inference Profile for cost allocation. Both have safe None defaults.


691-716: LGTM - azure_ai/ prefix is correct for LiteLLM.

The AzureAIFoundryLLMParameters class correctly uses the azure_ai/ prefix format that LiteLLM expects for Azure AI Foundry models. The validate_model method properly handles both prefixed and unprefixed model names, adding the prefix only when necessary.
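
Idempotent prefixing of this kind can be sketched as (a simplified stand-in for the validate_model behavior described above):

```python
AZURE_AI_PREFIX = "azure_ai/"


def add_azure_ai_prefix(model: str) -> str:
    """Add the LiteLLM azure_ai/ prefix only when it is not already there."""
    if model.startswith(AZURE_AI_PREFIX):
        return model  # already prefixed; leave untouched
    return AZURE_AI_PREFIX + model
```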

.claude/skills/unstract-adapter-extension/scripts/init_llm_adapter.py (2)

286-302: Subprocess command with partial path relies on magick being in PATH.

The script uses magick without an absolute path, which is fine for developer tooling but could fail silently in some environments. The error handling on lines 307-311 addresses this adequately.
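
One way to make the missing-binary case explicit rather than silent (stdlib only; a sketch, not what the script currently does):

```python
import shutil
import subprocess
import sys


def run_tool(binary: str, args: list) -> bool:
    """Run an external tool such as 'magick', but only after confirming
    it is resolvable on PATH; return False instead of failing silently."""
    path = shutil.which(binary)
    if path is None:
        print(f"warning: {binary!r} not found on PATH; skipping", file=sys.stderr)
        return False
    return subprocess.run([path, *args], capture_output=True).returncode == 0
```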


148-166: Consistent with init_embedding_adapter.py - good for maintainability.

The to_class_name function matches the implementation in the embedding adapter script, ensuring consistent class naming across both adapter types.
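
The shared helper behaves roughly like this (a reconstruction for illustration; the real implementation may differ in edge cases):

```python
def to_class_name(provider: str) -> str:
    """Turn a provider identifier into a CamelCase class-name stem,
    e.g. 'azure_ai_foundry' -> 'AzureAiFoundry'."""
    parts = provider.replace("-", " ").replace("_", " ").split()
    return "".join(part.capitalize() for part in parts)
```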

* remove claude artifacts from gitignore

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (3)
.claude/skills/adapter-ops/assets/templates/llm_adapter.py.template (1)

51-54: Icon path placeholder mismatch persists.

The past review comment about icon path spacing is still valid. The template uses ${PROVIDER_NAME} in the icon path (line 54), which may contain spaces (e.g., "New Provider"), while the init scripts save icons with spaces removed (e.g., "NewProvider.png"). This creates a mismatch between the generated adapter code and the actual icon filename.

Consider updating the init scripts to use a separate ${ICON_NAME} placeholder that matches the to_icon_name() transformation, or update the template to remove spaces from the provider name in the icon path.
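
The mismatch can be demonstrated in a few lines (to_icon_name is a sketch of the space-stripping transform the init scripts apply):

```python
def to_icon_name(display_name: str) -> str:
    """Strip spaces from the display name to form the saved icon filename."""
    return display_name.replace(" ", "") + ".png"


def template_icon_path(provider_name: str) -> str:
    """Icon path as the template currently builds it, from the raw name."""
    return f"/icons/adapter-icons/{provider_name}.png"
```

For a provider named "New Provider" the template would reference "New Provider.png" while the script saves "NewProvider.png", hence the suggested ${ICON_NAME} placeholder.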

.claude/skills/adapter-ops/assets/templates/embedding_adapter.py.template (1)

51-54: Icon path uses display name which may contain spaces.

This issue was previously identified. The ${PROVIDER_NAME} placeholder (e.g., "Azure AI Foundry") will produce paths with spaces that may fail to load. Consider using ${CLASS_NAME} instead.

.claude/skills/adapter-ops/scripts/init_llm_adapter.py (1)

177-187: URL scheme validation still missing.

This issue was previously flagged. The urlopen call accepts arbitrary URL schemes including file://, which could read local files if a malicious URL is provided via --logo-url.
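
A minimal version of the suggested guard (a hedged sketch; the helper name and structure are ours, not the script's):

```python
from urllib.parse import urlparse
from urllib.request import urlopen

ALLOWED_SCHEMES = {"http", "https"}


def safe_fetch(url: str, timeout: int = 10) -> bytes:
    """Reject file:// and other unexpected schemes before urlopen runs."""
    scheme = urlparse(url).scheme.lower()
    if scheme not in ALLOWED_SCHEMES:
        raise ValueError(f"refusing to fetch URL with scheme {scheme!r}")
    with urlopen(url, timeout=timeout) as response:  # noqa: S310 - scheme checked
        return response.read()
```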

🧹 Nitpick comments (3)
.claude/skills/adapter-ops/assets/templates/llm_schema.json.template (1)

29-29: Consider using "uri" format for consistency.

Line 29 uses "format": "url" while the embedding template uses "format": "uri". The JSON Schema specification recognizes "uri" as a standard format validator. Consider updating to "uri" for consistency across templates, though "url" may also be supported by some validators.

🔎 Proposed fix
     "api_base": {
       "type": "string",
-      "format": "url",
+      "format": "uri",
       "title": "API Base",
       "default": "",
       "description": "API endpoint URL (optional, uses default if not specified)."
     },
.claude/skills/adapter-ops/scripts/init_llm_adapter.py (2)

148-420: Consider extracting shared utilities to reduce duplication.

The utility functions (to_class_name, to_icon_name, fetch_url, search_potential_logo_sources, download_and_process_logo, copy_logo) are duplicated verbatim in init_embedding_adapter.py. Extracting them to a shared module (e.g., adapter_utils.py) would improve maintainability.

Example structure
# .claude/skills/adapter-ops/scripts/adapter_utils.py
def to_class_name(provider: str) -> str: ...
def to_icon_name(display_name: str) -> str: ...
def fetch_url(url: str, timeout: int = 10) -> bytes | None: ...
# ... other shared functions

Then import in both scripts:

from adapter_utils import to_class_name, to_icon_name, fetch_url, ...

24-32: Path resolution relies on fixed directory structure.

The REPO_ROOT calculation assumes this script is always at .claude/skills/adapter-ops/scripts/. If the skill directory is renamed or moved, paths will break silently.

Consider adding a validation check:

if not (REPO_ROOT / "unstract").exists():
    print(f"Warning: Repository root detection may be incorrect: {REPO_ROOT}")
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to Reviews > Disable Cache setting

Knowledge base: Disabled due to Reviews > Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 182464b and 98888be.

📒 Files selected for processing (15)
  • .claude/skills/adapter-ops/SKILL.md
  • .claude/skills/adapter-ops/assets/templates/embedding_adapter.py.template
  • .claude/skills/adapter-ops/assets/templates/embedding_parameters.py.template
  • .claude/skills/adapter-ops/assets/templates/embedding_schema.json.template
  • .claude/skills/adapter-ops/assets/templates/llm_adapter.py.template
  • .claude/skills/adapter-ops/assets/templates/llm_parameters.py.template
  • .claude/skills/adapter-ops/assets/templates/llm_schema.json.template
  • .claude/skills/adapter-ops/references/adapter_patterns.md
  • .claude/skills/adapter-ops/references/json_schema_guide.md
  • .claude/skills/adapter-ops/references/provider_capabilities.md
  • .claude/skills/adapter-ops/scripts/check_adapter_updates.py
  • .claude/skills/adapter-ops/scripts/init_embedding_adapter.py
  • .claude/skills/adapter-ops/scripts/init_llm_adapter.py
  • .claude/skills/adapter-ops/scripts/manage_models.py
  • .gitignore
💤 Files with no reviewable changes (1)
  • .gitignore
✅ Files skipped from review due to trivial changes (2)
  • .claude/skills/adapter-ops/references/json_schema_guide.md
  • .claude/skills/adapter-ops/references/adapter_patterns.md
🧰 Additional context used
🧬 Code graph analysis (1)
.claude/skills/adapter-ops/scripts/manage_models.py (2)
.claude/skills/adapter-ops/scripts/init_embedding_adapter.py (1)
  • main (532-639)
.claude/skills/adapter-ops/scripts/init_llm_adapter.py (1)
  • main (542-649)
🪛 LanguageTool
.claude/skills/adapter-ops/SKILL.md

[grammar] ~6-~6: Ensure spelling is correct
Context: ...and self-hosted models (Ollama). --- # Unstract Adapter Extension Skill This skill pro...

(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)

🪛 markdownlint-cli2 (0.18.1)
.claude/skills/adapter-ops/references/provider_capabilities.md

45-45: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


51-51: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


57-57: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


63-63: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


70-70: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


76-76: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


82-82: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


88-88: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


138-138: Bare URL used

(MD034, no-bare-urls)


139-139: Bare URL used

(MD034, no-bare-urls)


140-140: Bare URL used

(MD034, no-bare-urls)


141-141: Bare URL used

(MD034, no-bare-urls)


142-142: Bare URL used

(MD034, no-bare-urls)


143-143: Bare URL used

(MD034, no-bare-urls)


144-144: Bare URL used

(MD034, no-bare-urls)


145-145: Bare URL used

(MD034, no-bare-urls)

.claude/skills/adapter-ops/SKILL.md

7-7: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🪛 Ruff (0.14.10)
.claude/skills/adapter-ops/scripts/init_embedding_adapter.py

177-177: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


178-178: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


209-209: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


210-210: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


229-229: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


230-230: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


280-280: subprocess call: check for execution of untrusted input

(S603)


281-293: Starting a process with a partial executable path

(S607)


300-300: Consider moving this statement to an else block

(TRY300)


331-331: Consider moving this statement to an else block

(TRY300)


339-339: Do not catch blind exception: Exception

(BLE001)


360-360: subprocess call: check for execution of untrusted input

(S603)


361-373: Starting a process with a partial executable path

(S607)


380-380: Consider moving this statement to an else block

(TRY300)


406-406: Consider moving this statement to an else block

(TRY300)


413-413: Do not catch blind exception: Exception

(BLE001)

.claude/skills/adapter-ops/scripts/init_llm_adapter.py

183-183: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


184-184: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


215-215: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


216-216: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


235-235: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


236-236: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


286-286: subprocess call: check for execution of untrusted input

(S603)


287-299: Starting a process with a partial executable path

(S607)


306-306: Consider moving this statement to an else block

(TRY300)


337-337: Consider moving this statement to an else block

(TRY300)


345-345: Do not catch blind exception: Exception

(BLE001)


366-366: subprocess call: check for execution of untrusted input

(S603)


367-379: Starting a process with a partial executable path

(S607)


386-386: Consider moving this statement to an else block

(TRY300)


412-412: Consider moving this statement to an else block

(TRY300)


419-419: Do not catch blind exception: Exception

(BLE001)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build
🔇 Additional comments (27)
.claude/skills/adapter-ops/SKILL.md (1)

1-350: Excellent comprehensive documentation!

This skill documentation is well-structured and provides clear, actionable guidance for adapter management. The workflows are detailed with concrete examples, and the inclusion of troubleshooting, validation checklists, and maintenance procedures makes this a valuable reference.

.claude/skills/adapter-ops/references/provider_capabilities.md (1)

1-145: Well-organized provider capability reference.

This reference document provides a clear, tabulated overview of LiteLLM provider capabilities. The feature matrices, parameter listings, and authentication methods are comprehensive and align well with the adapter scaffolding system introduced in this PR.

.claude/skills/adapter-ops/scripts/manage_models.py (9)

27-37: Path resolution looks correct.

The path navigation and schema path construction are appropriate for the repository structure.


40-50: Clean JSON I/O functions.

The load and save functions are simple and correct. The trailing newline in save_schema is good for git-friendly diffs.


53-61: LGTM: Clear model information extraction.

The function safely extracts model configuration with appropriate defaults.


64-89: Robust enum addition logic.

The function correctly handles missing structures, avoids duplicates, and sets a sensible default.


92-112: Well-handled model removal with cleanup.

The function correctly handles default updates and removes the enum property when empty.


115-134: Simple and correct field updates.

Both functions properly initialize the schema structure before updating fields.


137-151: convert_to_enum provides guidance rather than automatic conversion.

This function intentionally directs users to use the --action add-enum command rather than automatically converting to an enum. This is reasonable since the script can't know which models should be in the enum list. The function name could be clearer about this behavior (e.g., guide_to_enum), but the current implementation is acceptable.


154-166: Correct enum-to-freetext conversion.

The function properly preserves the default value (using the first enum value as fallback if no explicit default exists) and removes the enum constraint.


169-285: Well-structured CLI with proper action dispatch.

The main function provides a clean CLI interface with appropriate error checking, action dispatch, and dry-run support. The diff-based update approach ensures changes are only written when necessary.

.claude/skills/adapter-ops/scripts/check_adapter_updates.py (6)

26-239: Comprehensive LiteLLM features catalog.

The LITELLM_FEATURES dictionary provides a well-structured reference for known parameters and capabilities across providers. As noted in the maintenance workflow documentation, this catalog should be updated periodically when LiteLLM adds new features.


242-253: Flexible schema loading with pattern matching.

The function correctly handles multiple filename patterns (with and without underscores) to accommodate different provider naming conventions.


256-269: Correctly extracts properties from JSON Schema.

The function handles both standard properties and conditional properties defined in allOf blocks, which is appropriate for schemas with feature-gated fields.


272-364: Thorough adapter analysis with alias handling.

The analyze_adapter function provides comprehensive checking for missing parameters, reasoning/thinking support, and outdated models. The alias filtering (lines 310-323) is particularly valuable for preventing false positives when parameters have alternative names (e.g., api_base vs base_url).
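
The alias handling amounts to something like this (the alias table contents are illustrative, not copied from the script):

```python
# Illustrative alias table; the real script's table may differ.
PARAM_ALIASES = {
    "api_base": {"base_url"},
    "max_tokens": {"max_completion_tokens"},
}


def drop_aliased(missing: set, current_props: set) -> set:
    """Remove 'missing' params that are actually present under an alias."""
    return {
        param
        for param in missing
        if not (PARAM_ALIASES.get(param, set()) & current_props)
    }
```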


367-423: Clean adapter discovery and reporting.

The list_adapters function correctly identifies Python files, and print_report provides a well-formatted, color-coded summary with actionable information.


426-469: Effective CLI with CI/CD-friendly exit codes.

The main function provides flexible filtering options (adapter type, specific provider) and returns exit code 1 when updates are needed, which is useful for automated checks.

.claude/skills/adapter-ops/assets/templates/embedding_schema.json.template (1)

1-52: Well-structured embedding schema template.

The template provides appropriate defaults and validation constraints. Good use of format: "password" for the API key and format: "uri" for the API base URL to guide UI rendering.

.claude/skills/adapter-ops/assets/templates/llm_schema.json.template (1)

1-58: Solid LLM schema template with appropriate defaults.

The template provides sensible defaults (15-minute timeout, 5 max retries) and proper validation constraints for LLM configuration parameters.

.claude/skills/adapter-ops/assets/templates/llm_adapter.py.template (1)

1-59: Clean adapter template structure.

The template provides a well-organized adapter class skeleton with all required static methods. The placeholder system is clear and documented.

.claude/skills/adapter-ops/assets/templates/embedding_parameters.py.template (1)

1-57: Well-structured embedding parameter template.

The template provides a clean parameter class with proper validation hooks. The comments about field mapping (lines 40-42) and the note that embedding models typically don't need prefixes (line 49) are helpful guidance for developers using this template.

.claude/skills/adapter-ops/assets/templates/llm_parameters.py.template (1)

1-61: LGTM! Template follows established patterns.

The template correctly implements:

  • Idempotent model prefixing in validate_model
  • Pydantic validation flow via model_dump()
  • Clear placeholder documentation
.claude/skills/adapter-ops/assets/templates/embedding_adapter.py.template (1)

1-50: Template structure looks good.

The adapter template correctly implements the required static methods for adapter registration and metadata. The inheritance pattern and method signatures align with existing adapters.

Also applies to: 55-59

.claude/skills/adapter-ops/scripts/init_llm_adapter.py (2)

460-467: Verify that AnyscaleLLMParameters is the intended default.

When neither --add-param-class nor --use-param-class is specified, the adapter defaults to AnyscaleLLMParameters. Confirm this is intentional, or consider using a more generic base class like BaseChatCompletionParameters.


34-145: Implementation looks solid.

The script provides good developer ergonomics:

  • Clear dry-run mode
  • Structured result with errors/warnings separation
  • Helpful next steps guidance
  • Safe handling of existing files

Also applies to: 423-539, 542-649

.claude/skills/adapter-ops/scripts/init_embedding_adapter.py (2)

454-459: Default parameter class is appropriate.

Using OpenAIEmbeddingParameters as the default is reasonable since OpenAI is a common embedding provider and likely has the most generic parameter set.


34-124: Embedding adapter scaffolding implementation looks correct.

The script properly handles:

  • Embedding-specific schema fields (embed_batch_size, api_base)
  • Correct target directories (embedding1/ vs llm1/)
  • Appropriate default parameter class

Also applies to: 417-529, 532-643

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
.claude/skills/adapter-ops/scripts/check_adapter_updates.py (2)

242-254: Add error handling for JSON parsing.

If a schema file exists but contains malformed JSON, json.load(f) will raise a JSONDecodeError, causing the script to crash. Consider catching and handling this error to provide a clearer error message.

🔎 Proposed fix
 def load_json_schema(adapter_type: str, provider: str) -> dict | None:
     """Load JSON schema for an adapter."""
     schema_dir = SDK1_ADAPTERS / f"{adapter_type}1" / "static"
 
     # Try common filename patterns
     for filename in [f"{provider}.json", f"{provider.replace('_', '')}.json"]:
         schema_path = schema_dir / filename
         if schema_path.exists():
-            with open(schema_path) as f:
-                return json.load(f)
+            try:
+                with open(schema_path) as f:
+                    return json.load(f)
+            except json.JSONDecodeError as e:
+                print(f"Warning: Failed to parse {schema_path}: {e}", file=sys.stderr)
+                return None
 
     return None

305-307: Add clarifying comment for "adapter_name" exclusion.

The exclusion of "adapter_name" from the missing parameters check is not explained. A brief comment would improve code clarity.

🔎 Proposed fix
     # Find missing parameters
     known_params = set(features.get("known_params", []))
+    # adapter_name is always present in schemas but not a LiteLLM param
     missing = (
         known_params - current_props - {"adapter_name"}
-    )  # adapter_name is always present
+    )
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to Reviews > Disable Cache setting

Knowledge base: Disabled due to Reviews > Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 98888be and 6ff7b05.

📒 Files selected for processing (1)
  • .claude/skills/adapter-ops/scripts/check_adapter_updates.py
🧰 Additional context used
🧬 Code graph analysis (1)
.claude/skills/adapter-ops/scripts/check_adapter_updates.py (2)
.claude/skills/adapter-ops/scripts/init_embedding_adapter.py (1)
  • main (532-639)
.claude/skills/adapter-ops/scripts/init_llm_adapter.py (1)
  • main (542-649)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build
🔇 Additional comments (6)
.claude/skills/adapter-ops/scripts/check_adapter_updates.py (6)

1-24: LGTM!

The imports and path resolution are clean and consistent with the related adapter initialization scripts.


256-270: LGTM!

The function correctly extracts properties from both direct properties and conditional allOf/then/properties sections of JSON schemas.


310-324: LGTM!

The alias filtering logic correctly handles common parameter name variations across providers, preventing false positives in the missing parameters report.


367-380: LGTM!

The function correctly discovers adapters by scanning the filesystem and filtering out Python internal files.


382-424: LGTM!

The report formatting is clear and user-friendly, with good visual organization using emoji indicators and grouped output.


426-469: LGTM!

The CLI interface is well-designed with appropriate options for filtering and output formatting. The exit code behavior (returning 1 if updates are needed) enables integration with CI/CD pipelines.

Contributor

@chandrasekharan-zipstack chandrasekharan-zipstack left a comment


LGTM - check the code rabbit comments in case they're valid

.prompting/

# Claude
CLAUDE.md
Contributor


@hari-kuriakose we should commit a common CLAUDE.md that all of us can use. This might cause our local versions to be tracked in git

Contributor

@gaya3-zipstack gaya3-zipstack left a comment


looks good

@sonarqubecloud
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@unstract/sdk1/src/unstract/sdk1/adapters/base1.py`:
- Around line 700-706: The code sets result_metadata["response_format"] when
json_mode is true but then constructs OllamaLLMParameters(**result_metadata) and
calls model_dump(), and because OllamaLLMParameters does not declare
response_format that key will be dropped; fix by either (A) adding an optional
field response_format: Optional[dict] (or appropriate type) to the
OllamaLLMParameters model so Pydantic preserves it, or (B) preserve the value
after validation by calling OllamaLLMParameters(**result_metadata).model_dump()
into a dict and, if json_mode, re-inserting the original response_format value
back into the dumped dict before returning; reference result_metadata,
json_mode, response_format, OllamaLLMParameters, and model_dump in your change.
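
Option (B) can be sketched without Pydantic itself; drop_unknown_keys below stands in for OllamaLLMParameters(**metadata).model_dump(), which silently discards fields the model does not declare:

```python
def validate_preserving_response_format(metadata: dict, validate) -> dict:
    """Re-insert response_format after a validator that drops unknown keys.

    `validate` stands in for OllamaLLMParameters(**md).model_dump(), which
    silently discards fields the Pydantic model does not declare.
    """
    response_format = metadata.get("response_format")
    validated = validate(metadata)
    if response_format is not None:
        validated["response_format"] = response_format
    return validated


def drop_unknown_keys(metadata: dict) -> dict:
    """Toy validator: keeps only declared fields, like a strict model dump."""
    declared = {"model", "temperature", "json_mode"}
    return {k: v for k, v in metadata.items() if k in declared}
```
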
🧹 Nitpick comments (2)
prompt-service/pyproject.toml (1)

14-14: Consider removing redundant llama-index dependency.

Since unstract-sdk1 (line 21) already declares llama-index>=0.14.13 as a dependency, this direct declaration may be redundant. Having the same dependency declared in both places could lead to version conflicts if they diverge in the future.

If prompt-service needs a specific feature from llama-index not covered by sdk1, consider documenting that requirement in a comment.

unstract/sdk1/src/unstract/sdk1/adapters/base1.py (1)

637-675: Reasoning validation logic is duplicated across multiple parameter classes.

This pattern for handling enable_reasoning / reasoning_effort is nearly identical to OpenAILLMParameters.validate() and AzureOpenAILLMParameters.validate(). Consider extracting a shared helper method to reduce duplication and ensure consistency.

Additionally, the logic at lines 668-673 appears redundant: if enable_reasoning is true, reasoning_effort is already included in validation_metadata and thus in validated. The explicit re-assignment at lines 671-673 overwrites it with the same value.

♻️ Suggested simplification
-        # Clean up result based on reasoning state
-        if not enable_reasoning and "reasoning_effort" in validated:
-            validated.pop("reasoning_effort")
-        elif enable_reasoning:
-            validated["reasoning_effort"] = result_metadata.get(
-                "reasoning_effort", "medium"
-            )
-
-        return validated
+        # Remove reasoning_effort if reasoning is disabled
+        if not enable_reasoning:
+            validated.pop("reasoning_effort", None)
+
+        return validated

@pk-zipstack pk-zipstack changed the title feat: Add Unstract Adapter Extension Skill and Azure AI Foundry Adapter UN-3104 feat: Add Unstract Adapter Extension Skill and Azure AI Foundry Adapter Feb 18, 2026
pk-zipstack and others added 5 commits March 2, 2026 13:57
The merge left 15 packages with duplicate sdist/wheel entries
(litellm, openai, llama-index, qdrant-client, etc.), making the
lock file invalid TOML. Deleted and regenerated with `uv lock`.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The `deprecated` package had version 1.2.18 in its header but
sdist/wheel entries for 1.3.1, causing uv sync to fail with
"inconsistent version" error.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ation

- init_llm_adapter.py: Validate URL scheme is http/https before
  calling urlopen, preventing file:// and other unsafe schemes.
- base1.py: Set response_format after OllamaLLMParameters.model_dump()
  instead of before, since Pydantic silently drops undeclared fields.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@github-actions
Contributor

github-actions bot commented Mar 2, 2026

Frontend Lint Report (Biome)

All checks passed! No linting or formatting issues found.

@github-actions
Contributor

github-actions bot commented Mar 2, 2026

Test Results

Summary
  • Runner Tests: 11 passed, 0 failed (11 total)
  • SDK1 Tests: 63 passed, 0 failed (63 total)

Runner Tests - Full Report
| filepath | function | passed | SUBTOTAL |
| --- | --- | --- | --- |
| runner/src/unstract/runner/clients/test_docker.py | test_logs | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_cleanup | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_cleanup_skip | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_client_init | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_get_image_exists | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_get_image | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_get_container_run_config | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_get_container_run_config_without_mount | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_run_container | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_get_image_for_sidecar | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_sidecar_container | 1 | 1 |
| TOTAL | | 11 | 11 |
SDK1 Tests - Full Report
| filepath | function | passed | SUBTOTAL |
| --- | --- | --- | --- |
| tests/test_platform.py | TestPlatformHelperRetry.test_success_on_first_attempt | 2 | 2 |
| tests/test_platform.py | TestPlatformHelperRetry.test_retry_on_connection_error | 2 | 2 |
| tests/test_platform.py | TestPlatformHelperRetry.test_non_retryable_http_error | 1 | 1 |
| tests/test_platform.py | TestPlatformHelperRetry.test_retryable_http_errors | 3 | 3 |
| tests/test_platform.py | TestPlatformHelperRetry.test_post_method_retry | 1 | 1 |
| tests/test_platform.py | TestPlatformHelperRetry.test_retry_logging | 1 | 1 |
| tests/test_prompt.py | TestPromptToolRetry.test_success_on_first_attempt | 1 | 1 |
| tests/test_prompt.py | TestPromptToolRetry.test_retry_on_errors | 2 | 2 |
| tests/test_prompt.py | TestPromptToolRetry.test_wrapper_methods_retry | 4 | 4 |
| tests/utils/test_retry_utils.py | TestIsRetryableError.test_connection_error_is_retryable | 1 | 1 |
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestIsRetryableError.test\_connection\_error\_is\_retryable}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestIsRetryableError.test\_timeout\_is\_retryable}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestIsRetryableError.test\_http\_error\_retryable\_status\_codes}}$$ $$\textcolor{#23d18b}{\tt{3}}$$ $$\textcolor{#23d18b}{\tt{3}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestIsRetryableError.test\_http\_error\_non\_retryable\_status\_codes}}$$ $$\textcolor{#23d18b}{\tt{5}}$$ $$\textcolor{#23d18b}{\tt{5}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestIsRetryableError.test\_http\_error\_without\_response}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestIsRetryableError.test\_os\_error\_retryable\_errno}}$$ $$\textcolor{#23d18b}{\tt{5}}$$ $$\textcolor{#23d18b}{\tt{5}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestIsRetryableError.test\_os\_error\_non\_retryable\_errno}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestIsRetryableError.test\_other\_exception\_not\_retryable}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCalculateDelay.test\_exponential\_backoff\_without\_jitter}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCalculateDelay.test\_exponential\_backoff\_with\_jitter}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCalculateDelay.test\_max\_delay\_cap}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCalculateDelay.test\_max\_delay\_cap\_with\_jitter}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryWithExponentialBackoff.test\_successful\_call\_first\_attempt}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryWithExponentialBackoff.test\_retry\_after\_transient\_failure}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryWithExponentialBackoff.test\_max\_retries\_exceeded}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryWithExponentialBackoff.test\_retry\_with\_custom\_predicate}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryWithExponentialBackoff.test\_no\_retry\_with\_predicate\_false}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryWithExponentialBackoff.test\_exception\_not\_in\_tuple\_not\_retried}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_default\_configuration}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_environment\_variable\_configuration}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_invalid\_max\_retries}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_invalid\_base\_delay}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_invalid\_multiplier}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_jitter\_values}}$$ $$\textcolor{#23d18b}{\tt{2}}$$ $$\textcolor{#23d18b}{\tt{2}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_custom\_exceptions\_only}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_custom\_predicate\_only}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_both\_exceptions\_and\_predicate}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_exceptions\_match\_but\_predicate\_false}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestPreconfiguredDecorators.test\_retry\_platform\_service\_call\_exists}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestPreconfiguredDecorators.test\_retry\_prompt\_service\_call\_exists}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestPreconfiguredDecorators.test\_platform\_service\_decorator\_retries\_on\_connection\_error}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestPreconfiguredDecorators.test\_prompt\_service\_decorator\_retries\_on\_timeout}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryLogging.test\_warning\_logged\_on\_retry}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryLogging.test\_info\_logged\_on\_success\_after\_retry}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryLogging.test\_exception\_logged\_on\_giving\_up}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{TOTAL}}$$ $$\textcolor{#23d18b}{\tt{63}}$$ $$\textcolor{#23d18b}{\tt{63}}$$


sonarqubecloud bot commented Mar 2, 2026


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.claude/skills/adapter-ops/scripts/init_llm_adapter.py:
- Around lines 16-24: The script builds output filesystem paths directly from CLI inputs such as provider and display_name without sanitization. Add a helper (e.g., sanitize_name) that strips or rejects path separators and traversal patterns, lowercases and slugifies names, and falls back to a uuid when nothing remains. Compute each target with Path(...).resolve() and verify target.is_relative_to(base_dir.resolve()) (or compare parents) so it cannot escape the intended directory. Route every place that builds output_path, adapter_dir, or a filename from provider or display_name through this sanitizer, and create files only after the resolved-path containment check passes.
- Around lines 523-526: The CLI help text for the --auto-logo flag claims it "downloads" a logo, but the auto_logo code path only searches and prints results. Either update the argparse help string to say that --auto-logo searches for potential logo sources and reports them (no automatic download), or implement the download by calling the existing download routine (e.g., download_logo) with display_name where the current print("Searching for potential logo sources for '{display_name}'...") is emitted.
- Around lines 418-422: The fallback in the except ImportError block calls shutil.copy2(source_path, output_path), which can copy a non-PNG file into a .png path. First verify the source really is a PNG (e.g., open it with PIL.Image and check image.format, or use python-magic). If it is a PNG, copy it with shutil.copy2; otherwise convert it, e.g. Image.open(source_path).convert("RGBA").save(output_path, format="PNG"). Handle open/conversion errors gracefully.
- Around lines 483-503: The scaffolding writes the adapter file (adapter_file) even when the schema file (schema_file) already exists, and vice versa, leaving partial output. Preflight both targets first: compute adapter_file and schema_file (from llm_dir, static_dir, and provider), check .exists() on both, and if either exists append the appropriate error(s) to result["errors"] and abort without writing anything or adding to result["files_created"]. Only when neither exists, write adapter_content and the serialized LLM_SCHEMA_TEMPLATE (using display_name), then append both created paths to result["files_created"] so scaffolding is atomic.
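
The path-sanitization fix in the first finding could look like this minimal sketch (the helper names `sanitize_name` and `safe_target`, and the `base_dir` parameter, are illustrative, not names from the script):

```python
import re
import uuid
from pathlib import Path


def sanitize_name(raw: str) -> str:
    """Slugify a user-supplied name: lowercase, replace anything outside
    [a-z0-9_] (including path separators and ".." runs) with underscores,
    and fall back to a random uuid when nothing survives."""
    cleaned = re.sub(r"[^a-z0-9_]+", "_", raw.lower()).strip("_")
    return cleaned or uuid.uuid4().hex


def safe_target(base_dir: Path, name: str) -> Path:
    """Resolve base_dir/<sanitized name> and refuse any result that would
    land outside base_dir (belt-and-braces containment check)."""
    target = (base_dir / sanitize_name(name)).resolve()
    if not target.is_relative_to(base_dir.resolve()):
        raise ValueError(f"refusing to write outside {base_dir}: {target}")
    return target
```

Files would then be created only from what `safe_target` returns, so every `output_path` or `adapter_dir` built from `provider` or `display_name` passes through the same gate.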
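The search-only wording suggested in the second finding could be phrased as below; the flag wiring is a sketch, not the script's actual argparse setup:

```python
import argparse

parser = argparse.ArgumentParser(prog="init_llm_adapter.py")
parser.add_argument(
    "--auto-logo",
    action="store_true",
    # Help text matches what the code actually does: search and report only.
    help="Search for potential logo sources for the adapter and report "
         "them (no automatic download).",
)
args = parser.parse_args(["--auto-logo"])
```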
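The PNG verification from the third finding might be sketched as follows; `is_png` and `copy_as_png` are hypothetical helpers, and the Pillow calls (`Image.open`, `img.format`, `convert("RGBA").save`) are the ones the reviewer suggests:

```python
import shutil
from pathlib import Path

PNG_MAGIC = b"\x89PNG\r\n\x1a\n"


def is_png(path: Path) -> bool:
    """Cheap signature check: every real PNG starts with an 8-byte magic."""
    return path.read_bytes()[:8] == PNG_MAGIC


def copy_as_png(source_path: Path, output_path: Path) -> None:
    """Write source_path to output_path as a genuine PNG. Prefers Pillow for
    detection/conversion; without Pillow, only a verified PNG is copied."""
    try:
        from PIL import Image
    except ImportError:
        if is_png(source_path):
            shutil.copy2(source_path, output_path)
            return
        raise ValueError(f"{source_path} is not a PNG and Pillow is unavailable")
    with Image.open(source_path) as img:
        if img.format == "PNG":
            shutil.copy2(source_path, output_path)
        else:
            img.convert("RGBA").save(output_path, format="PNG")
```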
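The atomic preflight from the fourth finding could be factored into a helper like this hypothetical `preflight_scaffold` (not an existing function in the script):

```python
from pathlib import Path


def preflight_scaffold(adapter_file: Path, schema_file: Path, result: dict) -> bool:
    """Check both scaffold targets before writing anything, so creation is
    all-or-nothing. Appends one error per pre-existing file and returns
    True only when it is safe to write both."""
    errors = [
        f"{path} already exists"
        for path in (adapter_file, schema_file)
        if path.exists()
    ]
    if errors:
        result.setdefault("errors", []).extend(errors)
        return False
    return True
```

The caller would write both files and extend `result["files_created"]` only when this returns True.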

ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to Reviews > Disable Cache setting

Knowledge base: Disabled due to Reviews > Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 14d1147 and 54e9ea4.

⛔ Files ignored due to path filters (1)
  • backend/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (1)
  • .claude/skills/adapter-ops/scripts/init_llm_adapter.py

