
feat: Add HubSpot integration plugin hooks for user journey tracking#1806

Merged
gaya3-zipstack merged 24 commits into main from
feat/plugin-integrations
Mar 9, 2026

Conversation

@hari-kuriakose
Contributor

hari-kuriakose commented Feb 24, 2026

What

  • Add HubSpot plugin notification hooks for tracking user journey events:
    • First Prompt Studio project creation
    • First prompt run
    • First document upload
    • First tool export
    • First API deployment
  • Remove hubspot_signup_api stub method from AuthenticationService (moved to plugin)

Why

  • Enable HubSpot CRM integration for tracking key user milestones within the platform
  • Allow enterprise deployments to receive notifications when users complete important first-time actions
  • Clean up authentication service by removing stub method that is now handled by the plugin

How

  • Added _notify_hubspot_first_project, _notify_hubspot_first_prompt_run, _notify_hubspot_first_document, _notify_hubspot_first_tool_export methods in prompt_studio_core_v2/views.py
  • Added _notify_hubspot_first_api_deploy method in api_deployment_views.py
  • All hooks use get_plugin("hubspot") to check for plugin availability before notifying
  • Hooks determine whether an action is the first for the org by comparing resource counts taken before and after the action
  • Notifications are wrapped in try/except to log warnings on failure without affecting main functionality

Can this PR break any existing features?

No breaking changes:

  • All HubSpot notifications use get_plugin() which returns None when plugin is unavailable
  • Notification failures are caught and logged as warnings without affecting the main request
  • The removed stub method was never implemented (it raised MethodNotImplemented)

Database Migrations

  • None

Env Config

  • None required for core functionality
  • HubSpot integration requires plugin configuration when enabled

Relevant Docs

  • N/A

Related Issues or PRs

  • N/A

Dependencies Versions

  • No new dependencies

Notes on Testing

  • Verify HubSpot notifications fire correctly when plugin is available (check logs)
  • Confirm no errors when HubSpot plugin is not configured
  • Test that main functionality (project creation, prompt runs, uploads, exports, API deployments) works regardless of HubSpot plugin availability

Screenshots

N/A

Checklist

I have read and understood the Contribution Guidelines.

🤖 Generated with Claude Code

hari-kuriakose and others added 16 commits January 9, 2026 23:13
## What

- Add dynamic plugin loading support to OSS codebase
- Enable enterprise components to be loaded at runtime without modifying tracked files

## Why

- Enterprise code was overwriting git-tracked OSS files causing dirty git state
- Need clean separation between OSS and enterprise codebases
- OSS should work independently without enterprise components

## How

- `unstract_migrations.py`: Uses try/except ImportError to load from `pluggable_apps.migrations_ext`
- `api_hub_usage_utils.py`: Uses try/except ImportError to load from `plugins.verticals_usage`
- `utils.py`: Uses try/except ImportError to load from `pluggable_apps.manual_review_v2` and `plugins.workflow_manager.workflow_v2.rule_engine`
- `backend.Dockerfile`: Conditional install of `requirements.txt` if present
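The try/except ImportError pattern used in these files looks roughly like this. The enterprise module path mirrors the one named above; `get_extra_migrations` and the `_default_migrations` OSS fallback are hypothetical names for illustration:

```python
# Sketch of the optional-import pattern, assuming the enterprise module
# `pluggable_apps.migrations_ext` exposes `get_extra_migrations`.
def _default_migrations():
    """OSS fallback used when the enterprise plugin is absent."""
    return []


try:
    # Present only in enterprise builds
    from pluggable_apps.migrations_ext import get_extra_migrations
except ImportError:
    get_extra_migrations = _default_migrations
```

Because the fallback is bound at import time, OSS builds never touch enterprise code paths, and enterprise builds override behavior without modifying any tracked file.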

## Can this PR break any existing features? If yes, please list possible items. If no, please explain why.

- No. The changes add optional plugin loading that gracefully falls back to default behavior when plugins are not present. Existing OSS functionality is preserved.

## Database Migrations

- None

## Env Config

- None

## Relevant Docs

- None

## Related Issues or PRs

- None

## Dependencies Versions

- None

## Notes on Testing

- OSS build: Verify app starts and works without enterprise plugins
- Enterprise build: Verify plugins are loaded and function correctly

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Removed redundant import of random and exception handling for manual_review_v2.

Signed-off-by: Hari John Kuriakose <hari@zipstack.com>
- Add explicit loadbalancer port labels for backend (8000) and frontend (3000)
  services in docker-compose to ensure proper Traefik routing
- Rename spawned_services to ignored_services for clarity
- Extend ignored_services list to include tool-classifier, tool-text_extractor,
  and worker-unified services that don't need environment setup

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Update Traefik port label to 80 to match nginx and fix Dockerfile to use
BUILD_CONTEXT_PATH for the runtime config script in both dev and prod stages.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
…file

Convert BUILD_CONTEXT_PATH from environment variable to build argument
for proper Docker multi-stage build support. ARGs must be declared
globally and re-declared in each stage that needs them.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
- Add new integrations plugin category under backend/plugins/integrations/
- Create HubSpot plugin with event-based contact updates
- Track user milestone events: project creation, document upload,
  prompt run, tool export, and API deployment
- Plugin validates is_first_for_org flag and first org member status
- Remove unused hubspot_signup_api() stub from authentication_service
- Update subscription_helper to use plugin pattern for form submissions

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
…g set (#1785)

Fix two interacting bugs that prevented TTL from propagating to HITL
queue records:

1. WorkflowUtil.get_hitl_ttl_seconds was an OSS stub that always returned
   None. Now delegates to get_hitl_ttl_seconds_by_workflow via try/except
   import, falling back to None in OSS environments.

2. _push_to_queue_for_api_deployment never fetched TTL. Now mirrors the
   connector path by calling WorkflowUtil.get_hitl_ttl_seconds and passing
   ttl_seconds through to _create_queue_result and
   _enqueue_to_packet_or_regular_queue.
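A minimal sketch of that delegation, under the assumption that the enterprise function is importable from a plugins module (the exact module path below is hypothetical):

```python
# OSS stub delegating to an optional enterprise implementation.
try:
    # Hypothetical enterprise module path, for illustration only
    from plugins.workflow_manager.workflow_v2.hitl import (
        get_hitl_ttl_seconds_by_workflow,
    )
except ImportError:
    get_hitl_ttl_seconds_by_workflow = None


def get_hitl_ttl_seconds(workflow):
    """Return the HITL queue TTL in seconds, or None in OSS environments."""
    if get_hitl_ttl_seconds_by_workflow is None:
        return None
    return get_hitl_ttl_seconds_by_workflow(workflow)
```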

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* refactor: Add dynamic plugin loading for enterprise components


* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* refactor: Use get_plugin() for API Hub usage utilities

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Refactor random sampling logic in utils.py

Removed redundant import of random and exception handling for manual_review_v2.

Signed-off-by: Hari John Kuriakose <hari@zipstack.com>

* fix: Add Traefik port labels and clean up service ignore list

- Add explicit loadbalancer port labels for backend (8000) and frontend (3000)
  services in docker-compose to ensure proper Traefik routing
- Rename spawned_services to ignored_services for clarity
- Extend ignored_services list to include tool-classifier, tool-text_extractor,
  and worker-unified services that don't need environment setup

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: Update frontend Docker config for nginx serving

Update Traefik port label to 80 to match nginx and fix Dockerfile to use
BUILD_CONTEXT_PATH for the runtime config script in both dev and prod stages.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: Use ARG instead of ENV for BUILD_CONTEXT_PATH in frontend Dockerfile

Convert BUILD_CONTEXT_PATH from environment variable to build argument
for proper Docker multi-stage build support. ARGs must be declared
globally and re-declared in each stage that needs them.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: typo in ignored services var name

* fix: handle script execution via entrypoint

---------

Signed-off-by: Hari John Kuriakose <hari@zipstack.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* feat: add auth error code and frontend error display

* fix: run frontend dev server on port 80 and add signup handler

- Set PORT=80 env var in frontend Dockerfile development stage
- Change EXPOSE from 3000 to 80 to match production nginx
- Add handleSignup function and pass to LoginForm component

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: add ARG declaration to Dockerfile stages using BUILD_CONTEXT_PATH

SonarQube flagged that ARG must be declared in each Docker build stage
where it is used. Added the missing ARG BUILD_CONTEXT_PATH to both
development and builder stages.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
* [MISC] Improve Docker dev experience: separate debugpy, optimize memory, add V2 workers support

- Move debugpy to optional compose.debug.yaml for cleaner default dev setup
- Update compose.override.yaml with memory-optimized settings (1 worker, 2 threads)
- Add V2 workers configuration with build definitions
- Move V1 workers to optional workers-v1 profile
- Use modern uv run python -Xfrozen_modules=off pattern for services
- Updated README with compose.debug.yaml usage instructions

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* [MISC] Add Docker Compose version requirement and scheduler comment

- Add Docker Compose 2.24.4+ requirement note for !override directive
- Add comment explaining why worker-log-history-scheduler-v2 has no watch

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* [MISC] Fix debug ports table in README

- Fix port order: runner=5679, platform=5680, prompt=5681
- Remove x2text-service (not in compose.debug.yaml)
- Add V2 workers debug ports (5682-5688)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* [MISC] Simplify sample.compose.override.yaml

Remove db image override and V1 worker command overrides from
sample file. Users should reference compose.override.yaml for
actual dev setup.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
… client (#1798)

* Fix production queryset performance and retry amplification

Resolves 39.8s GET /internal/v1/file-execution/<uuid>/ latency by:
- Removing 7 debug COUNT(*) full table scans from get_queryset()
- Adding get_object() O(1) PK lookup override in ViewSet
- Adding @with_cache decorators to pipeline fetch methods
- Adding pipeline_data_key() to prevent cache key collisions
- Fixing urllib3 retry amplification: clear status_forcelist, let app-level retries handle status codes
- Use config defaults instead of hardcoded retry values

Root cause: Count queries on every request + retry storm (urllib3 × app-level).
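The amplification can be illustrated with a toy calculation (numbers illustrative, not taken from the PR): when both urllib3 and the application retry on the same failure status, worst-case attempts multiply rather than add.

```python
def total_attempts(transport_retries: int, app_retries: int) -> int:
    """Worst-case request count when both layers retry the same failure.

    Each of the (1 + app_retries) application-level attempts can itself be
    retried (1 + transport_retries) times at the transport layer.
    """
    return (1 + transport_retries) * (1 + app_retries)


# e.g. 3 transport retries x 2 app retries -> 12 requests for one call
```

Clearing urllib3's `status_forcelist` makes the transport layer retry only connection-level failures, so status-code failures are retried by exactly one layer.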

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Address PR #1798 review comments

- Chain exception context in get_object() (Ruff B904)
- Fix cache key collision: get_workflow_definition() now uses
  CacheType.WORKFLOW_DEFINITION instead of CacheType.WORKFLOW to avoid
  type mismatch with get_workflow() sharing the same cache key
- Include check_active in get_pipeline_data() cache key to prevent
  active-status bypass when check_active=False is cached first
- Refactor status() and update_hash() to use self.get_object() instead
  of duplicating manual queryset + org filtering; add except APIException
  pass-through so NotFound propagates as 404 not 500
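The cache-key fixes above boil down to one rule: every parameter that changes the cached value must appear in the key. A hypothetical key builder (not the project's actual function) shows the idea:

```python
def pipeline_cache_key(cache_type: str, pipeline_id: str, check_active: bool) -> str:
    """Build a cache key that varies with check_active, so a cached
    check_active=False result can never be served to a check_active=True
    lookup (and vice versa)."""
    return f"{cache_type}:{pipeline_id}:active={check_active}"
```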

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
…duct scope (#1803)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Signed-off-by: Hari John Kuriakose <hari@zipstack.com>
Co-authored-by: Claude <noreply@anthropic.com>
@coderabbitai
Contributor

coderabbitai Bot commented Feb 24, 2026

Summary by CodeRabbit

  • New Features

    • Added HubSpot event notifications for API deployments, prompt runs, document uploads, project creation, and tool exports. Notifications determine "first for org" state and are designed not to interfere with normal flows.
  • Chores

    • Removed a previously exposed HubSpot signup hook from the authentication service.

Walkthrough

Removed an unused HubSpot signup hook from the authentication service and added HubSpot notification calls that run (non-blocking) on first-time API deployments, Prompt Studio project creation, prompt runs, document uploads, and tool exports using pre-action counts.

Changes

Cohort / File(s) Summary
Authentication Service Cleanup
backend/account_v2/authentication_service.py
Deleted the unimplemented hubspot_signup_api method from AuthenticationService.
API Deployment Notifications
backend/api_v2/api_deployment_views.py
Added pre-create counting and direct calls to notify_hubspot_event around API deployment creation paths to notify HubSpot when an org's first API deployment occurs.
Prompt Studio Notifications
backend/prompt_studio/prompt_studio_core_v2/views.py, backend/prompt_studio/.../prompt_studio_output_manager_v2/models.py, backend/prompt_studio/.../prompt_studio_registry_v2/models.py
Added pre-action counts and HubSpot notification calls for project create, prompt run, document upload, and tool export flows; introduced imports for PromptStudioOutputManager and PromptStudioRegistry.
HubSpot Utility
backend/utils/hubspot_notify.py
New utility notify_hubspot_event(user, event_name, is_first_for_org, action_label) that resolves a hubspot plugin and calls its service.update_contact; errors are caught and logged as warnings.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant AppServer as AppServer
    participant DB as Database
    participant HubSpot as HubSpotAPI

    Note over User,AppServer: First-time action (deploy / prompt / upload / export)
    User->>AppServer: request create/run/upload/export
    AppServer->>DB: count existing items for org (pre-action)
    DB-->>AppServer: count
    AppServer->>DB: create resource (deployment/output/doc/export)
    DB-->>AppServer: created resource
    AppServer->>HubSpot: notify_hubspot_event(event_name, is_first = count==0, action_label)
    HubSpot-->>AppServer: ack / error
    alt HubSpot error
        AppServer->>AppServer: log warning (do not fail request)
    end
    AppServer-->>User: success response

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning — Docstring coverage is 79.55%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Title check ✅ — The title accurately describes the main change: adding HubSpot integration plugin hooks for tracking user journey events, which is the primary focus of the PR.
  • Description check ✅ — The PR description covers all required sections of the template, detailing what changes are made, why, how they are implemented, and testing considerations.


coderabbitai bot left a comment


Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (5)
backend/account_v2/constants.py (1)

71-95: ⚠️ Potential issue | 🟡 Minor

Add INE003 to error handling in authentication_controller.py.

The error code INE003 is defined in constants.py and already mapped in the frontend GenericError component, but it's missing from the error handling check in authentication_controller.py. Add it to the conditional set at lines 127-131 alongside INE001 and INE002 so the error response includes the code and domain data when thrown.

Current error handling (incomplete)
if hasattr(ex, "code") and ex.code in {
    AuthorizationErrorCode.USF,
    AuthorizationErrorCode.USR,
    AuthorizationErrorCode.INE001,
    AuthorizationErrorCode.INE002,
    AuthorizationErrorCode.INS,
}:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/account_v2/constants.py` around lines 71 - 95, The authentication
error check in authentication_controller.py omits AuthorizationErrorCode.INE003
so responses for that invalid-email-forbidden case don't include the code and
domain data; update the conditional that currently contains
AuthorizationErrorCode.INE001 and AuthorizationErrorCode.INE002 to also include
AuthorizationErrorCode.INE003 (i.e., add AuthorizationErrorCode.INE003 to the
set checked by the if that returns the error response) so the handler will
return the error code and domain details when INE003 is raised.
backend/workflow_manager/file_execution/internal_views.py (3)

596-693: ⚠️ Potential issue | 🔴 Critical

Bug: status query-param variable shadows the rest_framework.status module, breaking the error handler.

Line 604 assigns status = request.query_params.get("status"), which overwrites the status module imported at line 8. When the except block at line 692 tries to use status.HTTP_500_INTERNAL_SERVER_ERROR, it will raise AttributeError because status is now a string or None.

🐛 Proposed fix — rename the local variable
-            status = request.query_params.get("status")
+            status_filter = request.query_params.get("status")
             ...
-            if status:
-                file_executions = file_executions.filter(status=status)
+            if status_filter:
+                file_executions = file_executions.filter(status=status_filter)
             ...
                 "filters_applied": {
                     ...
-                    "status": status,
+                    "status": status_filter,
                 },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/workflow_manager/file_execution/internal_views.py` around lines 596 -
693, The local variable status in the get method shadows rest_framework.status
used in the except block; rename the query param variable (e.g., status_param or
status_filter) wherever you read it from request.query_params.get("status") and
wherever you filter the queryset (currently using status in
file_executions.filter(status=status)) so the except block can still reference
status.HTTP_500_INTERNAL_SERVER_ERROR; update all references inside the get
method (filtering, filters_applied dict, and any logging) to the new name.

468-471: ⚠️ Potential issue | 🟡 Minor

Potential NameError if update_data is not a dict.

If an item in status_updates is not a dict, update_data.get(...) at line 422 raises AttributeError, and the except handler at line 470 references file_execution_id which was never assigned. This causes a NameError, losing the original error.

🛡️ Defensive fix
                     except Exception as e:
                         failed_updates.append(
-                            {"file_execution_id": file_execution_id, "error": str(e)}
+                            {"file_execution_id": update_data.get("file_execution_id") if isinstance(update_data, dict) else None, "error": str(e)}
                         )

Same pattern applies in FileExecutionBatchHashUpdateAPIView (line 567).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/workflow_manager/file_execution/internal_views.py` around lines 468 -
471, The except block can raise a NameError because update_data may not be a
dict and file_execution_id may never be assigned; fix by defensively extracting
file_execution_id before the try (e.g., set file_execution_id =
update_data.get("file_execution_id") if isinstance(update_data, dict) else None)
and validate update_data is a dict at the top of the loop (raise or record a
typed error if not) so update_data.get(...) can't raise AttributeError; apply
the same defensive extraction/validation in the status_updates loop in
internal_views.py and in FileExecutionBatchHashUpdateAPIView to ensure
failed_updates.append(...) always has a defined file_execution_id and the
original exception is preserved.

281-376: ⚠️ Potential issue | 🔴 Critical

Catching per-item exceptions inside a single transaction.atomic() block will break on first failure.

When an exception is raised and caught inside a transaction.atomic() context (line 368), Django marks the current transaction as unusable. All subsequent queries in the loop will fail with TransactionManagementError: An error occurred in the current transaction. You can't perform queries until the end of the 'atomic' block.

Wrap each iteration in its own savepoint instead:

🐛 Proposed fix
-            with transaction.atomic():
-                for file_execution_data in file_executions:
-                    try:
+            for file_execution_data in file_executions:
+                try:
+                    with transaction.atomic():
                         # ... existing per-item logic ...
-                    except Exception as e:
-                        failed_creations.append(...)
+                except Exception as e:
+                    failed_creations.append(...)

The same issue exists in FileExecutionBatchStatusUpdateAPIView (line 419) and FileExecutionBatchHashUpdateAPIView (line 515).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/workflow_manager/file_execution/internal_views.py` around lines 281 -
376, The loop currently wraps all items in one transaction.atomic() so any
per-item exception marks the whole transaction unusable; change to use a
per-item savepoint by moving the transaction.atomic() inside the loop (i.e., for
file_execution_data in file_executions: with transaction.atomic(): try: ...
except Exception as e: ...), so each iteration gets its own atomic/savepoint and
failures don’t break subsequent iterations; apply the same change to the other
batch handlers FileExecutionBatchStatusUpdateAPIView and
FileExecutionBatchHashUpdateAPIView and ensure error handling still appends to
failed_creations while continuing the loop (references:
WorkflowFileExecution.objects.get_or_create_file_execution,
file_execution/file_hash variables, and the three mentioned view classes).
backend/plugins/workflow_manager/workflow_v2/api_hub_usage_utils.py (1)

78-103: ⚠️ Potential issue | 🟠 Major

ttl_seconds parameter is accepted but never forwarded to store_headers.

Line 82 declares ttl_seconds: int = 7200 as a parameter with clear documentation, but line 100 calls headers_cache.store_headers(execution_id, headers) without passing it. The parameter either needs to be removed if unused, or forwarded to the cache method if the TTL is meant to be applied.

Suggested fix — pass ttl_seconds to store_headers
-            return headers_cache.store_headers(execution_id, headers)
+            return headers_cache.store_headers(execution_id, headers, ttl_seconds=ttl_seconds)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/plugins/workflow_manager/workflow_v2/api_hub_usage_utils.py` around
lines 78 - 103, The cache_api_hub_headers function declares ttl_seconds but
never forwards it; update the call to headers_cache.store_headers to pass
ttl_seconds (e.g., headers_cache.store_headers(execution_id, headers,
ttl_seconds)) so the TTL is applied, and confirm the headers_cache class/method
signature supports the extra parameter (or if the cache API cannot accept a TTL,
remove the ttl_seconds parameter and its docstring). Ensure the change is
applied in cache_api_hub_headers and any affected headers_cache implementation.
🧹 Nitpick comments (11)
frontend/src/components/log-in/Login.jsx (2)

22-34: DRY: handleLogin and handleSignup share identical URL-construction logic.

Both handlers differ only in the path segment. A small helper avoids future drift between the two.

♻️ Proposed refactor
+  const buildAuthUrl = (path) =>
+    isValidProduct
+      ? `${baseUrl}${path}?selectedProduct=${encodeURIComponent(selectedProduct)}`
+      : `${baseUrl}${path}`;
+
   const handleLogin = () => {
-    const loginUrl = isValidProduct
-      ? `${baseUrl}/api/v1/login?selectedProduct=${selectedProduct}`
-      : `${baseUrl}/api/v1/login`;
-    window.location.href = loginUrl;
+    window.location.href = buildAuthUrl("/api/v1/login");
   };

   const handleSignup = () => {
-    const signupUrl = isValidProduct
-      ? `${baseUrl}/api/v1/signup?selectedProduct=${selectedProduct}`
-      : `${baseUrl}/api/v1/signup`;
-    window.location.href = signupUrl;
+    window.location.href = buildAuthUrl("/api/v1/signup");
   };

encodeURIComponent is included above as a best practice even though the current whitelist values are URL-safe.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/log-in/Login.jsx` around lines 22 - 34, handleLogin
and handleSignup duplicate URL-construction logic; extract a small helper like
buildAuthUrl(path) that accepts the path segment ("login" or "signup") and
returns `${baseUrl}/api/v1/${path}` optionally appending
`?selectedProduct=${encodeURIComponent(selectedProduct)}` when isValidProduct is
true; then call buildAuthUrl("login") in handleLogin and buildAuthUrl("signup")
in handleSignup to remove duplication and ensure selectedProduct is URL-encoded.

18-20: Consider extracting the valid-products list to a named constant.

The inline ["unstract", "llm-whisperer"] array makes future additions easy to miss and is invisible to consumers of this module.

♻️ Proposed refactor
+const VALID_PRODUCTS = ["unstract", "llm-whisperer"];
+
 function Login() {
   const baseUrl = getBaseUrl();
   const selectedProduct = localStorage.getItem("selectedProduct");
   const isValidProduct =
-    selectedProduct && ["unstract", "llm-whisperer"].includes(selectedProduct);
+    selectedProduct && VALID_PRODUCTS.includes(selectedProduct);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/log-in/Login.jsx` around lines 18 - 20, Extract the
inline product list into a named constant (e.g., VALID_PRODUCTS) and use it in
the isValidProduct check; replace ["unstract", "llm-whisperer"] with
VALID_PRODUCTS in the Login.jsx where selectedProduct and isValidProduct are
defined so the list is centralized and easily discoverable/extendable
(optionally export VALID_PRODUCTS if other modules will consume it).
backend/workflow_manager/file_execution/internal_views.py (2)

150-163: Redundant double-fetch of WorkflowExecution; the unfiltered object is the one actually used.

Line 152 fetches without organization filtering, then lines 154-158 perform a second org-filtered query whose result is discarded. The workflow_execution passed to the manager at line 185 is the unfiltered one. While the authorization gate at line 158 does prevent unauthorized access (raises DoesNotExist → 404), the pattern is confusing and costs an extra query.

Collapse to a single org-filtered fetch:

♻️ Suggested simplification
             # Get workflow execution with organization filtering
             try:
-                workflow_execution = WorkflowExecution.objects.get(id=execution_id)
-                # Verify organization access
-                filter_queryset_by_organization(
+                workflow_execution = filter_queryset_by_organization(
                     WorkflowExecution.objects.filter(id=execution_id),
                     request,
                     "workflow__organization",
                 ).get()
             except WorkflowExecution.DoesNotExist:

The same double-fetch pattern is duplicated in FileExecutionBatchCreateAPIView.post() (lines 303-322).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/workflow_manager/file_execution/internal_views.py` around lines 150 -
163, Replace the redundant double-fetch of WorkflowExecution by performing a
single org-filtered get: use
filter_queryset_by_organization(WorkflowExecution.objects.filter(id=execution_id),
request, "workflow__organization").get() and assign that result to
workflow_execution (instead of first calling
WorkflowExecution.objects.get(id=execution_id)); apply the same change in
FileExecutionBatchCreateAPIView.post where the identical pattern appears so both
places use the org-filtered query result as the workflow_execution used later.

264-265: Consider extracting the shared FileHash construction and org-filtered execution lookup into helpers.

The create() method and FileExecutionBatchCreateAPIView.post() contain nearly identical blocks: WorkflowExecution org-filtered fetch (lines 151-163 vs 303-322), FileHash construction (lines 166-178 vs 325-339), is_api determination, and file_path fallback logic. Extracting these into shared helper functions would reduce the ~50 lines of duplication and make future changes less error-prone.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/workflow_manager/file_execution/internal_views.py` around lines 264 -
265, The create() method and FileExecutionBatchCreateAPIView.post() duplicate
org-filtered WorkflowExecution lookup and FileHash construction (plus is_api
determination and file_path fallback); refactor by extracting two helpers: one
like get_org_filtered_workflow_execution(workflow_execution_id, organization)
that encapsulates the WorkflowExecution query/filtering logic used in both
create() and FileExecutionBatchCreateAPIView.post(), and another like
build_file_hash_from_payload(payload, file_store_id, file_name, file_size) that
builds and returns the FileHash object (including is_api/file_path fallback
logic). Replace the duplicated blocks in create() and
FileExecutionBatchCreateAPIView.post() to call these helpers to remove the
~50-line duplication and ensure identical behavior in both places.
backend/prompt_studio/prompt_studio_core_v2/views.py (1)

129-156: Extract common HubSpot notification logic to reduce duplication.

Four nearly identical _notify_hubspot_* methods differ only in the HubSpotEvent variant and the is_first_for_org value. Consider extracting a single shared helper:

♻️ Proposed refactor — single notification helper
def _notify_hubspot(self, user, event_name: str, is_first_for_org: bool) -> None:
    """Send a HubSpot event notification if the plugin is available."""
    hubspot_plugin = get_plugin("hubspot")
    if not hubspot_plugin:
        return
    try:
        from plugins.integrations.hubspot import HubSpotEvent

        service = hubspot_plugin["service_class"]()
        service.update_contact(
            user=user,
            events=[getattr(HubSpotEvent, event_name)],
            is_first_for_org=is_first_for_org,
        )
    except Exception as e:
        logger.warning(f"Failed to notify HubSpot for {event_name}: {e}")

Then each call site becomes a one-liner like:

self._notify_hubspot(request.user, "PROMPT_RUN", output_count_before == 0)

This also applies to the identical method in backend/api_v2/api_deployment_views.py — consider placing the helper in a shared mixin or utility module.

Also applies to: 465-490, 667-692, 761-786

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/prompt_studio/prompt_studio_core_v2/views.py` around lines 129 - 156,
Extract the duplicated HubSpot notification logic into a single helper (e.g.,
_notify_hubspot) and replace the four near-identical methods like
_notify_hubspot_first_project with calls to that helper; the helper should call
get_plugin("hubspot"), import HubSpotEvent, instantiate
hubspot_plugin["service_class"](), and call service.update_contact(user=user,
events=[getattr(HubSpotEvent, event_name)], is_first_for_org=is_first_for_org)
inside a try/except that logs failures (matching the existing logger.warning
behavior). Locate and replace usages in prompt_studio_core_v2.views (methods:
_notify_hubspot_first_project and the other three similar methods) and the
duplicate in backend/api_v2/api_deployment_views.py, and consider placing the
helper on a shared mixin or utility module so both modules can call
self._notify_hubspot(request.user, "EVENT_NAME", is_first_flag).
backend/plugins/workflow_manager/workflow_v2/api_hub_usage_utils.py (1)

37-55: Use logger.exception instead of logger.error for exception handlers.

In all three exception handlers (lines 52, 75, 102), logger.error is used with an f-string that includes only the exception message. Using logger.exception would automatically include the full traceback, which is valuable for diagnosing plugin failures in production.

♻️ Proposed fix
         except Exception as e:
-            logger.error(
+            logger.exception(
                 f"Failed to track API hub usage for execution {workflow_execution_id}: {e}"
             )

Apply similarly to the handlers on lines 74–76 and 101–103.

Also applies to: 67-76, 94-103

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/plugins/workflow_manager/workflow_v2/api_hub_usage_utils.py` around
lines 37 - 55, The except blocks around the API Hub usage tracking should call
logger.exception instead of logger.error so the traceback is captured; locate
the try/except that instantiates headers_cache and usage_tracker from
verticals_usage_plugin and replace the logger.error(...) calls in those except
handlers with logger.exception(...) (e.g. logger.exception(f"Failed to track API
hub usage for execution {workflow_execution_id}")) for each of the three
handlers that currently log the exception message, leaving the rest of the logic
(return False) intact.
docker/README.md (1)

107-113: Consider noting that V2 worker debugging requires the --profile workers-v2 flag.

The debug command on Line 112 doesn't include --profile workers-v2. Developers wanting to debug V2 workers might miss this. A brief note would save confusion.

Suggested addition
```bash
VERSION=dev docker compose -f docker-compose.yaml -f compose.override.yaml -f compose.debug.yaml watch
```

> Note: To debug V2 workers, add --profile workers-v2 to the command above.


🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.

In `@docker/README.md` around lines 107 - 113, Update the "Debugging Containers"
section where the docker compose debug command block (the VERSION=dev docker
compose -f docker-compose.yaml -f compose.override.yaml -f compose.debug.yaml
watch command) is shown to include a brief note that V2 worker debugging
requires the --profile workers-v2 flag; add one sentence after the command
indicating developers should append --profile workers-v2 when debugging V2
workers so they don't miss that requirement.

docker/sample.compose.override.yaml (1)

`197-320`: **V2 worker watch configs are heavily duplicated — consider a future DRY-up.**

All seven V2 workers share identical `develop.watch` blocks. This is a known limitation of Docker Compose (no YAML anchors across services in some tooling). Not blocking, but worth a `# TODO` or an `x-` extension field if compose tooling supports it.



Example using YAML extension fields

```yaml
x-worker-watch: &worker-watch
  develop:
    watch:
      - action: sync+restart
        path: ../workers/
        target: /app
        ignore: [.venv/, __pycache__/, "*.pyc", .pytest_cache/, .mypy_cache/]
      - action: sync+restart
        path: ../unstract/
        target: /unstract
        ignore: [.venv/, __pycache__/, "*.pyc", .pytest_cache/, .mypy_cache/]
      - action: rebuild
        path: ../workers/uv.lock

# Then in each worker:
worker-api-deployment-v2:
  <<: *worker-watch
  build:
    dockerfile: docker/dockerfiles/worker-unified.Dockerfile
    context: ..
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docker/sample.compose.override.yaml` around lines 197 - 320, The
develop.watch blocks for all V2 services (worker-api-deployment-v2,
worker-callback-v2, worker-file-processing-v2, worker-general-v2,
worker-notification-v2, worker-log-consumer-v2, worker-scheduler-v2) are
duplicated; factor them out by adding a YAML extension field (e.g.,
x-worker-watch) that contains the develop.watch array and then merge it into
each service (use the YAML merge key reference like <<: *worker-watch) to
eliminate repetition, and add a short "# TODO: DRY: move common watch config to
x-worker-watch" comment above the extension for future maintainers.
docker/compose.debug.yaml (1)

87-103: user: root on debug workers — acceptable for debug-only usage.

Running as root is fine in a local debug context but worth a comment to prevent this pattern from leaking into production-targeted compose files. The existing header comment on Line 85 about debugpy compatibility is good; the user: root rationale could be briefly noted too.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docker/compose.debug.yaml` around lines 87 - 103, Add a short explanatory
comment above the worker-file-processing-v2 service that documents why user:
root is set for debug-only usage (e.g., to allow debugpy to bind privileged
sockets) so reviewers know this is intentional and should not be copied into
production compose files; reference the service name worker-file-processing-v2
and the user: root property and keep the comment next to the existing debugpy
header comment for clarity.
docker/dockerfiles/frontend.Dockerfile (1)

23-25: Redundant COPY of generate-runtime-config.sh in development stage.

Line 21 already copies the entire ${BUILD_CONTEXT_PATH}/ into /app/, which would include generate-runtime-config.sh. The explicit COPY on Line 24 is redundant. That said, it does make the dependency explicit and won't cause issues — purely a nit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docker/dockerfiles/frontend.Dockerfile` around lines 23 - 25, The Dockerfile
currently copies ${BUILD_CONTEXT_PATH}/ into /app/ earlier and then redundantly
re-COPYs generate-runtime-config.sh; remove the explicit "COPY
${BUILD_CONTEXT_PATH}/generate-runtime-config.sh
/app/generate-runtime-config.sh" line to avoid duplication and keep the "RUN
chmod +x /app/generate-runtime-config.sh" (or alternatively remove both if
executable permission is already correct) so that generate-runtime-config.sh is
present from the initial COPY while still ensuring it is executable.
workers/shared/api/internal_client.py (1)

436-436: Prefix unused lambda parameters with _ to silence Ruff ARG005.

self and organization_id are declared but never referenced in all three new key-extractor lambdas (lines 436, 492, 502). The same pattern exists in the unchanged decorators at lines 451, 520, and 560.

♻️ Proposed fix — prefix unused lambda parameters with `_`
-    @with_cache(
-        CacheType.WORKFLOW,
-        lambda self, workflow_id, organization_id=None: str(workflow_id),
-    )
+    @with_cache(
+        CacheType.WORKFLOW,
+        lambda _self, workflow_id, _organization_id=None: str(workflow_id),
+    )
-    @with_cache(
-        CacheType.PIPELINE,
-        lambda self, pipeline_id, organization_id=None: str(pipeline_id),
-    )
+    @with_cache(
+        CacheType.PIPELINE,
+        lambda _self, pipeline_id, _organization_id=None: str(pipeline_id),
+    )
-    @with_cache(
-        CacheType.PIPELINE_DATA,
-        lambda self, pipeline_id, check_active=True, organization_id=None: (
-            f"{pipeline_id}:{check_active}"
-        ),
-    )
+    @with_cache(
+        CacheType.PIPELINE_DATA,
+        lambda _self, pipeline_id, check_active=True, _organization_id=None: (
+            f"{pipeline_id}:{check_active}"
+        ),
+    )

The same fix should also be applied to the pre-existing lambdas at lines 451, 520, and 560 to make the file consistent.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@workers/shared/api/internal_client.py` at line 436, Prefix unused lambda
parameters with underscores to silence Ruff ARG005: replace lambdas like "lambda
self, workflow_id, organization_id=None: str(workflow_id)" with "lambda _self,
workflow_id, _organization_id=None: str(workflow_id)" and apply the same pattern
to the other two new key-extractor lambdas and the three existing decorator
lambdas that declare self and organization_id but don't use them (use _self and
_organization_id for the unused parameters).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/plugins/workflow_manager/workflow_v2/utils.py`:
- Around line 26-40: The _mrq_files function can raise ValueError for n == 0 or
percentage > 100; fix it by early-returning an empty set when n <= 0, clamping
percentage into [0,100], and ensuring num_to_select is bounded by n (e.g.,
num_to_select = max(1, min(n, int(n * (percentage / 100))))) before calling
random.sample on range(1, n + 1); this change in _mrq_files will prevent the
error seen from get_q_no_list when total_files == 0.
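The guard described above can be sketched as follows (the helper name and signature are hypothetical; the real _mrq_files may differ):

```python
import random

def select_sample_file_numbers(total_files: int, percentage: float) -> set[int]:
    """Bounded random sample of 1-based file numbers: early-return on
    empty input, clamp percentage into [0, 100], and cap the sample size
    at total_files so random.sample cannot raise ValueError."""
    if total_files <= 0:
        return set()
    percentage = max(0.0, min(100.0, percentage))
    num_to_select = max(1, min(total_files, int(total_files * (percentage / 100))))
    return set(random.sample(range(1, total_files + 1), num_to_select))
```

With these bounds, `total_files == 0` yields an empty set instead of a ValueError, and an out-of-range percentage is clamped rather than over-sampling.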

In `@backend/prompt_studio/prompt_studio_core_v2/views.py`:
- Around line 143-144: The count() calls for PromptStudioOutputManager and
DocumentManager are unscoped and may return totals across all organizations;
update their queries to be organization-scoped by either adding
DefaultOrganizationManagerMixin to a custom default manager on the
PromptStudioOutputManager and DocumentManager models (so their .objects enforces
organization filtering like CustomTool and PromptStudioRegistry) or change the
count calls to explicitly filter by organization, e.g.
.filter(organization=UserContext.get_organization()).count(), ensuring you
modify the references to PromptStudioOutputManager and DocumentManager (or their
default .objects) accordingly.

In `@backend/workflow_manager/endpoint_v2/destination.py`:
- Around line 1044-1058: The TTL propagation currently gets ttl_seconds via
WorkflowUtil.get_hitl_ttl_seconds and builds queue_result, but
_enqueue_to_packet_or_regular_queue uses a truthy check so ttl_seconds==0 is
treated as missing; update _enqueue_to_packet_or_regular_queue (and any internal
enqueue branch that checks ttl_seconds) to explicitly check for None (e.g., if
ttl_seconds is not None) instead of truthiness so a value of 0 is honored and
passed through when calling the underlying enqueue or packet enqueue logic;
ensure any place that sets TTL on the enqueued message uses the provided
ttl_seconds value even when it is 0.

In `@docker/compose.debug.yaml`:
- Around line 23-35: The debug compose services backend, runner,
platform-service and prompt-service rely on compose.override.yaml for an
entrypoint and so pass the long command string to the image's default
ENTRYPOINT; add entrypoint: ["bash", "-c"] to each of these service blocks in
docker/compose.debug.yaml (for the services named backend, runner,
platform-service, prompt-service) so the debug command is executed by a shell
and the file becomes self-contained and safe to stack directly on
docker-compose.yaml.

In `@docker/dockerfiles/frontend.Dockerfile`:
- Around line 55-56: The Dockerfile's base image line uses an outdated nginx
tag; update the FROM instruction in frontend.Dockerfile (the line starting with
"FROM nginx:1.29.1-alpine") to a maintained tag such as "nginx:1.29.5-alpine" or
"nginx:mainline-alpine" so the production stage uses a current mainline release;
keep the ARG BUILD_CONTEXT_PATH line as-is and ensure no other dependent
references assume the old tag.

In `@docker/sample.compose.override.yaml`:
- Around line 37-38: The Traefik service label currently points to port 3000 but
the frontend dev image now exposes and listens on port 80 (see
frontend.Dockerfile ENV PORT and EXPOSE 80), so update the Traefik label in the
compose override (the labels block containing
traefik.http.services.frontend.loadbalancer.server.port) to use 80 instead of
3000 so Traefik routes to the container's actual listening port.

In `@workers/shared/api/internal_client.py`:
- Around line 490-498: get_pipeline_type is currently decorated with with_cache
and returns whatever execution_client.get_pipeline_type() returns, but that call
returns success_response() even for fallback/error cases (marked by
"source":"fallback"), causing fallbacks to be cached. Modify
InternalAPIClient.get_pipeline_type (the get_pipeline_type method shown) to
inspect the returned APIResponse from
execution_client.get_pipeline_type(pipeline_id, organization_id) and, if the
response payload contains "source":"fallback" (or otherwise indicates a
fallback), set response.success = False (or convert it to a failure response)
before returning so the with_cache / cache_decorator heuristic will not cache
fallback results; alternatively you may adjust
execution_client.get_pipeline_type to return success=False for fallbacks, but
ensure the change is applied to either execution_client.get_pipeline_type or
InternalAPIClient.get_pipeline_type so fallback responses are not treated as
cacheable.
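A minimal sketch of that guard, assuming a dict-shaped response with a `data` payload (the real APIResponse type and field names may differ):

```python
def mark_fallback_as_failure(response: dict) -> dict:
    """If the payload is flagged as a fallback, flip success to False so
    a success-based cache decorator will skip caching it."""
    data = response.get("data") or {}
    if data.get("source") == "fallback":
        return {**response, "success": False}
    return response
```

Wrapping the client call with such a guard keeps genuine lookups cacheable while fallback answers are retried on the next request.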

In `@workers/shared/clients/base_client.py`:
- Line 143: The comment "to avoid retry amplification (urllib3 retries ×
app-level retries)." contains a Unicode multiplication sign that triggers Ruff
RUF003; replace the "×" with an ASCII "x" (i.e., change to "to avoid retry
amplification (urllib3 retries x app-level retries).") so the text remains
identical in meaning but uses plain ASCII; update this in the same comment in
base_client.py.

---

Outside diff comments:
In `@backend/account_v2/constants.py`:
- Around line 71-95: The authentication error check in
authentication_controller.py omits AuthorizationErrorCode.INE003 so responses
for that invalid-email-forbidden case don't include the code and domain data;
update the conditional that currently contains AuthorizationErrorCode.INE001 and
AuthorizationErrorCode.INE002 to also include AuthorizationErrorCode.INE003
(i.e., add AuthorizationErrorCode.INE003 to the set checked by the if that
returns the error response) so the handler will return the error code and domain
details when INE003 is raised.

In `@backend/plugins/workflow_manager/workflow_v2/api_hub_usage_utils.py`:
- Around line 78-103: The cache_api_hub_headers function declares ttl_seconds
but never forwards it; update the call to headers_cache.store_headers to pass
ttl_seconds (e.g., headers_cache.store_headers(execution_id, headers,
ttl_seconds)) so the TTL is applied, and confirm the headers_cache class/method
signature supports the extra parameter (or if the cache API cannot accept a TTL,
remove the ttl_seconds parameter and its docstring). Ensure the change is
applied in cache_api_hub_headers and any affected headers_cache implementation.

In `@backend/workflow_manager/file_execution/internal_views.py`:
- Around line 596-693: The local variable status in the get method shadows
rest_framework.status used in the except block; rename the query param variable
(e.g., status_param or status_filter) wherever you read it from
request.query_params.get("status") and wherever you filter the queryset
(currently using status in file_executions.filter(status=status)) so the except
block can still reference status.HTTP_500_INTERNAL_SERVER_ERROR; update all
references inside the get method (filtering, filters_applied dict, and any
logging) to the new name.
- Around line 468-471: The except block can raise a NameError because
update_data may not be a dict and file_execution_id may never be assigned; fix
by defensively extracting file_execution_id before the try (e.g., set
file_execution_id = update_data.get("file_execution_id") if
isinstance(update_data, dict) else None) and validate update_data is a dict at
the top of the loop (raise or record a typed error if not) so
update_data.get(...) can't raise AttributeError; apply the same defensive
extraction/validation in the status_updates loop in internal_views.py and in
FileExecutionBatchHashUpdateAPIView to ensure failed_updates.append(...) always
has a defined file_execution_id and the original exception is preserved.
- Around line 281-376: The loop currently wraps all items in one
transaction.atomic() so any per-item exception marks the whole transaction
unusable; change to use a per-item savepoint by moving the transaction.atomic()
inside the loop (i.e., for file_execution_data in file_executions: with
transaction.atomic(): try: ... except Exception as e: ...), so each iteration
gets its own atomic/savepoint and failures don’t break subsequent iterations;
apply the same change to the other batch handlers
FileExecutionBatchStatusUpdateAPIView and FileExecutionBatchHashUpdateAPIView
and ensure error handling still appends to failed_creations while continuing the
loop (references: WorkflowFileExecution.objects.get_or_create_file_execution,
file_execution/file_hash variables, and the three mentioned view classes).
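Django's nested transaction.atomic() maps onto SQL savepoints, so the per-item pattern can be demonstrated with sqlite3 directly (an analogue of the suggested fix, not the actual view code; table and column names are invented):

```python
import sqlite3

def insert_batch_with_savepoints(conn: sqlite3.Connection, rows) -> list:
    """Wrap each row in its own savepoint so one failing insert is
    rolled back alone and later iterations still succeed (mirrors
    moving transaction.atomic() inside the loop)."""
    failed = []
    cur = conn.cursor()
    for i, row in enumerate(rows):
        cur.execute(f"SAVEPOINT item_{i}")
        try:
            cur.execute("INSERT INTO file_executions (id, name) VALUES (?, ?)", row)
            cur.execute(f"RELEASE SAVEPOINT item_{i}")
        except sqlite3.Error as e:
            # Roll back only this item; the enclosing transaction stays usable.
            cur.execute(f"ROLLBACK TO SAVEPOINT item_{i}")
            cur.execute(f"RELEASE SAVEPOINT item_{i}")
            failed.append((row, str(e)))
    return failed
```

With one atomic block around the whole loop, the first IntegrityError would poison the transaction for every subsequent row; with per-item savepoints only the bad row is lost.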

---

Nitpick comments:
In `@backend/plugins/workflow_manager/workflow_v2/api_hub_usage_utils.py`:
- Around line 37-55: The except blocks around the API Hub usage tracking should
call logger.exception instead of logger.error so the traceback is captured;
locate the try/except that instantiates headers_cache and usage_tracker from
verticals_usage_plugin and replace the logger.error(...) calls in those except
handlers with logger.exception(...) (e.g. logger.exception(f"Failed to track API
hub usage for execution {workflow_execution_id}")) for each of the three
handlers that currently log the exception message, leaving the rest of the logic
(return False) intact.

In `@backend/prompt_studio/prompt_studio_core_v2/views.py`:
- Around line 129-156: Extract the duplicated HubSpot notification logic into a
single helper (e.g., _notify_hubspot) and replace the four near-identical
methods like _notify_hubspot_first_project with calls to that helper; the helper
should call get_plugin("hubspot"), import HubSpotEvent, instantiate
hubspot_plugin["service_class"](), and call service.update_contact(user=user,
events=[getattr(HubSpotEvent, event_name)], is_first_for_org=is_first_for_org)
inside a try/except that logs failures (matching the existing logger.warning
behavior). Locate and replace usages in prompt_studio_core_v2.views (methods:
_notify_hubspot_first_project and the other three similar methods) and the
duplicate in backend/api_v2/api_deployment_views.py, and consider placing the
helper on a shared mixin or utility module so both modules can call
self._notify_hubspot(request.user, "EVENT_NAME", is_first_flag).

In `@backend/workflow_manager/file_execution/internal_views.py`:
- Around line 150-163: Replace the redundant double-fetch of WorkflowExecution
by performing a single org-filtered get: use
filter_queryset_by_organization(WorkflowExecution.objects.filter(id=execution_id),
request, "workflow__organization").get() and assign that result to
workflow_execution (instead of first calling
WorkflowExecution.objects.get(id=execution_id)); apply the same change in
FileExecutionBatchCreateAPIView.post where the identical pattern appears so both
places use the org-filtered query result as the workflow_execution used later.
- Around line 264-265: The create() method and
FileExecutionBatchCreateAPIView.post() duplicate org-filtered WorkflowExecution
lookup and FileHash construction (plus is_api determination and file_path
fallback); refactor by extracting two helpers: one like
get_org_filtered_workflow_execution(workflow_execution_id, organization) that
encapsulates the WorkflowExecution query/filtering logic used in both create()
and FileExecutionBatchCreateAPIView.post(), and another like
build_file_hash_from_payload(payload, file_store_id, file_name, file_size) that
builds and returns the FileHash object (including is_api/file_path fallback
logic). Replace the duplicated blocks in create() and
FileExecutionBatchCreateAPIView.post() to call these helpers to remove the
~50-line duplication and ensure identical behavior in both places.

In `@docker/compose.debug.yaml`:
- Around line 87-103: Add a short explanatory comment above the
worker-file-processing-v2 service that documents why user: root is set for
debug-only usage (e.g., to allow debugpy to bind privileged sockets) so
reviewers know this is intentional and should not be copied into production
compose files; reference the service name worker-file-processing-v2 and the
user: root property and keep the comment next to the existing debugpy header
comment for clarity.

In `@docker/dockerfiles/frontend.Dockerfile`:
- Around line 23-25: The Dockerfile currently copies ${BUILD_CONTEXT_PATH}/ into
/app/ earlier and then redundantly re-COPYs generate-runtime-config.sh; remove
the explicit "COPY ${BUILD_CONTEXT_PATH}/generate-runtime-config.sh
/app/generate-runtime-config.sh" line to avoid duplication and keep the "RUN
chmod +x /app/generate-runtime-config.sh" (or alternatively remove both if
executable permission is already correct) so that generate-runtime-config.sh is
present from the initial COPY while still ensuring it is executable.

In `@docker/README.md`:
- Around line 107-113: Update the "Debugging Containers" section where the
docker compose debug command block (the VERSION=dev docker compose -f
docker-compose.yaml -f compose.override.yaml -f compose.debug.yaml watch
command) is shown to include a brief note that V2 worker debugging requires the
--profile workers-v2 flag; add one sentence after the command indicating
developers should append --profile workers-v2 when debugging V2 workers so they
don't miss that requirement.

In `@docker/sample.compose.override.yaml`:
- Around line 197-320: The develop.watch blocks for all V2 services
(worker-api-deployment-v2, worker-callback-v2, worker-file-processing-v2,
worker-general-v2, worker-notification-v2, worker-log-consumer-v2,
worker-scheduler-v2) are duplicated; factor them out by adding a YAML extension
field (e.g., x-worker-watch) that contains the develop.watch array and then
merge it into each service (use the YAML merge key reference like <<:
*worker-watch) to eliminate repetition, and add a short "# TODO: DRY: move
common watch config to x-worker-watch" comment above the extension for future
maintainers.

In `@frontend/src/components/log-in/Login.jsx`:
- Around line 22-34: handleLogin and handleSignup duplicate URL-construction
logic; extract a small helper like buildAuthUrl(path) that accepts the path
segment ("login" or "signup") and returns `${baseUrl}/api/v1/${path}` optionally
appending `?selectedProduct=${encodeURIComponent(selectedProduct)}` when
isValidProduct is true; then call buildAuthUrl("login") in handleLogin and
buildAuthUrl("signup") in handleSignup to remove duplication and ensure
selectedProduct is URL-encoded.
- Around line 18-20: Extract the inline product list into a named constant
(e.g., VALID_PRODUCTS) and use it in the isValidProduct check; replace
["unstract", "llm-whisperer"] with VALID_PRODUCTS in the Login.jsx where
selectedProduct and isValidProduct are defined so the list is centralized and
easily discoverable/extendable (optionally export VALID_PRODUCTS if other
modules will consume it).

In `@workers/shared/api/internal_client.py`:
- Line 436: Prefix unused lambda parameters with underscores to silence Ruff
ARG005: replace lambdas like "lambda self, workflow_id, organization_id=None:
str(workflow_id)" with "lambda _self, workflow_id, _organization_id=None:
str(workflow_id)" and apply the same pattern to the other two new key-extractor
lambdas and the three existing decorator lambdas that declare self and
organization_id but don't use them (use _self and _organization_id for the
unused parameters).

ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to Reviews > Disable Cache setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 9bde93f and 08736d6.

📒 Files selected for processing (22)
  • backend/account_v2/authentication_service.py
  • backend/account_v2/constants.py
  • backend/api_v2/api_deployment_views.py
  • backend/migrating/v2/unstract_migrations.py
  • backend/plugins/workflow_manager/workflow_v2/api_hub_usage_utils.py
  • backend/plugins/workflow_manager/workflow_v2/utils.py
  • backend/prompt_studio/prompt_studio_core_v2/views.py
  • backend/workflow_manager/endpoint_v2/destination.py
  • backend/workflow_manager/file_execution/internal_views.py
  • docker/README.md
  • docker/compose.debug.yaml
  • docker/docker-compose.yaml
  • docker/dockerfiles/backend.Dockerfile
  • docker/dockerfiles/frontend.Dockerfile
  • docker/sample.compose.override.yaml
  • frontend/src/components/error/GenericError/GenericError.jsx
  • frontend/src/components/log-in/Login.jsx
  • run-platform.sh
  • workers/shared/api/internal_client.py
  • workers/shared/cache/cache_decorator.py
  • workers/shared/cache/cache_keys.py
  • workers/shared/clients/base_client.py
💤 Files with no reviewable changes (1)
  • backend/account_v2/authentication_service.py

Comment thread backend/plugins/workflow_manager/workflow_v2/utils.py
Comment thread backend/prompt_studio/prompt_studio_core_v2/views.py Outdated
Comment thread backend/workflow_manager/endpoint_v2/destination.py
Comment thread docker/compose.debug.yaml
Comment thread docker/dockerfiles/frontend.Dockerfile
Comment thread docker/sample.compose.override.yaml
Comment thread workers/shared/api/internal_client.py
Comment thread workers/shared/clients/base_client.py
@hari-kuriakose hari-kuriakose self-assigned this Feb 24, 2026
@hari-kuriakose hari-kuriakose changed the title feat: Plugin integrations and enterprise component enhancements feat: Backend plugin for integrations Feb 24, 2026
@hari-kuriakose hari-kuriakose changed the title feat: Backend plugin for integrations feat: Add HubSpot integration plugin hooks for user journey tracking Feb 25, 2026
Contributor

@tahierhussain tahierhussain left a comment

LGTM

Contributor

@chandrasekharan-zipstack chandrasekharan-zipstack left a comment

NIT: HubSpot-related code could be moved into static functions in a separate util to keep the current file clean

Contributor

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
backend/prompt_studio/prompt_studio_core_v2/views.py (1)

448-450: ⚠️ Potential issue | 🟠 Major

Scope output/document counters to organization before deriving is_first_for_org.

Line 449 and Line 648 use global .objects.count(). This breaks org-level milestone detection in multi-tenant setups (false negatives/positives across orgs), especially since PromptStudioOutputManager/DocumentManager are not shown with org-scoped default managers.

Suggested fix pattern
- output_count_before = PromptStudioOutputManager.objects.count()
+ output_count_before = PromptStudioOutputManager.objects.filter(
+     # scope via org relation (example path; align with actual model fields)
+     tool_id__organization=UserContext.get_organization()
+ ).count()

- doc_count_before = DocumentManager.objects.count()
+ doc_count_before = DocumentManager.objects.filter(
+     # scope via org relation (example path; align with actual model fields)
+     tool__organization=UserContext.get_organization()
+ ).count()

Also applies to: 647-649

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/prompt_studio/prompt_studio_core_v2/views.py` around lines 448 - 450,
The global counts for PromptStudioOutputManager and DocumentManager must be
scoped to the current organization before deriving is_first_for_org; replace the
.objects.count() calls with organization-scoped queries (e.g.,
PromptStudioOutputManager.objects.filter(organization=org).count() and
DocumentManager.objects.filter(organization=org).count() or use the project/org
identifier available in the view) so both the pre-run output_count_before and
the later document/output checks only consider records for the current org;
update the code paths that compute is_first_for_org and any related checks
(references: PromptStudioOutputManager, DocumentManager, is_first_for_org) to
use the org-filtered counts.
🧹 Nitpick comments (2)
backend/api_v2/api_deployment_views.py (1)

342-344: Preserve traceback for HubSpot failures.

Line 344 logs only the exception message. Prefer logger.exception(...) (or exc_info=True) so integration failures are debuggable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/api_v2/api_deployment_views.py` around lines 342 - 344, The catch
block that currently calls logger.warning when notifying HubSpot fails should
preserve the traceback; replace the logger.warning call in the except Exception
as e block (the code that logs "Failed to notify HubSpot for API deployment")
with logger.exception(...) or logger.warning(..., exc_info=True) so the
exception traceback is recorded for debugging (i.e., update the logging call
that follows the except Exception as e to include exc_info or use
logger.exception).
backend/prompt_studio/prompt_studio_core_v2/views.py (1)

155-157: Use traceback logging for HubSpot hook failures.

Current warning logs drop stack traces, which makes production troubleshooting harder. Prefer logger.exception(...) for these catch blocks.

Also applies to: 489-491, 729-731, 823-825

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/prompt_studio/prompt_studio_core_v2/views.py` around lines 155 - 157,
Replace the bare logger.warning calls inside the HubSpot notification exception
handlers with logger.exception so the full stacktrace is recorded; specifically,
in the except blocks that currently do logger.warning(f"Failed to notify HubSpot
for project creation: {e}") and the similar warning calls later in the file (the
HubSpot hook catch blocks around the other occurrences), change them to
logger.exception("Failed to notify HubSpot for project creation", exc_info=True)
or simply logger.exception("Failed to notify HubSpot for project creation") to
ensure traceback is logged when functions handling HubSpot notifications (the
try/except blocks reporting "Failed to notify HubSpot...") catch exceptions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/api_v2/api_deployment_views.py`:
- Around line 296-298: The pre-create APIDeployment.objects.count() check is
race-prone; instead determine is_first_for_org inside a transaction by locking
the related Organization row (e.g., using transaction.atomic() +
Organization.objects.select_for_update().get(pk=org.pk)) and then check
APIDeployment.objects.filter(organization=org).exists() immediately before
create, or enforce a DB uniqueness sentinel and perform the create/catch
IntegrityError pattern; update the code paths that set is_first_for_org (the
logic around is_first_for_org and the pre-create count) to use this
transactional/locking approach so concurrent requests cannot both detect “first
deploy.”

In `@backend/prompt_studio/prompt_studio_core_v2/views.py`:
- Around line 125-127: The current "first-*" checks (e.g., in
_notify_hubspot_first_project and the other first-event callers referenced) rely
on plain count checks and can race; change them to atomic database-backed
guards: create or use a per-organization sentinel (boolean column or dedicated
FirstEvent table with a unique org_id + event_type constraint) and perform the
check-and-set inside a DB transaction (or use get_or_create/select_for_update)
so only one request can succeed in marking the first-event; then only emit the
HubSpot/first-event notification when the transaction successfully creates/sets
the sentinel. Ensure you update the code paths that call
_notify_hubspot_first_project (and the similar first-event checks at the other
locations) to use this transactional sentinel approach.


ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to Reviews > Disable Cache setting

Knowledge base: Disabled due to Reviews > Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 08736d6 and 88b5488.

📒 Files selected for processing (2)
  • backend/api_v2/api_deployment_views.py
  • backend/prompt_studio/prompt_studio_core_v2/views.py

PromptStudioOutputManager and DocumentManager lack
DefaultOrganizationManagerMixin, so .objects.count() was counting
across ALL organizations. Filter through the tool FK to CustomTool
(which is org-scoped) to get correct per-org counts.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Contributor

@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (1)
backend/prompt_studio/prompt_studio_core_v2/views.py (1)

144-145: ⚠️ Potential issue | 🟠 Major

First-event milestone checks are still race-prone.

Line 451/654/791 compute pre-counts and Line 483/727/821 derive is_first_for_org from them; concurrent requests can still emit duplicate “first” events. Please switch to an atomic sentinel (unique (organization, event_type) + get_or_create in a transaction) and notify only when creation succeeds.

Also applies to: 451-454, 483-484, 654-657, 727-728, 791-792, 821-822

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/prompt_studio/prompt_studio_core_v2/views.py` around lines 144 - 145,
Replace the race-prone pre-count pattern (e.g., the org_project_count =
CustomTool.objects.count() / is_first_for_org = org_project_count == 1 logic)
with an atomic sentinel record lookup/creation: introduce or reuse a unique
sentinel model keyed by (organization, event_type) and inside a transaction use
get_or_create (or select_for_update + create) to attempt creation for the
specific organization and event type; only emit the “first” notification when
get_or_create returns created=True (i.e., creation succeeded). Update each spot
that computes is_first_for_org (references to CustomTool counting and the
is_first_for_org variable) to instead call a helper like
ensure_first_event_sentinel(organization, event_type) that performs the
transactional get_or_create and returns whether it was created.
🧹 Nitpick comments (1)
backend/prompt_studio/prompt_studio_core_v2/views.py (1)

451-453: Use exists() instead of count() for zero/non-zero checks.

Line 451-453, Line 654-656, and Line 791-792 only need presence checks. exists() avoids full counts and is cheaper on large tables.

Suggested refactor
- output_count_before = PromptStudioOutputManager.objects.filter(
+ output_exists_before = PromptStudioOutputManager.objects.filter(
     tool_id__in=CustomTool.objects.values_list("tool_id", flat=True)
- ).count()
+ ).exists()

- self._notify_hubspot_first_prompt_run(request.user, output_count_before)
+ self._notify_hubspot_first_prompt_run(request.user, output_exists_before)
- is_first_for_org = output_count_before == 0
+ is_first_for_org = not output_exists_before

Also applies to: 654-656, 791-792

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/prompt_studio/prompt_studio_core_v2/views.py` around lines 451 - 453,
Replace the expensive .count() calls used only to test for presence with
.exists() on the QuerySets; specifically change the
PromptStudioOutputManager.objects.filter(...).count() checks that use
tool_id__in=CustomTool.objects.values_list("tool_id", flat=True) (and the two
other similar occurrences) to
PromptStudioOutputManager.objects.filter(...).exists(), and update any boolean
logic accordingly (e.g., replace count() > 0 with .exists() and count() == 0
with not .exists()) so the code performs an efficient existence check.

ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to Reviews > Disable Cache setting

Knowledge base: Disabled due to Reviews > Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 88b5488 and 1422b37.

📒 Files selected for processing (1)
  • backend/prompt_studio/prompt_studio_core_v2/views.py

gaya3-zipstack and others added 2 commits March 9, 2026 16:50
Move all _notify_hubspot_* methods from views into a shared
utils/hubspot_notify.py module with a single notify_hubspot_event()
function, reducing duplication across prompt_studio views and
api_deployment_views.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@vishnuszipstack
Contributor

Done. Extracted all HubSpot notification logic into a shared utils/hubspot_notify.py module with a single notify_hubspot_event() function. Removed the 5 duplicated _notify_hubspot_* private methods from views.py and api_deployment_views.py — each call site is now a one-liner. Net reduction: ~75 lines.

@github-actions
Contributor

github-actions Bot commented Mar 9, 2026

Test Results

Summary
  • Runner Tests: 11 passed, 0 failed (11 total)
  • SDK1 Tests: 63 passed, 0 failed (63 total)

Runner Tests - Full Report
| filepath | function | passed | SUBTOTAL |
| --- | --- | --- | --- |
| runner/src/unstract/runner/clients/test_docker.py | test_logs | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_cleanup | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_cleanup_skip | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_client_init | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_get_image_exists | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_get_image | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_get_container_run_config | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_get_container_run_config_without_mount | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_run_container | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_get_image_for_sidecar | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_sidecar_container | 1 | 1 |
| **TOTAL** | | **11** | **11** |
SDK1 Tests - Full Report
| filepath | function | passed | SUBTOTAL |
| --- | --- | --- | --- |
| tests/test_platform.py | TestPlatformHelperRetry.test_success_on_first_attempt | 2 | 2 |
| tests/test_platform.py | TestPlatformHelperRetry.test_retry_on_connection_error | 2 | 2 |
| tests/test_platform.py | TestPlatformHelperRetry.test_non_retryable_http_error | 1 | 1 |
| tests/test_platform.py | TestPlatformHelperRetry.test_retryable_http_errors | 3 | 3 |
| tests/test_platform.py | TestPlatformHelperRetry.test_post_method_retry | 1 | 1 |
| tests/test_platform.py | TestPlatformHelperRetry.test_retry_logging | 1 | 1 |
| tests/test_prompt.py | TestPromptToolRetry.test_success_on_first_attempt | 1 | 1 |
| tests/test_prompt.py | TestPromptToolRetry.test_retry_on_errors | 2 | 2 |
| tests/test_prompt.py | TestPromptToolRetry.test_wrapper_methods_retry | 4 | 4 |
| tests/utils/test_retry_utils.py | TestIsRetryableError.test_connection_error_is_retryable | 1 | 1 |
| tests/utils/test_retry_utils.py | TestIsRetryableError.test_timeout_is_retryable | 1 | 1 |
| tests/utils/test_retry_utils.py | TestIsRetryableError.test_http_error_retryable_status_codes | 3 | 3 |
| tests/utils/test_retry_utils.py | TestIsRetryableError.test_http_error_non_retryable_status_codes | 5 | 5 |
| tests/utils/test_retry_utils.py | TestIsRetryableError.test_http_error_without_response | 1 | 1 |
| tests/utils/test_retry_utils.py | TestIsRetryableError.test_os_error_retryable_errno | 5 | 5 |
| tests/utils/test_retry_utils.py | TestIsRetryableError.test_os_error_non_retryable_errno | 1 | 1 |
| tests/utils/test_retry_utils.py | TestIsRetryableError.test_other_exception_not_retryable | 1 | 1 |
| tests/utils/test_retry_utils.py | TestCalculateDelay.test_exponential_backoff_without_jitter | 1 | 1 |
| tests/utils/test_retry_utils.py | TestCalculateDelay.test_exponential_backoff_with_jitter | 1 | 1 |
| tests/utils/test_retry_utils.py | TestCalculateDelay.test_max_delay_cap | 1 | 1 |
| tests/utils/test_retry_utils.py | TestCalculateDelay.test_max_delay_cap_with_jitter | 1 | 1 |
| tests/utils/test_retry_utils.py | TestRetryWithExponentialBackoff.test_successful_call_first_attempt | 1 | 1 |
| tests/utils/test_retry_utils.py | TestRetryWithExponentialBackoff.test_retry_after_transient_failure | 1 | 1 |
| tests/utils/test_retry_utils.py | TestRetryWithExponentialBackoff.test_max_retries_exceeded | 1 | 1 |
| tests/utils/test_retry_utils.py | TestRetryWithExponentialBackoff.test_retry_with_custom_predicate | 1 | 1 |
| tests/utils/test_retry_utils.py | TestRetryWithExponentialBackoff.test_no_retry_with_predicate_false | 1 | 1 |
| tests/utils/test_retry_utils.py | TestRetryWithExponentialBackoff.test_exception_not_in_tuple_not_retried | 1 | 1 |
| tests/utils/test_retry_utils.py | TestCreateRetryDecorator.test_default_configuration | 1 | 1 |
| tests/utils/test_retry_utils.py | TestCreateRetryDecorator.test_environment_variable_configuration | 1 | 1 |
| tests/utils/test_retry_utils.py | TestCreateRetryDecorator.test_invalid_max_retries | 1 | 1 |
| tests/utils/test_retry_utils.py | TestCreateRetryDecorator.test_invalid_base_delay | 1 | 1 |
| tests/utils/test_retry_utils.py | TestCreateRetryDecorator.test_invalid_multiplier | 1 | 1 |
| tests/utils/test_retry_utils.py | TestCreateRetryDecorator.test_jitter_values | 2 | 2 |
| tests/utils/test_retry_utils.py | TestCreateRetryDecorator.test_custom_exceptions_only | 1 | 1 |
| tests/utils/test_retry_utils.py | TestCreateRetryDecorator.test_custom_predicate_only | 1 | 1 |
| tests/utils/test_retry_utils.py | TestCreateRetryDecorator.test_both_exceptions_and_predicate | 1 | 1 |
| tests/utils/test_retry_utils.py | TestCreateRetryDecorator.test_exceptions_match_but_predicate_false | 1 | 1 |
| tests/utils/test_retry_utils.py | TestPreconfiguredDecorators.test_retry_platform_service_call_exists | 1 | 1 |
| tests/utils/test_retry_utils.py | TestPreconfiguredDecorators.test_retry_prompt_service_call_exists | 1 | 1 |
| tests/utils/test_retry_utils.py | TestPreconfiguredDecorators.test_platform_service_decorator_retries_on_connection_error | 1 | 1 |
| tests/utils/test_retry_utils.py | TestPreconfiguredDecorators.test_prompt_service_decorator_retries_on_timeout | 1 | 1 |
| tests/utils/test_retry_utils.py | TestRetryLogging.test_warning_logged_on_retry | 1 | 1 |
| tests/utils/test_retry_utils.py | TestRetryLogging.test_info_logged_on_success_after_retry | 1 | 1 |
| tests/utils/test_retry_utils.py | TestRetryLogging.test_exception_logged_on_giving_up | 1 | 1 |
| **TOTAL** | | **63** | **63** |

@sonarqubecloud

sonarqubecloud Bot commented Mar 9, 2026

Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
backend/utils/hubspot_notify.py (1)

26-35: Consider validating event_name before getattr to provide clearer failure logs.

If an invalid event_name is passed (e.g., typo like "PROMPT_RUNS"), getattr(HubSpotEvent, event_name) raises AttributeError, which gets caught by the generic exception handler. The warning message will show the AttributeError but won't clearly indicate a developer mistake in the calling code.

A minor improvement would be to use getattr with a default and log a more specific message, or let it fail fast during development:

♻️ Suggested improvement (optional)
     try:
         from plugins.integrations.hubspot import HubSpotEvent

-        event = getattr(HubSpotEvent, event_name)
+        event = getattr(HubSpotEvent, event_name, None)
+        if event is None:
+            logger.warning(f"Unknown HubSpot event name: {event_name}")
+            return
         service = hubspot_plugin["service_class"]()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/utils/hubspot_notify.py` around lines 26 - 35, The code calls
getattr(HubSpotEvent, event_name) without validating event_name, so an invalid
name raises AttributeError and yields unclear logs; update the try-block to
first check whether hasattr(HubSpotEvent, event_name) (or use getattr with a
default) and if missing log a clear, specific error indicating the invalid
event_name and listing allowed events (e.g., from dir(HubSpotEvent)), before
returning/raising; if present, proceed to retrieve the attribute and call
hubspot_plugin["service_class"]().update_contact(...) as before.
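The suggested guard can be sketched standalone; `HubSpotEvent` below is a stand-in class for the plugin's event enum, and `resolve_event` is a hypothetical helper, not code from the PR:

```python
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger(__name__)

class HubSpotEvent:
    """Stand-in for the plugin's event enum (illustrative only)."""
    FIRST_PROJECT = "first_project"
    FIRST_PROMPT_RUN = "first_prompt_run"

def resolve_event(event_name):
    # getattr with a default turns a typo into a clear, loggable
    # condition instead of an AttributeError swallowed by the
    # generic exception handler downstream.
    event = getattr(HubSpotEvent, event_name, None)
    if event is None:
        allowed = [n for n in vars(HubSpotEvent) if n.isupper()]
        logger.warning(
            "Unknown HubSpot event %r; allowed events: %s", event_name, allowed
        )
    return event

print(resolve_event("FIRST_PROJECT"))  # first_project
print(resolve_event("PROMPT_RUNS"))   # None (warning logged)
```

Failing fast with a named list of allowed events makes a calling-code typo show up as a single unambiguous log line rather than a generic "Failed to notify HubSpot" warning.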

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: d005978a-f502-4b31-bdc3-5618e952b442

📥 Commits

Reviewing files that changed from the base of the PR and between 1422b37 and 3d20e11.

📒 Files selected for processing (3)
  • backend/api_v2/api_deployment_views.py
  • backend/prompt_studio/prompt_studio_core_v2/views.py
  • backend/utils/hubspot_notify.py

@gaya3-zipstack gaya3-zipstack merged commit 0f37b40 into main Mar 9, 2026
7 checks passed
@gaya3-zipstack gaya3-zipstack deleted the feat/plugin-integrations branch March 9, 2026 12:00