```yaml
version: "1.0"
steps:
  security:
    type: ai
    schema: code-review
    prompt: "Identify security vulnerabilities in changed files"
```

Fast local pre-commit hook (Husky):

```bash
npx husky add .husky/pre-commit "npx -y @probelabs/visor@latest --tags local,fast --output table || exit 1"
```

Minimal chat loop (CLI/SDK):
```yaml
version: "1.0"
checks:
  ask:
    type: human-input
    group: chat
    prompt: |
      Please type your message.
  reply:
    type: ai
    group: chat
    depends_on: ask
    ai:
      disableTools: true
      allowedTools: []
    system_prompt: "You are a general assistant; follow user instructions."
    prompt: |
      You are a concise, friendly assistant.
      Conversation so far (oldest → newest):
      {% assign history = '' | chat_history: 'ask', 'reply' %}
      {% for m in history %}
      {{ m.role | capitalize }}: {{ m.text }}
      {% endfor %}
      Latest user message:
      {{ outputs['ask'].text }}
      Reply naturally. Keep it short (1–2 sentences).
    guarantee: "(output?.text ?? '').length > 0"
    on_success:
      goto: ask
```

Notes:

- `ask` (`human-input`) produces `{ text, ts }` by default.
- `reply` (`ai`) responds and loops back to `ask`.
- `chat_history('ask', 'reply')` merges both histories by timestamp with roles: `type: human-input` → `role: "user"`, `type: ai` → `role: "assistant"`.
Slack chat using the same pattern:

```yaml
version: "1.0"
slack:
  version: "v1"
  mentions: all
  threads: required
frontends:
  - name: slack
    config:
      summary:
        enabled: false
checks:
  ask:
    type: human-input
    group: chat
    prompt: |
      Please type your message. (Posted only when the workflow is waiting.)
  reply:
    type: ai
    group: chat
    depends_on: ask
    ai:
      disableTools: true
      allowedTools: []
      # For chat-style Slack flows you can optionally turn off
      # automatic PR/issue + Slack XML context and rely solely on
      # chat_history + conversation objects:
      # skip_transport_context: true
    prompt: |
      You are a concise, friendly assistant.
      Conversation so far (oldest → newest):
      {% assign history = '' | chat_history: 'ask', 'reply' %}
      {% for m in history %}
      {{ m.role | capitalize }}: {{ m.text }}
      {% endfor %}
      Latest user message:
      {{ outputs['ask'].text }}
      Reply naturally. Keep it short (1–2 sentences).
    guarantee: "(output?.text ?? '').length > 0"
    on_success:
      goto: ask
```

Runtime behavior:
- First Slack message in a thread:
  - Treated as `ask` input; `reply` posts into the same thread.
  - Engine loops to `ask`, posts a prompt, and saves a snapshot.
- Next Slack message in the same thread:
  - Resumes from the snapshot; `ask` consumes the new message.
  - `reply` posts a new answer and loops again.
Accessing normalized conversation context in prompts:

```liquid
{% if conversation %}
Transport: {{ conversation.transport }} {# 'slack', 'github', ... #}
Thread: {{ conversation.thread.id }}
{% for m in conversation.messages %}
{{ m.user }} ({{ m.role }}): {{ m.text }}
{% endfor %}
{% endif %}
```

- Under Slack, `conversation` and `slack.conversation` are the same normalized object.
- Under GitHub (PR/issue), `conversation` is built from the body + comment history using the same `{ role, user, text, timestamp }` structure.
Customizing chat_history (roles, text, limits):

```liquid
{% assign history = '' |
  chat_history:
    'ask',
    'clarify',
    'reply',
    direction: 'asc',
    limit: 50,
    text: {
      default_field: 'text',
      by_step: {
        'summarize': 'summary.text'
      }
    },
    roles: {
      by_step: {
        'summarize': 'system'
      }
    },
    role_map: 'ask=user,reply=assistant'
%}
{% for m in history %}
[{{ m.step }}][{{ m.role }}] {{ m.text }}
{% endfor %}
```

Quick reference:

- `direction: 'asc' | 'desc'`, `limit: N`
- `text.default_field`, `text.by_step[step]`
- `roles.by_step[step]`, `roles.by_type[type]`, `roles.default`
- `role_map: 'step=role,other=role'` as a compact override
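When only the roles need overriding, the compact `role_map` form alone is often enough; a minimal sketch using the step names from the earlier examples:

```liquid
{% assign history = '' | chat_history: 'ask', 'reply', limit: 10, role_map: 'ask=user,reply=assistant' %}
{% for m in history %}
{{ m.role }}: {{ m.text }}
{% endfor %}
```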
See also:

- human-input-provider.md
- liquid-templates.md (Chat History Helper)
- output-history.md
- examples/slack-simple-chat.yaml

- Close the loop: leaf steps use `on_success: goto: <entry-step>` to end the workflow and return to a single top-level `human-input` step. Each new event (Slack message, webhook, CLI run) starts a fresh execution.
- Inner loop: add a local `human-input` step and route inside a sub-flow:
  - Example shape: `router → section-confirm → section-answer → section-confirm`.
  - Use a control field (e.g. `output.done === true`) in `transitions` to exit the section back to the top-level entry step.
- This pattern is transport-agnostic and works for Slack, GitHub, HTTP workflows, etc.
- See `examples/slack-simple-chat.yaml` for a concrete implementation of both patterns.
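The inner-loop shape can be sketched as config; the step names and the `done` control flag are illustrative, and the top-level entry step is assumed to be called `ask`:

```yaml
section-confirm:
  type: human-input
  group: section

section-answer:
  type: ai
  group: section
  depends_on: section-confirm
  on_success:
    transitions:
      - when: "output?.done === true"
        to: ask              # exit the section back to the top-level entry step
      - when: "output?.done !== true"
        to: section-confirm  # keep looping inside the section
```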
- Prefer `on_success.transitions` / `on_finish.transitions` for branching:

  ```yaml
  on_success:
    transitions:
      - when: "output && output.intent === 'chat'"
        to: chat-answer
      - when: "output && output.intent === 'project_help'"
        to: project-intent
  ```

- Reserve `goto_js` / `run_js` for legacy or very dynamic use cases.
- More details: fault-management-and-contracts.md, loop-routing-refactor.md.
- Pattern A — central router + transitions (explicit routing):
  - Use a single "router" step that sets control fields (e.g. `output.intent`, `output.kind`).
  - Declare all branching in one place via `on_success.transitions` on the router:

    ```yaml
    router:
      type: ai
      on_success:
        transitions:
          - when: "output.intent === 'chat'"
            to: chat-answer
          - when: "output.intent === 'status'"
            to: status-answer
    chat-answer:
      depends_on: [router]
      if: "outputs['router']?.intent === 'chat'"
    status-answer:
      depends_on: [router]
      if: "outputs['router']?.intent === 'status'"
    ```

  - Good when you want a single, centralized view of routing logic. Use `if` on branches for readability and to skip branches cleanly; reserve `assume` for hard dependency checks only.

- Pattern B — distributed routing via `depends_on` + `if`:
  - Omit transitions entirely and let each branch decide whether it should run:

    ```yaml
    router:
      type: ai
      # no on_success.transitions
    chat-answer:
      depends_on: [router]
      if: "outputs['router']?.intent === 'chat'"
    status-answer:
      depends_on: [router]
      if: "outputs['router']?.intent === 'status'"
    ```

  - The DAG (`depends_on`) defines possible flows; `if` conditions select the active branch(es) per run.
  - This works well when routing is simple or when you prefer fully local branch declarations over a central router table.
- Apply to any workflow, not just chat:
  - `external` – step changes external state:
    - Examples: GitHub comments/labels, HTTP POST/PUT/PATCH/DELETE, ticket creation, updating CI/CD or incident systems, filesystem writes in a shared location.
    - If someone can look elsewhere and see a change after this step, it's usually `external`.
  - `internal` – step changes the workflow's control plane:
    - Examples: forEach parents that fan out work; steps with `on_*` transitions/`goto` that decide what runs next; script/memory steps that set flags used by `if`/`assume`/`guarantee`.
    - If it mostly "steers" the run (not user-facing output), treat it as `internal`.
  - `policy` – step enforces org or safety rules:
    - Examples: permission checks (who may deploy/label), change windows, compliance checks (branches, commit format, DCO/CLA).
    - Often used to gate `external` steps (e.g. only label when policy passes).
  - `info` – read-only / non-critical:
    - Examples: summaries, hints, dashboards, advisory AI steps that do not gate other critical steps and do not mutate anything directly.
- For `internal`/`external` steps, group fields in this order:

  ```yaml
  some-step:
    type: ai   # or script / command / ...
    group: ...
    depends_on: [...]
    criticality: internal        # or external / policy / info
    assume:
      - "upstream condition"     # never reference this step's own output here
    guarantee: "output?.field != null"   # assertions about this step's output
    schema:                      # JSON Schema when output is structured
      ...
  ```

- Use `assume` for preconditions about upstream state (memory, env, `outputs[...]`).
- Use `guarantee` for postconditions about this step's own output (shape, control flags, size caps).
- For `info` steps, contracts are recommended but optional; keep `assume` + `guarantee` adjacent when present.
- More details: criticality-modes.md, fault-management-and-contracts.md.
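As an illustration of the policy-gating point above, a sketch with hypothetical step names (`policy-check`, `apply-label`); field ordering follows the convention just described:

```yaml
policy-check:
  type: script
  group: triage
  criticality: policy
  guarantee: "typeof output?.allowed === 'boolean'"

apply-label:
  type: github
  group: triage
  depends_on: [policy-check]
  criticality: external
  if: "outputs['policy-check']?.allowed === true"
```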
- For structured outputs (routers, script integrations, control signals), prefer real JSON Schema:

  ```yaml
  router-step:
    schema:
      type: object
      properties:
        intent:
          type: string
          enum: [chat, summarize, escalate]
        target:
          type: string
      required: [intent]
  ```

- For text responses, it can still be useful to wrap in an object:

  ```yaml
  answer:
    schema:
      type: object
      properties:
        text: { type: string }
      required: [text]
    guarantee: "(output?.text ?? '').length > 0"
  ```

- Use `schema: plain` only when output shape is genuinely unconstrained.
Tip: When you define a JSON Schema, you generally do not need to tell the model “respond only as JSON”; describe the semantics in the prompt and let the renderer/schema enforce shape.
- Prefer clear, concise expressions:
  - `outputs['router']?.intent === 'chat'`
  - `!!outputs['status-fetch']?.project`
  - `output?.done === true`
- Avoid noisy fallbacks like `(outputs['x']?.kind ?? '') === 'status'` when `outputs['x']?.kind === 'status'` is equivalent.
- These conventions apply uniformly to any provider (`ai`, `command`, `script`, `github`, `http_client`, etc.).
When using `type: command` steps, avoid external tool dependencies like `jq`, `yq`, `python`, etc.:

- They may not be installed in all environments (GitHub Actions, Docker, CI)
- Use `transform_js` to parse and transform output instead
- Keep shell commands simple: `grep`, `sed`, `awk`, `sort`, `head` are universally available
```yaml
# Bad - requires jq
extract-data:
  type: command
  exec: |
    echo "$TEXT" | grep -oE '[A-Z]+-[0-9]+' | jq -R -s 'split("\n")'
  parseJson: true

# Good - use transform_js for parsing
extract-data:
  type: command
  exec: |
    echo "$TEXT" | grep -oE '[A-Z]+-[0-9]+' | sort -u
  transform_js: |
    const lines = (output || '').trim().split('\n').filter(Boolean);
    return { data: lines, count: lines.length };
```

Prefer line-separated output over JSON from shell:

- Simple to parse with `transform_js`
- No need for `parseJson: true`
- More robust across different shells/environments

Use `transform_js` for structured output:

- The sandbox provides `output` (the command's stdout as a string)
- Return an object with the fields you need
- Works consistently across all environments
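The transform shape above can be exercised as plain JavaScript outside the sandbox; here the `output` variable is a stand-in for the command's stdout, and the sample data is illustrative:

```javascript
// Stand-in for the command stdout that the sandbox exposes as `output`.
const output = "PROJ-123\nDEV-456\nPROJ-123\n";

// Same logic as a transform_js body: split lines, drop empties, dedupe,
// and return a structured object instead of parsing JSON in the shell.
const lines = (output || '').trim().split('\n').filter(Boolean);
const unique = [...new Set(lines)];
const result = { data: unique, count: unique.length };

console.log(JSON.stringify(result)); // {"data":["PROJ-123","DEV-456"],"count":2}
```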
The `--no-mocks` flag runs your test cases with real providers instead of injecting mock responses. This is essential for:

- Debugging integration issues – see actual API responses and errors
- Capturing realistic mock data – get real output to copy into your test cases
- Validating credentials – verify environment variables are set correctly
- Developing new workflows – build tests incrementally with real data

```bash
# Run all test cases with real providers
visor test --config my-workflow.yaml --no-mocks

# Run a specific test case with real providers
visor test --config my-workflow.yaml --no-mocks --only "my-test-case"
```

When running with `--no-mocks`, Visor captures each step's output and prints it as YAML you can copy directly into your test case:
```text
🔴 NO-MOCKS MODE: Running with real providers (no mock injection)
   Step outputs will be captured and printed as suggested mocks

... test execution ...

📋 Suggested mocks (copy to your test case):

mocks:
  extract-keys:
    data:
      - PROJ-123
      - DEV-456
    count: 2
  fetch-issues:
    data:
      - key: PROJ-123
        summary: Fix authentication bug
        status: In Progress
```

Copy the YAML under `mocks:` into your test case's `mocks:` section.
1. Start with a minimal test case (no mocks):

   ```yaml
   tests:
     cases:
       - name: my-new-test
         event: manual
         fixture: local.minimal
         workflow_input:
           text: "Fix bug PROJ-123"
   ```

2. Run with `--no-mocks` to capture real outputs:

   ```bash
   visor test --config workflow.yaml --no-mocks --only "my-new-test"
   ```

3. Copy the suggested mocks into your test case:

   ```yaml
   tests:
     cases:
       - name: my-new-test
         event: manual
         fixture: local.minimal
         workflow_input:
           text: "Fix bug PROJ-123"
         mocks:
           extract-keys:
             data: ["PROJ-123"]
             count: 1
           # ... rest of captured mocks
   ```

4. Add assertions based on the real data:

   ```yaml
   expect:
     workflow_output:
       - path: issue_count
         equals: 1
   ```

5. Run normally to verify mocks work:

   ```bash
   visor test --config workflow.yaml --only "my-new-test"
   ```
When a test fails with mocks, use `--no-mocks` to see what's actually happening:

```bash
# See real API responses and errors
visor test --config workflow.yaml --no-mocks --only "failing-test"

# Common issues revealed:
# - Missing or expired credentials
# - API endpoint changes
# - Unexpected response formats
# - Network/timeout issues
```

The real error messages and responses help identify whether the issue is with your mocks or the actual integration.
- docs/NPM_USAGE.md – CLI usage and flags
- docs/GITHUB_CHECKS.md – Checks, outputs, and workflow integration
- `examples/` – MCP, Jira, and advanced configs