Draft
48 commits
de400bb
feat: add zero-PAT spawner for one-click CODA provisioning
dgokeeffe Apr 1, 2026
5b5422e
test: rewrite spawner gate tests to match zero-PAT M2M implementation
dgokeeffe Apr 6, 2026
fc0738c
fix: add model tier mappings and disable experimental betas in Claude…
dgokeeffe Apr 7, 2026
a051fa3
feat: update spawner UI to zero-PAT flow with admin email override
dgokeeffe Apr 7, 2026
79a603c
chore: add .syncignore to exclude vendor/.venv from workspace sync
dgokeeffe Apr 7, 2026
10b9e9d
fix: remove pyproject.toml/uv.lock from spawner to fix fresh deploy
dgokeeffe Apr 7, 2026
7dbb1bd
fix: retry on SP propagation delay for grants and owner resolution
dgokeeffe Apr 8, 2026
c4ba7ca
fix: align template app.yaml with upstream — plain gunicorn, drop gat…
dgokeeffe Apr 8, 2026
79f7089
fix: use PyPI pins instead of git URLs for cryptography and requests
dgokeeffe Apr 8, 2026
a627a76
fix: apply theme on page load to prevent white-on-white flash
dgokeeffe Apr 8, 2026
5693f85
feat: parallel provisioning — deploy multiple users without blocking
dgokeeffe Apr 8, 2026
2610305
fix: resolve mlflow stop hook import error and re-enable tracing
dgokeeffe Apr 8, 2026
5d8a2f2
fix: disable mlflow tracing by default, add gh credential helper
dgokeeffe Apr 8, 2026
f44027c
docs: note gh auth prerequisite for credential helper
dgokeeffe Apr 8, 2026
bc78e22
feat: fix SDK auth, add wsync command, tune PAT rotation and Claude s…
dgokeeffe Apr 8, 2026
4a9f0d8
fix: inject fresh token in content-filter proxy to survive PAT rotation
dgokeeffe Apr 8, 2026
cfa6f01
Merge remote-tracking branch 'origin/main' into feat/zero-pat-spawner
dgokeeffe Apr 8, 2026
1b35258
fix: normalize emails to lowercase for case-insensitive authorization
dgokeeffe Apr 9, 2026
409725b
fix: probe AI Gateway reachability before using auto-discovered URL
dgokeeffe Apr 9, 2026
c812b2b
fix: add exponential backoff and batch throttling to redeploy-all
dgokeeffe Apr 9, 2026
f621e1e
feat: add bulk deploy UI for pasting attendee email lists
dgokeeffe Apr 16, 2026
482a237
feat: embed coles-vibe-workshop project into spawned apps
dgokeeffe Apr 16, 2026
55bcce4
feat: add BDD skills (fe-bdd-tools) to embedded workshop project
dgokeeffe Apr 16, 2026
c0d2d8f
Merge remote-tracking branch 'origin/main' into feat/zero-pat-spawner
dgokeeffe Apr 16, 2026
715e2c3
fix: add concurrency throttling and retry logic to bulk provisioning
dgokeeffe Apr 16, 2026
57e989c
feat: default to Claude Opus 4.7 and harden Bash deny rules
Apr 17, 2026
7dbc9ed
feat: add Claude Code brain sync to workspace + session hooks
dgokeeffe Apr 17, 2026
04344da
feat: default to Explanatory output style in CODA
dgokeeffe Apr 17, 2026
db88b27
feat: add track-specific venv setup skill and requirements
Apr 17, 2026
0bb44d4
feat: add BDD testing skills from databricks-bdd-tools
Apr 17, 2026
e5a9a34
feat: add fast local test patterns for all tracks (no Spark/Java requ…
Apr 17, 2026
15f7575
feat: ship coda-essentials as a bundled plugin marketplace
dgokeeffe Apr 17, 2026
07d32e0
feat: add coda-databricks-skills plugin from latest ai-dev-kit
dgokeeffe Apr 17, 2026
789f052
feat: detect secure-egress + route docs to learn.microsoft.com
dgokeeffe Apr 17, 2026
f983ef6
feat: make deepwiki + exa MCPs opt-in instead of default-on
dgokeeffe Apr 17, 2026
31e7c10
fix: bump mlflow-skinny 3.10.1 -> 3.11.1 to match Apps runtime
dgokeeffe Apr 18, 2026
c0402a8
fix: pin mlflow-tracing to 3.11.1 to match runtime image
dgokeeffe Apr 18, 2026
888e0e2
fix(quiz-app): close stored-XSS via team name + add CSP
dgokeeffe Apr 18, 2026
a3b75d9
feat: add /cache-stats slash command to coda-essentials
dgokeeffe Apr 18, 2026
56d57b6
feat: enable MLflow tracing + bound the Stop-hook handler
dgokeeffe Apr 18, 2026
80ac04d
refactor: run MLflow Stop hook async with a 30s ceiling
dgokeeffe Apr 18, 2026
96937aa
fix(mlflow-hook): pipe Claude Code's hook-event JSON to the handler
dgokeeffe Apr 18, 2026
1002d3f
fix: write Claude Code plugin state files so bundled marketplace loads
dgokeeffe Apr 18, 2026
5532cdf
fix(plugins): stage plugins into ~/.claude/plugins/cache/ expected by…
dgokeeffe Apr 18, 2026
ca7be10
fix(plugins): also copy commands/agents to ~/.claude/ user-level
dgokeeffe Apr 18, 2026
8394ca1
bugfix: for 30 character app name limit
dgokeeffe Apr 19, 2026
7027539
feat: inject fork-wide directives and detect available editors
dgokeeffe Apr 19, 2026
24aec49
chore: add .databricksignore to exclude .venv and caches from sync
dgokeeffe Apr 19, 2026
105 changes: 105 additions & 0 deletions .claude/skills/bdd-features/SKILL.md
---
name: bdd-features
description: "This skill should be used when the user asks to \"write Gherkin\", \"create feature files\", \"generate BDD scenarios\", \"write acceptance tests in Gherkin\", \"create Behave features\", \"write Given When Then tests\", \"BDD test cases for my pipeline\", \"Gherkin for Unity Catalog\", or wants to translate requirements into Gherkin feature files for Databricks."
user-invocable: true
---

# BDD Features — Gherkin Feature File Generation

Generate well-structured Gherkin `.feature` files for Databricks workloads. Translate requirements, user stories, or existing code into behavior specifications using Given/When/Then syntax.

## When to use

- Translating requirements or user stories into Gherkin acceptance criteria
- Creating feature files for Databricks pipelines, catalog permissions, jobs, or Apps
- Writing regression tests in Gherkin for existing functionality
- Generating Scenario Outlines for data-driven testing

## Process

### 1. Identify the test subject

Determine what to test. Read the relevant code or ask the user:

- A Lakeflow SDP pipeline definition → pipeline behavior tests
- Unity Catalog grants/policies → permission verification tests
- A FastAPI Databricks App → API endpoint tests
- A notebook or job → execution and output validation tests
- SQL transformations → data quality and correctness tests

### 2. Write the feature file

Place feature files in the appropriate subdirectory under `features/`:

```
features/
├── catalog/permissions.feature
├── pipelines/events_pipeline.feature
├── apps/api_endpoints.feature
├── jobs/etl_notebook.feature
└── sql/data_quality.feature
```

**Structure every feature file with:**

1. **Tags** — `@domain`, `@smoke`/`@regression`/`@integration`, optional `@slow` or `@wip`
2. **Feature header** — name + As a / I want / So that narrative
3. **Background** — shared Given steps (workspace connection, test schema)
4. **Scenarios** — one behavior per scenario, descriptive names
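
Put together, a minimal skeleton following this structure might look like the following (the feature name, narrative, and step wordings are illustrative, not prescribed):

```gherkin
@catalog @smoke
Feature: Table permissions
  As a data platform admin
  I want grants on tables to be enforced
  So that only authorized groups can read data

  Background:
    Given I am connected to the workspace
    And a test schema exists

  Scenario: Grant SELECT to a reader group
    When I grant SELECT on "customers" to group "readers"
    Then the group "readers" should have SELECT permission
```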

Refer to `references/gherkin-patterns.md` for Databricks-specific Gherkin patterns covering:
- Pipeline lifecycle (full refresh, incremental, failure handling)
- Unity Catalog grants, column masks, row filters
- App endpoint testing with SSO headers
- Job/notebook execution and output validation
- SQL data quality assertions
- Scenario Outlines for parameterized testing

### 3. Gherkin writing principles

**Declarative, not imperative.** Describe *what* the system should do, not *how* to click buttons:

```gherkin
# Good — declarative
When I grant SELECT on "catalog.schema.table" to group "readers"
Then the group "readers" should have SELECT permission

# Bad — imperative
When I open the Catalog Explorer
And I click on the table "catalog.schema.table"
And I click "Permissions"
And I click "Grant"
And I select "SELECT"
And I type "readers" in the group field
And I click "Save"
```

**One behavior per scenario.** If a scenario tests two independent things, split it.

**Use Backgrounds for shared setup.** Avoid repeating connection/schema steps across scenarios.

**Scenario Outlines for data variations.** When the same behavior is tested with different inputs, use Examples tables instead of duplicating scenarios.
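
For example, a single outline can replace several near-identical scenarios (an illustrative sketch):

```gherkin
Scenario Outline: Grant <privilege> to <group>
  When I grant <privilege> on "customers" to group "<group>"
  Then the group "<group>" should have <privilege> permission

  Examples:
    | privilege | group   |
    | SELECT    | readers |
    | MODIFY    | writers |
```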

**Tag strategically:**
- `@smoke` — fast, critical-path tests (< 30 seconds each)
- `@regression` — thorough coverage (minutes)
- `@integration` — needs live workspace (skip in unit test CI)
- `@slow` — pipeline tests, job executions (> 2 minutes)

**CRITICAL — Curly braces break step matching.** Behave uses the `parse` library for step matching. `{anything}` in feature file text is interpreted as a capture group, not a literal. Never use `{test_schema}.table_name` in feature files — it will fail to match step definitions. Instead, use short table names (`"customers"`) and resolve the schema in step code.
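
To illustrate why braces are special: parse-style matching turns each `{placeholder}` in a step *pattern* into a capture group that is matched against the feature text. A simplified stdlib sketch of that behavior (this is not behave's actual implementation):

```python
import re

def parse_style_to_regex(step_pattern: str) -> re.Pattern:
    """Roughly mimic how a parse-style matcher treats {placeholders}.

    Simplified stdlib sketch only -- behave's real matcher uses the
    `parse` library with typed conversions.
    """
    escaped = re.escape(step_pattern)
    # re.escape turns {name} into \{name\}; swap each back into a named group
    regex = re.sub(r"\\\{(\w+)\\\}", r"(?P<\1>.+?)", escaped)
    return re.compile(regex + r"$")

pattern = parse_style_to_regex('I grant SELECT on "{table}" to group "{group}"')
match = pattern.match('I grant SELECT on "main.sales.orders" to group "readers"')
assert match is not None
assert match.group("table") == "main.sales.orders"
```

Because braces are always treated as placeholder syntax rather than literal text, keep them out of feature files entirely and resolve schema prefixes inside step code.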

**Trailing colons matter.** When a step has an attached data table or docstring, the `:` at the end of the Gherkin line IS part of the step text. The step pattern must include it: `@given('a table "{name}" with data:')` — not `with data` (no colon).
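
For instance, a feature line carrying a data table ends with a colon, and the matching pattern must include it (the step wording here is illustrative):

```gherkin
Given a table "customers" with data:
  | id | region |
  | 1  | AU     |
  | 2  | NZ     |
```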

### 4. Validate step coverage

After writing features, check that step definitions exist for all steps:

```bash
uv run behave --dry-run
```

Any undefined steps will be reported with suggested snippets. Hand those to the `bdd-steps` skill for implementation.

## Additional resources

- **`references/gherkin-patterns.md`** — Complete Databricks Gherkin pattern library with examples for every domain