# Copier update: pulumi install fix #88
```diff
@@ -94,6 +94,21 @@ This phase is **not part of the regular TDD workflow** and must only be applied
 - Once sufficient understanding is achieved, all spike code is discarded, and normal TDD resumes starting from the **Red Phase**.
 - A Spike is justified only when it is impossible to define a meaningful failing test due to technical uncertainty or unknown system behavior.
+
+### If a New Test Passes Immediately
+
+If a newly written test passes without any implementation change, do not assume it is correct. Verify it actually exercises the intended behavior:
+
+1. Identify the implementation line most likely responsible for the pass
+2. Temporarily remove that line
+3. Run the **full test suite** (not just the new test)
+
+Then interpret the result:
+
+- **Only the new test fails** — the line was never driven by a prior test. This is accidental over-implementation: delete the line permanently and proceed to the green phase to reintroduce it properly.
+- **Other existing tests also fail** — the line was already legitimately required by prior work. The new test is valid regression coverage. Restore the line; the test is confirmed correct as written.
+
+In both cases, confirm the new test fails for the expected reason before proceeding (the right assertion, not a syntax or import error).
```
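The removal check described in the added section can be sketched with a toy pytest-style example. All names here (`clamp` and its tests) are hypothetical, invented purely to illustrate the two outcomes:

```python
# Hypothetical sketch of the "new test passes immediately" check.
# Suppose `clamp` already has a test for its upper bound, and a newly
# written lower-bound test passes without any implementation change.

def clamp(x, low, high):
    if x < low:        # the line suspected of accidental over-implementation
        return low
    return min(x, high)

def test_upper_bound():
    # Pre-existing test that drove the upper-bound behavior.
    assert clamp(15, 0, 10) == 10

def test_lower_bound():
    # New test that passed immediately, without a red phase.
    assert clamp(-5, 0, 10) == 0

# Temporarily deleting the `if x < low` branch and rerunning the suite:
# - only test_lower_bound fails  -> the branch was never test-driven
#   (accidental over-implementation); delete it and reintroduce it in
#   the green phase.
# - other tests also fail        -> the branch was already required;
#   restore it and keep the new test as regression coverage.
```

In this sketch, removing the lower-bound branch would break only the new test, so it falls into the first category.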
**Review comment on lines +101 to +110**

Keep template guidance aligned with targeted-first test execution. Line 103 currently pushes a full-suite run too early. To match the established workflow, validate the new test's red state in isolation first, then run the full suite for dependency/regression signal.

Suggested wording update:

```diff
-1. Identify the implementation line most likely responsible for the pass
-2. Temporarily remove that line
-3. Run the **full test suite** (not just the new test)
+1. Identify the implementation line most likely responsible for the pass
+2. Temporarily remove that line
+3. Run the **new test in isolation** and confirm it fails for the expected assertion reason
+4. Then run the **full test suite** to check whether prior tests also depend on that line
```

Based on learnings: "When iterating on a single test, run that test in isolation first... Only run the full suite once the target test behaves as expected."
```diff
+
+### General Information
+
+- Sometimes the test output shows that no tests have been run when a new test is failing due to a missing import or constructor. In such cases, allow the agent to create simple stubs, and ask whether they forgot to create a stub if they are stuck.
```
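A stub of this kind only needs to satisfy the failing import so the runner can collect the test and fail on the intended assertion rather than on a missing name. A minimal sketch, with a hypothetical `PaymentGateway` class standing in for whatever the new test imports:

```python
# Hypothetical stub: defines the class the new test imports, with no
# real behavior. The test suite can now run and turn red on the
# intended assertion instead of erroring on a missing import or
# constructor.

class PaymentGateway:
    def __init__(self, api_key=None):
        # Accept the constructor arguments the test uses, but do
        # nothing with them yet.
        self.api_key = api_key

    def charge(self, amount):
        # Real behavior is driven in later during the green phase.
        raise NotImplementedError("stub: implement via TDD green phase")
```

The stub deliberately raises `NotImplementedError` so the red phase fails loudly and for a traceable reason.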