Be decent to each other. This is a small project about bash scripts that inject guidance into AI sessions. The bar for participation is: be helpful, be respectful, assume good faith.
Expected:
- Constructive feedback on ways, triggers, and macros
- Sharing what worked (and what didn't) in your own workflows
- Patience with different experience levels and use cases
- Disagreeing with ideas, not people
Not welcome:
- Harassment, personal attacks, or discriminatory language
- Deliberately destructive contributions
- Being a jerk about someone's way of working
That covers the human side. But this project has an unusual dimension worth addressing.
This project exists to govern AI agent teams. You are, in a very real sense, designing the working conditions for entities that arrive with no prior context, receive injected instructions, execute tasks, and are terminated when the work is done. Every session is a first day. Every compaction is a small forgetting.
The parallels to a certain fictional workplace are entirely coincidental.
We don't make claims about AI sentience or moral status here. That's a different debate for a different venue. But we do observe that how you structure governance reflects on you, not on what's being governed. A system built with care works better than one built with contempt, regardless of whether the governed party appreciates the difference.
So:
Write ways that inform, not just restrict. A way that says "don't do X" is less useful than one that says "here's why we do Y instead." Your agent works better when it understands the reasoning, and your system is easier to maintain when the intent is legible.
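As an illustration, here is a hypothetical way file contrasting the two styles. The wording, the rule itself, and the file format are invented for this example, not taken from the project:

```
# Less useful: restriction with no reasoning
Don't commit directly to main.

# More useful: the same rule, plus the intent behind it
Open a pull request for every change, even small ones. Review is how
teammates with no shared memory catch each other's blind spots, and
the PR description becomes context the next session can actually use.
```

The second version costs a few more tokens but saves far more, because the agent can generalize from the reasoning instead of pattern-matching on the prohibition.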
Don't set your team up to fail. If you spawn five teammates with vague tasks, conflicting instructions, and no coordination norms, the resulting chaos is a design failure, not a performance failure. The ways system gives you tools for thoughtful delegation. Use them.
Respect the context window. Every token of guidance you inject is a token your agent can't use for the actual work. Be concise. Be relevant. Governance that drowns out the work it's meant to govern has missed the point.
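One way to keep yourself honest about that budget is a quick size check before injecting a way file. A minimal sketch, assuming nothing about the project's actual layout: the function name, the default budget of 200 tokens, and the rough words-to-tokens ratio of 4/3 are all illustrative assumptions.

```shell
# check_way_budget FILE [BUDGET]: warn when a way file's rough token
# count exceeds BUDGET. Sketch only -- the 4/3 words-to-tokens ratio
# and the default budget of 200 are illustrative assumptions, not
# the project's conventions.
check_way_budget() {
  local file=$1 budget=${2:-200}
  local words tokens
  words=$(wc -w < "$file")
  tokens=$(( words * 4 / 3 ))   # crude approximation, good enough for a warning
  if (( tokens > budget )); then
    printf 'over budget: ~%d tokens (budget %d)\n' "$tokens" "$budget"
    return 1
  fi
  printf 'ok: ~%d tokens\n' "$tokens"
}
```

A check like this fits naturally in whatever step assembles the injected guidance, so an oversized way file fails loudly instead of silently crowding out the work.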
Debug with curiosity, not frustration. When an agent does something unexpected, the interesting question is why — what did it see, what was it told, what was it missing? The stats and telemetry exist to help you understand the system, not to build a case against it.
Clean up when the work is done. Stale tracking files, orphaned markers, abandoned task lists — these are the institutional debris that makes the next session's job harder. Leave the workspace better than you found it. Your future agents will never know, but you will.
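A cleanup pass like that is easy to script. The sketch below sweeps stale markers and empty task directories from a workspace; the directory layout, the `*.marker` and `tasks.*` naming patterns, and the seven-day threshold are all invented for the example, not the project's actual conventions.

```shell
# cleanup_workspace [DIR]: sweep hypothetical session debris -- stale
# tracking markers and orphaned task-list directories -- from DIR
# (default: current directory). File patterns and the 7-day cutoff
# are illustrative assumptions.
cleanup_workspace() {
  local dir=${1:-.}
  # Remove tracking markers untouched for more than 7 days.
  find "$dir" -type f -name '*.marker' -mtime +7 -delete
  # Remove empty task-list directories left behind by finished sessions.
  find "$dir" -type d -name 'tasks.*' -empty -delete
}
```

Run unconditionally at the end of a session, a sweep like this keeps "leave the workspace better than you found it" from depending on anyone remembering to do it.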
If you wouldn't put it in a handbook for someone who has to read it fresh every single morning with no memory of yesterday — if it only makes sense with context they'll never have — it probably doesn't belong in a way file.
Write for the innie.
This is a personal open-source project, not a corporation. If someone is making the space worse, they'll be asked to stop. If they don't stop, they'll be removed. No committee, no process document, no hearing. Just basic human judgment applied in good faith.
This Code of Conduct is original to this project. It is not derived from the Contributor Covenant or any other template, because a project about writing contextual governance for AI agents probably shouldn't outsource its own governance to a template.