Replies: 4 comments
That policy looks good!
The accountability principle in the Icechunk policy is the right framing. The problem isn't AI use; it's AI use without structured intent. When someone prompts "add a feature to zarr-python", there's no constraints block, no reference to your coding conventions, no output-format expectations. The LLM optimizes for the literal task and produces code that technically works but doesn't fit the project. Accountability only helps after the fact. The deeper fix is giving contributors prompts structured around your actual requirements, so that the prompt carries an explicit constraints block up front. I've been building flompt for this: it decomposes tasks into typed blocks (objective, constraints, output_format) and compiles them to Claude-optimized XML, which makes it easy to bake project-specific constraints into AI-assisted contributions upfront. Open source: github.com/Nyrok/flompt
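To make the typed-block idea concrete, here is a minimal sketch of decomposing a prompt into objective/constraints/output_format blocks and rendering them as XML. This is not flompt's actual API; the `Prompt` class and its fields are hypothetical, invented here purely for illustration.

```python
# Hypothetical sketch, NOT flompt's real API: represent a task prompt as
# typed blocks and compile it to an XML string an LLM can be given.
from dataclasses import dataclass, field
from xml.etree.ElementTree import Element, SubElement, tostring


@dataclass
class Prompt:
    objective: str
    constraints: list[str] = field(default_factory=list)
    output_format: str = ""

    def to_xml(self) -> str:
        # Each typed block becomes a named XML element, so project-specific
        # constraints travel with the task instead of being implied.
        root = Element("prompt")
        SubElement(root, "objective").text = self.objective
        block = SubElement(root, "constraints")
        for c in self.constraints:
            SubElement(block, "constraint").text = c
        SubElement(root, "output_format").text = self.output_format
        return tostring(root, encoding="unicode")


p = Prompt(
    objective="Add a feature to zarr-python",
    constraints=[
        "Follow the project's coding conventions",
        "Include tests for new behavior",
    ],
    output_format="A minimal diff with a short rationale",
)
print(p.to_xml())
```

The point of the structure is that a reviewer (or a policy checker) can verify the constraints block exists before the prompt is ever run, rather than auditing the generated code afterward.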
I think this is good. I would suggest adding a paragraph along these lines: PRs that cannot be reviewed in reasonable time with reasonable effort will be closed, regardless of their potential usefulness and correctness. Use AI tools not only to develop code and fix issues but also to prepare better PRs that can be reviewed more efficiently.
I like the policy. It more or less mirrors the one we adopted for napari, which is based on Zulip's. I also want to point folks to what I've come to call "Melissa's List" of AI contribution policies. Lots of interesting material there as communities figure this stuff out: https://github.com/melissawm/open-source-ai-contribution-policies
AI coding tools are increasingly common in open source development. I'd like us to converge on a shared policy for zarr-python so we can handle AI-assisted contributions consistently — from core devs and outside contributors alike.
The Icechunk project recently adopted a policy that could serve as a starting point:
https://github.com/earth-mover/icechunk/blob/main/icechunk-python/docs/docs/ai-policy.md
The core principle: AI tools are welcome, but you are accountable for every change you submit — you must understand it, be able to explain it, and have fully reviewed it.
Some questions for discussion:
I'm personally in favor of embracing these tools with clear guardrails. The primary concern is around maintainer review capacity. Would love to hear other perspectives, especially from @zarr-developers/core-devs.