text/3936-contribution-policy.md (46 additions, 0 deletions)
Contributor:

Overall, this policy disappoints me. Not because the general idea isn't good, but because it feels… spineless. It's trying to generalise something that can't easily be generalised.

Compare to an existing moderation team policy which has also been accepted: https://github.com/rust-lang/moderation-team/blob/main/policies/spam.md/#spam-policy

The reason I label this as spineless is that it seems desperate to classify "slop" contributions under a label that isn't descriptive enough: "low effort."

Here's the reality: LLM contributions are uniquely positioned as something that is resource-intensive, requires minimal input from a contributor, and creates maximum work for maintainers. They cannot be generalised without losing specificity and should not be.

Overall, the team has had an overwhelmingly negative response toward LLMs and despite this, a few extremely LLM-positive folks have been resisting the urge to create a proper LLM policy, with the fear that we might reach too far. I think that choosing to just label this as "anti-low-effort" is doing a massive disservice to those of us who have already spoken in favour of a stricter LLM policy.

I'd been holding off on working more on this, but the presence of this PR seems to be forcing the issue a little bit. We already have moderation policies to stem the flow of spam, and so it seems more apt to hold off in favour of a stricter, more robust policy.

Right now, ironically, the "no low-effort contributions" policy seems to be, itself, a low-effort contribution. It doesn't follow the RFC template at all and has a number of paragraphs that can almost be counted on a single hand. Given all that folks have written on existing "no slop" contribution policies, it seems to ignore those entirely in favour of a spineless policy.

Additionally, the desire to close the discussion as "too heated" even before a discussion has begun, while also pushing to vote on it immediately, is incredibly telling. It gives the impression that you don't trust the community at all and are expecting backlash… from whom? I am glad you chose not to close this, but am very disappointed.

I have more specific comments I'll add on specific points.

Contributor:

Also putting this here for lack of a better place: I've opened a thread with some polls on Zulip to get a basic "vibe check" on how people feel about various AI policy implementations, and whether they feel those should apply. This is purely to gauge what kinds of policies people would be willing to accept, not to suggest what should be done, and to get a sense of what the shape of an ultimate policy might be. Although Zulip doesn't let you make votes private, I figure I should explicitly add that this thread is not a place to discuss anyone's views on the matter; it's purely for getting an aggregate view of the community's opinion specifically when it comes to policy.

#general > Vibecoding vibecheck

There is also the already-existing thread on project perspectives on AI, which has been orthogonal to any discussion of policy: #council > Project perspectives on AI

Also worth adding that, although the poll explicitly includes a policy on slop for completeness, it's unlikely that the final policy will allow "slop contributions" in any form, considering how much work so far has been done to quell spam against the rust-lang repository and how the community consensus, even among the most diehard LLM fans, has been that you shouldn't just spam the repo with changes that you haven't reviewed.

Contributor Author:

I'd been holding off on working more on this, but the presence of this PR seems to be forcing the issue a little bit. We already have moderation policies to stem the flow of spam, and so it seems more apt to hold off in favour of a stricter, more robust policy.

It's a reasonable point, I think, that maybe we should take it more slowly. Maybe it's OK to leave things, for now, to the moderators who "know it when they see it". Maybe we don't need the council to adopt a policy immediately. In other places I've suggested similar things may be true.

On the other hand, it's also possible that adopting some policy may help us in avoiding or moving on from the kind of upheavals that happened over the weekend. Reasonable people have made that point too.

This proposal is acting on that second theory: that adopting some policy -- that encodes the intersection of the principles we share -- might help us to move forward.

(As also mentioned in #3936 (comment).)

The reason I label this as spineless is that it seems desperate to classify "slop" contributions under a label that isn't descriptive enough: "low effort."

While I think your characterization has unnecessary bite, it's true that my goal in drafting this policy was to write down the principles on which, I believe, we already agree. The idea is that, by starting with that intersection and writing down in plain language what we want and don't want -- things we may have taken for granted before -- we might be able to move forward.

You've made a number of harsh criticisms -- in this subthread and others -- but I'm curious. Given that there is a wide spectrum of views on this, and given that there are many practical considerations (see, e.g., the various Zulip threads where these have been enumerated), how do you see this ending? You would propose to start, I imagine, with a less "spineless" policy proposal, but many of the options for that would struggle to gain consensus. So what do we do then? Do you think there's not some risk in not adopting some policy now, given what occurred over the weekend?

On this:

Right now, ironically, the "no low-effort contributions" policy seems to be, itself, a low-effort contribution. It doesn't follow the RFC template at all and has a number of paragraphs that can almost be counted on a single hand. Given all that folks have written on existing "no slop" contribution policies, it seems to ignore those entirely in favour of a spineless policy.

Additionally, the desire to close the discussion as "too heated" even before a discussion has begun, while also pushing to vote on it immediately, is incredibly telling. It gives the impression that you don't trust the community at all and are expecting backlash… from whom? I am glad you chose not to close this, but am very disappointed.

(See also #3936 (comment).)

I'd encourage you again to reflect on the request at the top of this thread:

Important

This is a difficult issue. Remember that we're all colleagues trying to do the best for Rust and that we all need to continue to work together. Please keep in mind that people have many different experiences and earnestly hold many different views. Please focus on assuming good intent, being curious, and acting compassionately.

I didn't lock the PR (see #3936 (comment)), as had been proposed (and people had proposed even more drastic measures), because I do want to have faith in the ability of our community to have a thoughtful, curious, and compassionate discussion. But we each must be careful -- we must demonstrate this ourselves -- or there is real risk that this faith will have proved misplaced.

Contributor (@clarfonthey, Mar 26, 2026):

Travis, I'm going to be honest: you repeatedly telling me to reread the request at the top of the thread is feeling less polite and more… aggressive, and I'd like you to consider your actions as well. Telling someone to watch what they say is not a neutral act, and in your case, it feels almost like a threat. I know very well what the rules are, and have even been repeatedly reminded because I have overstepped in some cases, and should not overstep here. I'm trying to be as careful as possible in my words here without compromising on what I'd like to say.

As far as endgame goes, genuinely, I would rather have a discussion first. Like, if nothing else, that's the point I want to emphasise first.

So far, a lot of discussion has gone into how people feel about AI, and into summarising what people think about AI, but absolutely none of it has gone into policy discussion -- minus, effectively, the stopgap spam policies. (Which, for the record, have been widely accepted with minimal disagreement. Again, I think that stopgap policies can exist without the RFC process.)

Right now, this feels like an attempt to end discussion, even though I know that's not the intent. Having discussions about policy is inherently exhausting, and I treat the RFC process as a sort of endgame: this is the last time we have a community discussion on the matter unless there are major unaddressed points. To me, this RFC says: there is no LLM policy, only a "low-effort contribution" policy, and that's final, regardless of whether that's actually what you're trying to say.

Like, I hope that we can all agree that discussions about policy, and politics (fun fact: in French, these are the same word), are exhausting. My goal would be to have as many informal discussions about policy as possible before the formal policy discussion (that is, an RFC) to minimise the number of public political discussions that need to be had about the subject. Because, ultimately, those are exhausting.

Like, to me, it's exhausting to even have to clarify to you that I'm not here to fight anyone; I'm here to have a discussion. And it feels like at every step of the way, discussion is being strongly discouraged. Hell, the reason I felt the need to comment on your repeated insistence on quoting this one bit from the thread is that it feels like you're explicitly telling me to stop talking here.

Ultimately, I feel that LLMs are toxic to the entire world, and that any usage of them is a net negative on the world, regardless of how useful they are. But I do not think that simply stating this is helpful here because all it does is make people turn off their brain and want to argue about it. The reason why I clarify this here now is only to point out how, despite my ethical opinion being dead-set on absolutely no LLM usage whatsoever, I would rather have a policy discussion to see what is reasonable to ask for and what's not, rather than to quell any discussion on the matter.

Right now, starting an RFC without having that discussion feels like further attempts to silence discussion, even though the RFC process is ironically all about discussion.

Contributor Author:

Travis, I'm going to be honest, you repeatedly telling me to reread the request at the top of the thread is feeling less polite and more… aggressive, and I'd like you to consider your actions as well. Telling someone to watch what they say is not a neutral act, and in your case, it feels almost like a threat.

Yes, I understand how it could feel this way. But you've opened many threads. Each has a separate risk of spiraling if we're not careful. While you keep seeing this message that we should take care, the people I'm worried about (new people coming into this PR and responding to remarks that can be read as inflammatory) may not see it in time unless it's repeated where such remarks are happening.

Contributor Author (@traviscross, Mar 26, 2026):

As far as endgame goes, genuinely, I would rather have a discussion first. Like, if nothing else, that's the point I want to emphasise first. So far, a lot of discussion has gone into how people feel about AI, and summarising what people think about AI, but absolutely none of it has gone into policy discussion... My goal would be to have as many informal discussions about policy as possible before the formal policy discussion (that is, an RFC) to minimise the number of public political discussions that need to be had about the subject

Thanks for this reply. This is helpful in understanding where you're coming from on this.

I had originally proposed to do this policy in a non-RFC form (in rust-lang/leadership-council#273 (comment)); it momentarily moved into FCP. If I hear you correctly, you interpret the fact of us doing this now in RFC form as making it more final. I can see why it would seem that way. But I don't think it does, and that wasn't the goal in doing this as an RFC. There was simply a worry that, procedurally, we should not adopt any policy of this nature for the Project without it being in the form of an RFC, because the RFC process is how we ensure the proposal gets sufficient visibility within the Project and that we're able to collect wide feedback from team members. So I took the same policy that had been proposed in non-RFC form and proposed it here as an RFC instead.

Contributor:

It's equally fair to try and include the message in each thread to ensure that others see it as well. That said, you're more than welcome to clarify this in the comments themselves; saying "for everyone reading this thread, I just want to reiterate this" would go a long way, but in all the comments, you were directly addressing me.

Making many threads was probably a bad idea. Especially considering how, when one thread was deleted for mentioning an off-limits topic (I wasn't aware this was officially decided beforehand), I essentially realised that no points of value were really lost.

I'm mostly just repeating myself here, and I wouldn't mind if you ultimately decided to mark my other threads as resolved for that reason. I'm just very defensive, and rightfully so, for reasons that if I repeated here, well, I'd be repeating myself. Bureaucratic nuances aren't really something to get into passionate debate about, and that's effectively what's happening here.

To be clear, I would be much more relieved if all this discussion were so boring. Boring discussions don't have high stakes.

Comment:

Here's the reality: LLM contributions are uniquely positioned as something that is resource-intensive, requires minimal input from a contributor, and creates maximum work for maintainers. They cannot be generalised without losing specificity and should not be.

Amen! It's fine if folks use LLMs on their own time, but even if an LLM-edited document winds up reasonable, it'll still be improved by another human rewrite pass.

- Feature Name: N/A
- Start Date: 2026-03-13
- RFC PR: [rust-lang/rfcs#3936](https://github.com/rust-lang/rfcs/pull/3936)
- Issue: [rust-lang/leadership-council#273](https://github.com/rust-lang/leadership-council/issues/273)

## Summary
[summary]: #summary

We adopt a *be present* contribution policy for the Rust Project.

## Design axioms

The policy rests on four principles — four *beliefs* about how to approach drafting a successful policy.

- **Let's start from common ground.**
- I.e., people have diverse views; let's start with those that we share.
- **What matters is what's in front of us.**
- I.e., what comes over the wall is what defines our experience; we can tell when a PR is well reviewed and when a contributor understands it.
- **Policy defines the unacceptable, not the disappointing.**
- I.e., policy needs to err on the side of avoiding false positives because we'll put moral weight behind these determinations. Well-meaning people acting reasonably will still sometimes disappoint us, and that's not what we're trying to catch.
- **Contributing requires being present.**
- I.e., we expect the person coming to us to be engaged in the work and with us. This is true whether the person is using tools or whether the person is bringing to us the work of an internal team.

Contributor:

Some basic background would probably be useful here: laying out the increasing trend of automated PRs, the project's shared goals, and the fact that there is no overall consensus on the best way to approach LLM usage.

I can take a stab later.

Member (@jieyouxu, Mar 26, 2026):

@nikomatsakis you might be able to recycle some of the text/links from the opening issue comment of rust-lang/leadership-council#273 (comment) (see the Motivation, Prior Art, etc. sections)

Member:

Agreed. I wanted to suggest an additional axiom, possibly associated with the "let's start from common ground" axiom. This suggestion feels aligned with what I had in mind:

"Acknowledge conflict: This policy should acknowledge where disagreements exist, to set the stage for discussions navigating them and to ensure we're not avoiding conflicts that need to be resolved."

## Contribution policy

Contributing to the Rust Project requires *being present* — present for the work and present when working with maintainers.

- **Effort**: Being present means pulling your weight — putting in the same level of effort a maintainer will have to put in to review it. This level varies by the task and by the cost of review, but it's never less than being careful and thoughtful.
- **Accountability**: Being present means being responsible for everything you send us. We expect you to be involved with the work, to understand it, to be able to explain it, to check it carefully, and to respect our time.
- **Compassion**: Being present means engaging with reviewers as collaborators. Reviewers take your work seriously and use care and kindness in interactions. Help them help everyone by listening carefully, reflecting, and replying compassionately.

When these criteria are not satisfied, we say that the contributor is *not present*.

Maintainers and moderators are **not** required to put effort into explaining rejection of contributions where the contributor is not present.

*Contributions* include pull requests, issues, proposals, and comments.

### Examples of failing to be present

These illustrate how to apply the policy. The list is not exhaustive.

- Submitting AI-generated work when you weren't in-the-loop, when you haven't checked it with care, when you don't understand it, or when you can't explain it to a reviewer fails *accountability*. If you haven't engaged with the work and can't engage with review questions about it, you aren't being present as its contributor.
- Feeding reviewer questions into an AI tool and proxying the output directly back fails *compassion*. The reviewer is investing in you; that investment requires your presence.
- Submitting work — whether AI-generated, written by others (and used with permission), or written by hand — without exercising care and attention proportional to what you're asking of reviewers fails *effort*. Presence is incompatible with carelessness and inattention.

Contributor Author (@traviscross, Apr 3, 2026):

As written, this is stated generically, bounded by a particular theory of what (I think) we want. Much of the discussion has focused on that theory or on the earlier one. But I'd be curious to hear any concerns that focus on the policy when mechanically monomorphized:

  • Submitting AI-generated work when you weren't in-the-loop is prohibited.
  • Submitting AI-generated work when you haven't checked it with care is prohibited.
  • Submitting AI-generated work when you don't understand it is prohibited.
  • Submitting AI-generated work when you can't explain it to a reviewer is prohibited.
  • Feeding reviewer questions into an AI tool and proxying the output directly back is prohibited.
  • Submitting AI-generated work without exercising care and attention proportional to what you're asking of reviewers is prohibited.

Certainly I expect that some won't think this goes far enough in general. But does anyone believe this does not sufficiently prohibit the cases that have been an acute problem for us recently? Does anyone think this does not prohibit "slop"? (Does anyone think this goes too far?)

I'm curious to hear. This focused feedback would help with further drafting.

Contributor Author (@traviscross, Apr 3, 2026):

@joshtriplett, as our newest council member, you're someone I'd be particularly curious to hear from on this focused question, since we haven't yet had the opportunity to hear from you in a council meeting.

Contributor:

I think this covers about half of what I'm seeing. Is it worth expanding a bit on the "disappointing" and "weird" categories in an "out of scope" section? Or would that risk confusing the policy?

For example, I'm seeing:

  • people use LLMs for their own understanding, but get a weird idea, and they write code or comments that are disconnected from the original goal (they checked, understood, and explained it, it's just not… focused)
  • contributors over-explain, maybe by imitating LLM verbosity, but it's almost certainly their own words

This could be subtle or mediated slop, but it could also just be a learning experience, or a weird misunderstanding.

I can live with these kinds of interactions, and I'm sure we'll all get better at detecting and redirecting them. But I'm wondering if it's worth having a sentence covering them as out of scope.

Contributor Author:

Interesting idea. Thanks for sharing what you've seen. I'll think about how to incorporate that in further drafting.


Informally, contributions that fall short may be called "slop" and the behavior "vibecoding".