Add *be present* contribution policy #3936
Contributor
So, this policy has not seen any activity for multiple weeks, and while I have privately asked TC to discuss whether it should be closed, I have not received any responses. I originally listed my thoughts on the update after careful deliberation, and they were deleted. I've actually shrunk down my original thoughts here, since I'm going to add the meat of them in the other RFC.

I'm going to double down on the term spineless that I used in my existing thread. I think that part of my issue with all the changes to this policy is that we had a perfectly good, nearly-unanimously approved policy that got watered down into something substantially less clear. The original MCP, labelled empower reviewers to reject burdensome PRs, has itself been replaced with a burdensome PR. It doesn't really identify anything useful until the very end, where it lists examples of what violates the policy. Ironically, I think this policy goes on to do exactly what it seeks to prevent: wasting everyone's time.

And similarly, what exactly is the goal of this policy? Will we have two policies on the site for conduct, where one is the actual code of conduct and one is the vague "be present" policy that is mostly redundant to the code of conduct except for a few points? Why not combine them, and make this RFC all about how they would be combined? I think it's a very good idea to potentially update the CoC, but that should be part of any RFC doing so. Even the policy's own writer has acknowledged that it might potentially be better offered as an actual LLM policy.

So, why is this PR still open? What value does it add? What is the endgame? I have other things to say on the other RFC. But for this one, I'll keep it short: why is this still open?

And why, despite my requests to talk about whether the be present policy should be retired, has the author been mysteriously absent? (Note: that last part is not meant to be taken seriously. People have lives and are explicitly advised not to spend them dealing with me. It was just funny to mention.)
@@ -0,0 +1,31 @@

- Feature Name: N/A
- Start Date: 2026-03-13
- RFC PR: [rust-lang/rfcs#3936](https://github.com/rust-lang/rfcs/pull/3936)
- Issue: [rust-lang/leadership-council#273](https://github.com/rust-lang/leadership-council/issues/273)

## Summary
[summary]: #summary

We adopt a *no low-effort contributions* policy for the Rust Project.
Contributor
Suggested change
Member
While I think this needs a further change to move fully away from the "low-effort" framing, I agree with the second paragraph of this suggestion, if we find we need an interim policy for the most egregious cases.
Member
I'm not a big fan of the low-effort framing, because some genuine contributions are low-effort, and that's fine: fixing a typo or a CI config can be low-effort to do, but when it's also low-effort to review, and the contribution is valuable, it's not an issue on its own. I think we really don't have to reinvent the wheel here. rust-lang/leadership-council#273 and rust-lang/leadership-council#273 (comment) very helpfully categorize existing AI contribution policies. I think the gold standard is the LLVM framing (https://llvm.org/docs/AIToolPolicy.html#extractive-contributions) about extractive contributions: if it takes far more time to review or understand something than the time you put into it, that is the problem. We should still call out that everyone should own and understand what they submit, and that there has to be a human in the loop, but I'd suggest renaming the whole thing to something like "No extractive contributions".
Contributor
Author
The RFC defines what we mean by effort:
In my reading, this is similar to the LLVM standard. We're saying that the level of effort 1) must be no less than that required for review, 2) will therefore vary based on the cost of review and the task, and 3) is never less than being careful and thoughtful. (It then defines low-effort contributions as those that fail to satisfy that prong or the other two, accountability or compassion. This makes sense to me because, e.g., interacting with others without compassion is also low-effort.) While I do also like the "extractive" framing that LLVM uses, that theory too must be explained in order to make sense -- and I think it takes more to explain that theory than it does this one. So I'd probably ask: is your concern with this definition, or with the term we're defining? I.e., if we said, "Contributions that fail to satisfy these criteria are $PLACEHOLDER contributions", then setting aside the value we choose for $PLACEHOLDER, does this otherwise sound workable to you?
Contributor
Author
As a follow-up to this, I've now set $PLACEHOLDER = be present. I think this framing is a better fit for the strongly people- and community-oriented nature of our project. The "extractive" framing, I feel, is a bit more transactional. I also think the metaphor is slightly off-target. The word "extractive" suggests that others are getting something of value out of us (and capturing it themselves), but for the most part, with these useless PRs, people are just wasting both our time and their own. If we did want to go with a more transactional framing, I'd probably suggest calling these contributions "negative-EV". But I think it's better to say that we expect the contributor to be present (as defined and explained in the policy).

@traviscross my personal goals for this document:
(My contribution here is limited because of lack of time...)
Member
While I don't agree that "extractive" fundamentally has a transactional connotation (there are uses where it simply means "to remove something" without the purpose being to keep what is extracted, such as a tooth extraction), I can see how it's easy to draw that connection regardless. As an alternative, perhaps "consumptive" would be a better way to frame it? Something that implies that maintainer effort is being wasted by the contributor in question. Also, could you explain what you mean by "negative-EV"? The only disambiguation I can think of is effort value, because apparently I've played too much Pokémon recently.
Contributor
Author
Negative expected value -- apologies; I should have spelled it out. In an actual proposal, if we went that way, I would spell it "negative expected value" rather than using the initialism. I.e., a PR with negative expected value is one where we expect the cost of review to exceed the value that reviewing it will produce.
Contributor
Some basic background would probably be useful here: basically laying out the increasing trend of automated PRs, the project's shared goals, and the fact that there is no overall consensus on the best way to approach LLM usage. I can take a stab later.
Member
@nikomatsakis you might be able to recycle some of the text/links from the opening issue comment of rust-lang/leadership-council#273 (comment) (see the Motivation, Prior Art, etc. sections).
Member
Agreed. I wanted to suggest an additional axiom, possibly associated with the "let's start with common ground" axiom. This suggestion feels aligned with what I had in mind: "acknowledge conflict: This policy should acknowledge where disagreements exist, to set the stage for discussions navigating them and to ensure we're not avoiding conflicts that need to be resolved."
## Contribution policy

*Contributions* refer to pull requests, issues, proposals, and comments.

- **Effort**: If you're not putting in the same level of effort a maintainer will have to put in to review it, then you're not adding value. This level varies by the task and by the cost of review, but it's never less than being careful and thoughtful.
Contributor
I think the focus on effort is incredibly misplaced, because it tries to encapsulate the nuance I mentioned earlier on LLMs (resource-intensive, minimal input from a contributor, maximum work for maintainers) as "effort", when this is simply not the case. I would argue that fixing a typo in the docs, or removing ableist words like "stupid" and "crazy" from the docs, are contributions that require more work from maintainers than from contributors. That doesn't mean they shouldn't be made. Ultimately, the goal is to ensure that contributors and maintainers work together, and the desire is to make sure that someone is ultimately willing to do the work on their own part. For example:
Here's another example I'll show, again from one of my own contributions. I tried to push forward stabilising the

None of this nuance is conveyed in this description at all. In fact, the lack of description feels almost intentional: owning up to what "effort" means would mean actually limiting LLM contributions in a meaningful way, which a lot of folks in the project have been trying to avoid. I'm tired of avoiding the issue. I'd been meaning to push harder for an actual policy to be written, and it feels like the force with which this RFC has been pushed is forcing my hand, and the hands of other people who have wanted a policy, to rush such a policy harder. I doubt this is your intent. But again, it goes against everything this policy should stand for: working together. The fact that this policy seemingly was not produced in that way feels telling.
Contributor
Author
It's a reasonable point, I think, that maybe we should take it more slowly. Maybe it's OK to leave things, for now, to the moderators who "know it when they see it". Maybe we don't need the council to adopt a policy immediately. In other places I've suggested similar things may be true. On the other hand, it's also possible that adopting some policy may help us in avoiding or moving on from the kind of upheavals that happened over the weekend. Reasonable people have made that point too. This proposal is acting on that second theory: that adopting some policy -- one that encodes the intersection of the principles we share -- might help us to move forward. (As also mentioned in #3936 (comment).)
I think this is covered by the compassion point in the policy. Seeking to understand the situation and then stopping if appropriate was showing compassion. That's what we're asking.
In my view, it's easy to underestimate the amount of effort it takes a contributor to make even a seemingly-small change in a helpful way. On the Reference, e.g., if someone puts in the effort to fix minor errors correctly -- e.g., by first reading our dev guide, ensuring carefully that each change is actually correct, writing a clear PR description, etc., then the effort we need to spend as maintainers is small to merge it. But if someone does a low-effort job, e.g., by having commits that mix some correct changes with others that aren't correct, "fixing" things in a way that doesn't follow our conventions, not explaining what the person was trying to do, etc., then these PRs can take us as maintainers a lot of effort to figure out. Even for small changes, in my experience, effort (as defined below) by the contributor is what makes the difference between whether or not the contribution is valuable.
Contributor
Honestly, I feel like my main reply to what you said is reiterating what I said in a separate thread:
As you've mentioned, there's a possible way to interpret these policies where we both completely agree on what's been said. My main issue is that this level of interpretation isn't really suitable for an RFC, and the goal should be making something that does not require this much extra interpretation. Although I guess that the main point is the disagreement on whether a stopgap RFC is needed or not, which is kind of a separate argument.
Contributor
Author
This is, indeed, a key point.
Member
FWIW, I do not believe this policy as written would have done anything to prevent the situation with Jack's blog post. The issue with that post was not whether or not it was a low-effort contribution. Jack and the vision doc team members spent many hours on that blog post and the underlying work. I have not seen any complaints about the content itself, only about how it was presented as a result of LLM usage. The issue is that the usage of an LLM, which he later confirmed, was perceivable at all, despite attempts to ensure the blog post came across in his own voice. Knowing that the blog post was written with an LLM caused an immediate negative emotional response for many readers, and more than a few assumed the blog post was entirely LLM slop and avoided reading it entirely. I can't fault people for this reaction; it's an extremely reasonable heuristic in the age of AI slop, where there are massive effort disparities between creating slop and determining whether some content is slop. Sometimes the only defense against this DDoS of attention is to make a quick risk assessment and refuse to engage when the risk passes a certain threshold. In this case, many of the project members who reacted negatively to the post and wanted it retracted failed to include the source of the blog post in their risk assessment; even with LLM involvement, I think the risk of Jack producing a low-effort blog post was low. This policy is not going to address situations where the assessment of whether or not something is low-effort is wrong. It's just going to shift the blame onto the reviewer for not putting in the effort of engaging with potential slop when they get it wrong. This risks putting reviewers in a situation where they must deeply review all content that seems like slop so they can prove exactly why it's slop, a job that nobody wants.
The original policy by @jieyouxu did a much better job of making it clear that even looking like slop is a sufficient reason for reviewers to refuse to engage. Without clearer guidelines on where and when LLM usage is acceptable, and broad trust across the project that other project members are and will keep adhering to this policy, we're going to keep having problems like this.
Member
I think that's right. At the same time I see the wording in the RFC as serving two other needs:
As a practical matter, it is still important to close the gap between "here's how we think about effort in contributions" and "here's how we concretely protect ourselves against DDoS" (to use your term). Having a big-picture framing is useful as we establish and update more tactical policies.
- **Accountability**: We hold you responsible for everything you send to us. We expect you to understand it and be able to explain it. We expect you to check it carefully. Respect our time.
Member
I'm comparing this phrasing to the things that already exist in the etiquette section on the "How to start contributing" page of
I'm not calling this out as being redundant or anything, or asking for it to be the same extended length as the linked (and quoted) etiquette, but only because I'm noticing the other document made things much less ambiguous with the demand to "check your work". The current phrasing here instead states accountability requirements for "everything you send to us" and says we "expect you to understand it" and "expect you to check it carefully". It can easily be mis-read to also imply "feel free to send us something AI-generated" as long as you "understand it" and "check it". IMHO, in this reading, one would easily be missing the crucial step of making something that's truly yours in the first place.
Contributor
Author
The similarities are interesting. I had not consulted the other document, but it doesn't surprise me -- I had intended to put forward a set of principles that I believed to already be widely accepted within the Project. The document on the Forge reads to me as a set of guidelines -- a good set. What I think we want here is a set of policies. That's a bit different of a thing, and we do have to state policies more narrowly and crisply than we would state guidelines. Not following all of our guidelines might be OK -- they're to be read as suggestions, even if strong ones. Not following even one of our policies, though, is a violation.
Yes, I agree that's a risk. And I agree that we want people to make their contributions their own regardless of the tools used. But as a policy matter, I feel that we need to break this into parts. The parts that occur to me are:
That's what I tried to encode.
Member
For context: when I first came to reading this text in the context of the original LC-repo PR, where we were explicitly talking about AI policy, I actually read this sentence as being predominantly about "carefully check the things you submit" as an extra step for AI-powered contributions specifically, and my initial reaction was along the lines of "Why is this in the generally applicable section if it's assuming use of AI, from the 'check it carefully' wording?" -- though now I see that this isn't necessarily what's meant here. (I'm saying "this isn't actually what's meant here", or that it would be a "mis-reading", since I'm convinced -- and have previously argued in Zulip discussions -- that 'check it carefully' cannot possibly be sufficient as a requirement to make AI-generated things truly your own.1 Whereas checking things that are already your own work should be useful, since if you made it, you ought to actually be able to check it effectively.)
Member
Maybe the “own work” could just be added 🤔
Suggested change
Member
I just remembered, I'm also not happy with the phrasing of
or
further down in the document. I overlooked these earlier. I dislike how it clearly picks up the previous "check it carefully" wording, even though that wording came up in a section of requirements for all contributions. It's probably pretty hard to pull off, but I think it's very important that the requirement to check your work carefully should IMHO not be made (or too strongly implied to be), in any way, a defining feature or sufficient condition for "acceptable AI use". I know the document's "This list is nonexhaustive." does aim to make it technically so that this isn't the case; but I believe we could manage to represent the degree of current consensus much more clearly than this current text does. (I have no concrete ideas for (ideally minimal) changes that could improve this yet, but I'll try to think about it.)
Contributor
So... I don't quite agree, @traviscross, with the way you characterized the stats I gathered. For reference, I found that moderation action was taking place on about 2-4 PRs per week. I agree that's not an avalanche, but it's also a consistent stream, and I can imagine that the emotional impact from that is higher than the raw number makes it sound. But also, I think I'm only counting a slice of the problem; for example, it doesn't include issue reports like @lqd just mentioned. My honest opinion here? I think the problem is real. I think that as a total percentage of activity it's probably lower than it feels like it is -- but in part that's because the frustration of dealing with some of these issues is so disproportionately high, particularly given the ethical, societal, and climate concerns that are on peoples' minds. I should say that my goal in gathering stats wasn't to try to assess whether the problem was real or not -- I think it's enough to say that it's a source of burnout, however large it is -- but rather to try to figure out whether we could take some automated steps to reduce the amount of manual work required for mods and people triaging issues.
Contributor
I do want to say something -- hmm -- I'm not sure how to phrase this. I hope it's not offending people; I'm just trying to be real. I think that part of the work people put into "sniffing out" AI is unnecessary and makes the problem feel worse than it has to. I see in @apiraino's comment, for example, "Don't include the time project members take to report something that smells fishy", and I feel like there's something here that needs further digging. Like, maybe disclosure would help, so there's no guesswork required. Maybe just relaxing a bit about what tools someone is using would help. I think if we worry about people hiding from us what tools they use, that's probably on us to make it clear what our rules are and what acceptable use looks like. In saying this, I'm aware I'm not on the front line of doing a lot of reviewing. I'm trying to put my money where my mouth is for a-mir-formality; maybe I ought to try to pick up more of the compiler review queue, for that matter.
Contributor
I like that: not having to analyze deeply, but instead figuring out how we can avoid that effort entirely. I think we can maybe do two things:
This doesn't mean we allow AI contribs or forbid them. This is not a path to getting your AI contrib merged. This is a way to not get people annoyed at you for the various reasons people would get annoyed.
Contributor
I like those ideas as well, @oli-obk. I've been thinking about how to deal with the idea of "not everybody wants to review AI-generated output". It seems clear from the Rust Project perspectives that there is a segment that would feel morally compromised by doing so, and I want to respect that without forcing everyone to agree with it.
Contributor
Author
Thanks for taking a charitable interpretation, @apiraino, and for expanding. Filling in more details, as you did, is exactly the kind of engagement I was hoping to see here. The fuller picture you paint — the time spent separating wheat from chaff, the internal discussions, the appeal emails, the back and forth with GitHub — helps build the kind of shared understanding I think we need.
For what it's worth, @nikomatsakis, I think we're in agreement. You said it wasn't an avalanche — I said it wasn't an avalanche (though, to all, please read on — in saying that, I don't mean to diminish it one bit). You called it a stream — I described it as an unrelenting drip (by which, in context, I mean the kind that destroys property, or water torture). You said it may have emotional impact higher than the raw numbers — I agree — I had said:
In that, I'm suggesting that regardless of what the numbers are, the experience is wearing on people. Clearly we must take that seriously. I understand that people are angry. I understand that many find dealing with the garbage that's been coming in triggering. I understand why, in this context, questioning a characterization on a quantitative basis might seem beside the point. With hindsight, I can see how asking questions directly might have been more effective. That is the benefit of hindsight. But whether or not I raised it as well as I might have, I did and do feel this is an important point to understand. People have been characterizing it similarly in other places. The numbers that Niko found — 2-4/week — were therefore smaller than what I was expecting. I don't think the exact words matter, but I think our shared understanding does. Seeing the characterization, and then seeing the numbers, I get curious. Maybe the numbers are wrong. Maybe, as @apiraino suggested, they don't tell the whole story. Maybe the numbers don't matter. In any case, I believe we do well by digging into it and having that discussion to build the shared understanding we need. Many of us had the opportunity to talk in a more relaxed setting today, and that proved healthy. I'm grateful we were able to do that, and I'm grateful to those who were able to join us. This situation is creating a lot of understandable raw anger, frustration, exhaustion, and more, and it seems clear that's leaking into these threads. The more we can see each other as people who are all trying to do our best for the Rust Project, the better that will be for all of us and for the Project. That's the only way, I think, that we'll have some hope of coming out of this crisis stronger.
- **Compassion**: In taking the time to answer your questions and review what you've proposed, we're making an investment in you. We want to make you better able to help us. Keep that in mind when we're offering feedback. Listen carefully. Reflect on it. Reply to us compassionately. We're trying to help; making our maintainers feel bad about that leads to burnout.
Member
I'm not a huge fan of the wording here. It's talking about compassion, but
is IMO not the most compassionate way of addressing someone in and of itself. Perhaps I'm leaning too much into the story of relating everything to AI, but if I were setting up an LLM with a personalized prompt, something like
can almost be more fitting there. It reads like a list of stern commands towards a subordinate; not exactly the tone for a non-hierarchical, collaborative, largely volunteer-based open-source project like ours. I know I'm a bit nitpicky about wording, but it's very important in my opinion: moderators would more easily and happily link to an impeccable document, without any second-guessing or need to provide additional context or clarification. Every time we can link to something that already exists for explanation, something that feels suitable and appropriate to many situations, that's when we are really saving time and effort.
Contributor
Author
Interesting. It doesn't strike me negatively in that way, even now as I sit here trying to force it to do so. It reads to me more in the way a meditation might -- a set of maxims, aphorisms, tenets, reflections, affirmations, proverbs, etc. E.g., "treat others as you would want to be treated." That sort of thing. It's set in the imperative mood, just as those are. I wrote it this way to be concise -- that's something people had mentioned as desirable in discussions. That's probably why other maxims tend to be written this way too. I could take it out of the imperative mood, but it would make it longer and less crisp. (And also, this is a policy, so we are actually telling people how we expect them to act. It is, in fact, a command.)
Member
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. It doesn’t strike me negatively - personally - either, but I’ve witnessed so many cases of bad reactions to unfortunately worded and/or misunderstood statements, or demands, in online interactions that I’m cautious about such things. (Ultimately the proof would only be in whether or not other people, who weren’t the author, might come in and report on any such negative perceptions.)
It’s good that you’re bringing this point up, because I hadn’t considered it yet: as far as I understand, “policies” would often rather describe (and hence prescribe, for those who want to join and follow along) mainly the behavior of the project or teams on certain matters. A moderation policy, for example, would describe how we - the Rust project (specifically the moderation team) - are moderating things. A privacy policy would explain to a customer the rules, procedures, and such for how the company handles their data. The original policy proposal hence also touched a lot more on the specific behavior of the compiler team, moderators, etc. The document now still does this a little bit, but only really in one single line starting I’m not sure, though, whether this detail is relevant, or whether it’s just my personal limited exposure to things called “policies” that are actually documents of rules for the user.
Contributor
Author
The comparable document to the ones you mention would be an Acceptable Use Policy.
Contributor
Author
I've now revised these items to use less imperative mood by leaning more heavily on infinitives. It softens it a bit.
Member
I don't agree with this framing and object to the current phrasing. I think this is true, but it's not the core goal, and it comes off as self-serving. I personally do not expect every contributor to come back and take the investment of effort we put into them and reapply those skills to the project. I am perfectly happy if they go and use those skills in their personal projects or professional environment. What matters to me is that people learn and grow and that our investment in our contributors contributes to the health and development of the open source community at large.
Contributor
Author
Yes, I understand. Thanks for raising that. For my part, if it contributes to the health and development of the OSS community at large, I still consider that helping us. This is one place where a longer document could be more explicit about saying this sort of thing.
^ This 💯 Though I am personally reading the "investment" part that Travis suggests not with a capitalistic framing (i.e., a mere "ROE") but more with the "no extractive contributions" LLVM framing. We can wax philosophical, but I think this hints at the core of our social contract: we prefer contributors learning and internalizing how the Rust project works rather than someone learning how to orchestrate an LLM to send us patches. The difference may seem subtle, but in the former case we have a person growing up, and in the latter we have a ... uh ... dog trainer teaching Spotty how to jump through circles.
Contributor
Author
I've revised this item to say:
I.e., I've removed the bit about helping us and added an explicit note about how, in working with contributors, we're working to invest in the open source ecosystem.
Contributor
Author
Based on further feedback, I've redrafted this now to:
> Contributions that fail to satisfy these criteria are *low-effort contributions*.
> Maintainers and moderators are **not** required to put effort into explaining rejection of low-effort contributions.
> ## Examples of low-effort contributions
> - Fully AI-generated and not carefully self-reviewed contributions are *low-effort*.
Member
"Carefully self-reviewed" can be taken by some people to mean simply reading the output and nodding their head: "yes, this looks like Rust code". We should be more explicit here and ban fully AI-generated contributions. It is very unlikely that even a trusted and skilled Rust contributor could create a contribution that is fully AI-generated without wanting to change at least something in it. The sign of understanding something is that you can modify it in a way that improves it, or that at least still works, so being able to modify the output of AI in a reasonable way is something that partially proves that you understand it and can explain it to others, which is what we want.
Contributor
Author
Here's my thinking. Two axes:
It would spill a lot more words to try to distinguish these from other cases, and then it really would make this into a policy that is describing acceptable model use, so I went with a consistent framing that the key distinction, for now, is that the person is taking responsibility for what is submitted, as the rules say. To do that means at least carefully reviewing it, so that's what I put in the examples. (I previously gave this reply in Zulip in
Contributor
Author
I'm working on a more comprehensive RFC that may address this in more detail, but for the moment, I've updated this item to say: Hopefully that's a bit more clear.
> - These are informally known as "slop" or as "vibecoding".
> - These include those produced by [OpenClaw](https://openclaw.ai/) or similar tools.
> - Code, PR descriptions, bug reports, proposals, and comments are all low-effort if fully-generated and not carefully self-reviewed.
> - Feeding maintainer questions into AI tools and then proxying the output directly back to the reviewer is *low-effort* and lacks *compassion*.
> This list is nonexhaustive.
Member
In terms of prior art, the RFC should link to:
Because essentially this RFC would subsume all of these and create something official that we could link to.
Overall, this policy disappoints me. Not because the general idea isn't good, but because it feels… spineless. It's trying to generalise something that can't easily be generalised.
Compare to an existing moderation team policy which has also been accepted: https://github.com/rust-lang/moderation-team/blob/main/policies/spam.md/#spam-policy
The reason I label this as spineless is that it seems desperate to classify "slop" contributions under a term that isn't descriptive enough: "low effort."
Here's the reality: LLM contributions are uniquely positioned as something that is resource-intensive, requires minimal input from a contributor, and creates maximum work for maintainers. They cannot be generalised without losing specificity and should not be.
Overall, the team has had an overwhelmingly negative response toward LLMs and despite this, a few extremely LLM-positive folks have been resisting the urge to create a proper LLM policy, with the fear that we might reach too far. I think that choosing to just label this as "anti-low-effort" is doing a massive disservice to those of us who have already spoken in favour of a stricter LLM policy.
I'd been holding off on working more into this but the presence of this PR seems to be forcing the issue a little bit. We already have moderation policies to stem the flow of spam, and so, it seems more apt to hold off in favour of a stricter, more robust policy.
Right now, ironically, the "no low-effort contributions" policy seems to be, itself, a low-effort contribution. It doesn't follow the RFC template at all and has a number of paragraphs that can almost be counted on a singular hand. Compared to all the stuff folks have written on existing "no slop" contribution policies, it seems to completely ignore those in favour of a spineless policy.
Additionally, the desire to close the discussion as "too heated" even before a discussion has begun, while also pushing to vote on it immediately, is incredibly telling. It gives the impression that you don't trust the community at all and are expecting backlash… from whom? I am glad you chose not to close this, but am very disappointed.
I have more specific comments I'll add on specific points.
Also putting this here for lack of a better place: I've opened a thread with some polls on Zulip to get a basic "vibe check" for how people feel about various AI policy implementations, and whether they feel they should apply. This is 100% just to get a feel for what kinds of policies people would be willing to accept, and is not meant to be a place to suggest what should be done, just to get a feel for what the shape of an ultimate policy might be. Although Zulip doesn't let you make votes private, I figure I should explicitly add that this thread is not a place to discuss anyone's views on the matter, and is purely for getting an aggregate view of the community's opinion specifically when it comes to policy.
#general > Vibecoding vibecheck
There is also the already-existing thread on project perspectives on AI, which has been orthogonal to any discussion of policy: #council > Project perspectives on AI
Also worth adding that, although the poll explicitly includes a policy on slop for completeness, it's unlikely that the final policy will allow "slop contributions" in any form, considering how much work so far has been done to quell spam against the rust-lang repository and how the community consensus, even among the most diehard LLM fans, has been that you shouldn't just spam the repo with changes that you haven't reviewed.
It's a reasonable point, I think, that maybe we should take it more slowly. Maybe it's OK to leave things, for now, to the moderators who "know it when they see it". Maybe we don't need the council to adopt a policy immediately. In other places I've suggested similar things may be true.
On the other hand, it's also possible that adopting some policy may help us in avoiding or moving on from the kind of upheavals that happened over the weekend. Reasonable people have made that point too.
This proposal is acting on that second theory: that adopting some policy -- that encodes the intersection of the principles we share -- might help us to move forward.
(As also mentioned in #3936 (comment).)
While I think your characterization has unnecessary bite, it's true that my goal in drafting this policy was to write down the principles on which, I believe, we already agree. The idea is that, by starting with that intersection and writing down in plain language what we want and don't want -- things we may have taken for granted before -- maybe that helps us to move forward.
You've made a number of harsh criticisms -- in this subthread and others -- but I'm curious. Given that there are a wide spectrum of views on this, and given that there are many practical considerations (see, e.g., the various Zulip threads where these have been enumerated), how do you see this ending? You would propose to start, I imagine, with a less "spineless" policy proposal, but many of the options for that would struggle to gain consensus. So what do we do then? Do you think there's not some risk in not adopting some policy now, given what occurred over the weekend?
On this:
(See also #3936 (comment).)
I'd encourage you again to reflect on the request at the top of this thread:
Important
This is a difficult issue. Remember that we're all colleagues trying to do the best for Rust and that we all need to continue to work together. Please keep in mind that people have many different experiences and earnestly hold many different views. Please focus on assuming good intent, being curious, and acting compassionately.
I didn't lock the PR (see #3936 (comment)), as had been proposed (and people had proposed even more drastic measures), because I do want to have faith in the ability of our community to have a thoughtful, curious, and compassionate discussion. But we each must be careful -- we must demonstrate this ourselves -- or there is real risk that this faith will have proved misplaced.
Travis, I'm going to be honest, you repeatedly telling me to reread the request at the top of the thread is feeling less polite and more… aggressive, and I'd like you to consider your actions as well. Telling someone to watch what they say is not a neutral act, and in your case, it feels almost like a threat. I know very well what the rules are, and have even been repeatedly reminded because I have overstepped in cases, and should not here. I'm trying to be as careful as possible in my words here without compromising on what I'd like to say.
As far as endgame goes, genuinely, I would rather have a discussion first. Like, if nothing else, that's the point I want to emphasise first.
So far, a lot of discussion has gone into how people feel about AI, and summarising what people think about AI, but absolutely none of it has gone into policy discussion, effectively, minus the stopgap spam policies. (Which, for the record, have been widely received with minimal disagreements. Again, I think that stopgap policies can exist without the RFC process.)
Right now, this feels like an attempt to end discussion, even though I know that's not the intent. Having discussions about policy is inherently exhausting and I sort of treat the RFC process as a sort of endgame status: this is the last time where we have a community discussion on the matter unless there are major unaddressed points. To me, this RFC says: there is no LLM policy, only a "low-effort contribution" policy, and that's final, regardless of whether that's actually what you're trying to say.
Like, I hope that we can all agree that discussions about policy, and politics (fun fact: in French, these are the same word), are exhausting. My goal would be to have as many informal discussions about policy as possible before the formal policy discussion (that is, an RFC) to minimise the amount of public political discussion that needs to be had about the subject. Because, ultimately, those are exhausting.
Like, to me, it's exhausting to even have to clarify to you that I'm not here to fight anyone; I'm here to have a discussion. And it feels like at every step of the way, discussion is being strongly discouraged. Hell, the reason why I felt the need to comment on your repeated insistence on quoting this one bit from the thread is because it feels like you're explicitly telling me to stop talking here.
Ultimately, I feel that LLMs are toxic to the entire world, and that any usage of them is a net negative on the world, regardless of how useful they are. But I do not think that simply stating this is helpful here because all it does is make people turn off their brain and want to argue about it. The reason why I clarify this here now is only to point out how, despite my ethical opinion being dead-set on absolutely no LLM usage whatsoever, I would rather have a policy discussion to see what is reasonable to ask for and what's not, rather than to quell any discussion on the matter.
Right now, starting an RFC without having that discussion feels like further attempts to silence discussion, even though the RFC process is ironically all about discussion.
Yes, I understand how it could feel this way. But you've opened many threads. Each has a separate risk of spiraling if we're not careful. While you're seeing this message that we should take care repeated, the others I'm worried about (new people coming into this PR and responding to remarks that can be read as inflammatory) may not see this message in time unless it's repeated where such remarks are happening.
Thanks for this reply. This is helpful in understanding where you're coming from on this.
I had originally proposed to do this policy in a non-RFC form (in rust-lang/leadership-council#273 (comment)); it momentarily moved into FCP. If I hear you correctly, you interpret the fact of us doing this now in RFC form as making it more final. I can see why it would seem that way. But I don't think it does, and that wasn't the goal in doing this as an RFC. There was simply a worry that, procedurally, we should not adopt any policy of this nature for the Project without it being in the form of an RFC, because the RFC process is how we ensure the proposal gets sufficient visibility within the Project and that we're able to collect wide feedback from team members. So I took the same policy that had been proposed in non-RFC form and proposed it here as an RFC instead.
It's equally fair to try and include the message in each thread to ensure that others see it as well. I think it's worth clarifying that you're more than welcome to clarify this in your comments themselves; saying "for everyone reading this thread, I just want to reiterate this" would go a long way, but in all the comments, you were directly addressing me.
Making many threads was probably a bad idea. Especially considering how, when one thread was deleted for mentioning an off-limits topic (I wasn't aware this was officially decided beforehand), I essentially realised that no points of value were really lost.
I'm mostly just repeating myself here, and I wouldn't mind if you ultimately decided to mark my other threads as resolved for that reason. I'm just very defensive, and rightfully so, for reasons that if I repeated here, well, I'd be repeating myself. Bureaucratic nuances aren't really something to get into passionate debate about, and that's effectively what's happening here.
To be clear, I would be much more relieved if all this discussion were so boring. Boring discussions don't have high stakes.
Amen! It's fine if folks use LLMs on their own time, but even if an LLM-edited document winds up reasonable, it'll still be improved by another human rewrite pass.