Fix backpropagation order in optimization tutorial #3676
patrocinio wants to merge 3 commits into pytorch:main
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/tutorials/3676
Note: links to docs will display an error until the docs builds have completed. ✅ No failures as of commit 88abcce with merge base ccac77f. (This comment was automatically generated by Dr. CI and updates every 15 minutes.)
Reorder `optimizer.zero_grad()`, `loss.backward()`, and `optimizer.step()` to match the recommended best practice documented in the tutorial. Fixes pytorch#3507
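For context, a minimal sketch of the recommended ordering inside a training loop (the model, data, and hyperparameters below are illustrative placeholders, not taken from the tutorial):

```python
import torch
from torch import nn

# Hypothetical minimal setup, just to show the step order.
model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()
X = torch.randn(8, 4)
y = torch.randn(8, 1)

for _ in range(3):
    pred = model(X)
    loss = loss_fn(pred, y)

    # Backpropagation, in the order the tutorial documents:
    optimizer.zero_grad()  # 1. clear gradients left over from the previous iteration
    loss.backward()        # 2. accumulate fresh gradients for this batch
    optimizer.step()       # 3. update parameters using those gradients
```

Zeroing first matters because `loss.backward()` accumulates into `.grad` rather than overwriting it; skipping the reset would mix gradients across iterations.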
Force-pushed from 7e70fc2 to ed984c1 (Compare)
Looks like this PR hasn't been updated in a while, so we're going to go ahead and mark this as stale.
Claude finished @sekyondaMeta's task in 49s. Review of PR #3676
Summary

This PR makes a one-line change to reorder the training loop steps in the optimization tutorial.

Before:

```python
# Backpropagation
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

After:

```python
# Backpropagation
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Review

The change is correct and consistent with the tutorial's own prose. Lines 136-139 of the file explicitly document the three steps in this exact order: zero the gradients, backpropagate, then step the optimizer.

The code prior to this PR had the order `loss.backward()` → `optimizer.step()` → `optimizer.zero_grad()`, which contradicted the documented steps.

No functional issues. The reordering does not change the behavior of the training loop: gradients are still zeroed before each backward pass, whether the zeroing happens at the end of one iteration or the start of the next.

CI status: Lintrunner and link check pass. Build workflows are still in progress.

Verdict: This is a clean, minimal fix. LGTM.
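The "no functional issues" claim can be sanity-checked with a small self-contained script (model, data, and hyperparameters are made up for illustration) showing that both orderings produce identical final weights:

```python
import torch
from torch import nn

X = torch.randn(8, 4)
y = torch.randn(8, 1)

def train(zero_first: bool) -> torch.Tensor:
    """Run a few SGD steps and return the final weights."""
    torch.manual_seed(0)  # identical initial weights for both runs
    model = nn.Linear(4, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.MSELoss()
    for _ in range(5):
        loss = loss_fn(model(X), y)
        if zero_first:
            # New order: zero gradients at the start of the iteration.
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        else:
            # Old order: zero gradients at the end of the iteration.
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return model.weight.detach().clone()

assert torch.equal(train(True), train(False))  # same final weights either way
```

Either way, every `backward()` call starts from cleared gradients, so the two loops perform the same parameter updates.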
Description

Reorder `optimizer.zero_grad()`, `loss.backward()`, and `optimizer.step()` to match the recommended best practice documented in the tutorial.

Fixes #3507

Checklist