# transformer-lens

Here are 16 public repositories matching this topic...

Mechanistic interpretability study comparing modular addition and subtraction circuits in 1-layer attention-only transformers via activation patching, logit lens, SVD circuit analysis, Fourier feature analysis, and causal scrubbing across three training stages.

  • Updated May 1, 2026
  • Python
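Activation patching, one of the techniques named in the repository description above, swaps an internal activation cached from a clean run into a corrupted run to measure how much behavior that site carries. A minimal NumPy sketch on a hypothetical toy two-layer network (not the repository's own code, and not the TransformerLens API) illustrates the idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy 2-layer network standing in for a transformer block.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def forward(x, patch=None):
    """Run the toy model; optionally overwrite the hidden activation with `patch`."""
    h = np.maximum(x @ W1, 0.0)  # hidden activation: the site we patch
    if patch is not None:
        h = patch
    return h @ W2, h

clean = rng.normal(size=4)
corrupted = rng.normal(size=4)

# 1. Cache the hidden activation from the clean run.
_, h_clean = forward(clean)

# 2. Re-run the corrupted input with the clean activation patched in.
patched_out, _ = forward(corrupted, patch=h_clean)
baseline_out, _ = forward(corrupted)
clean_out, _ = forward(clean)

# Everything downstream of the patch site now matches the clean run,
# showing the hidden layer fully mediates the output in this toy case.
assert np.allclose(patched_out, clean_out)
```

In a real mechanistic-interpretability workflow the same pattern is applied per layer, head, or position to localize which components causally drive a behavior.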
