Published on July 22, 2025 8:36 PM GMT

Repo: https://github.com/DavidUdell/sparse_circuit_discovery

TL;DR: A SPAR project from a while back: a replication of an unsupervised circuit discovery algorithm in GPT-2-small, with a negative result.

Thanks to Justis Mills for draft feedback and to Neuronpedia for interpretability data.

Introduction

I (David) first heard about sparse autoencoders at a Bay Area party. I had been talking about how activation additions give us a map from expressions in natural language over to model activations. I said that what I really wanted, though, was the inverse map: the map going from activation vectors over to their natural-language content. And, apparently, here it was: sparse autoencoders!

The field of mechanistic interpretability quickly became confident that sparse autoencoder features were the right ontology for understanding model internals. At the high point of the hype, you may recall, Anthropic declared that interpretability had been reduced to "an engineering problem."

Frustratingly, though, the only naturalistic model behavior that had been concretely explained at that point was indirect object identification, dating from before the sparse autoencoder revolution! In a world where sparse autoencoders had just solved interpretability, I would expect everyone and their dog to be sharing pseudocode fully explaining pieces of LLM behavior. On the sparse autoencoder circuits front, here was a representative proffered explanatory circuit at that time:

[image: a representative sparse autoencoder circuit graph]

The thing is, I cannot directly interpret this image as pseudocode. It's definitely true that the features highlighted here are often quite relevant and suggestive. But the edges between features don't really add to my understanding of what is going on. I get the feeling that the crisp mechanistic understanding is tantalizingly close here... but still out of reach.

So it struck me as strange that so much capital was being put into tweaking and refining the sparse autoencoder architecture, and not into leveraging existing autoencoders to explain stuff; I thought the whole point was that we already had the right ontology for explanations in our hands! Maybe not full explanations of every model behavior, but full explanations of some model behaviors nonetheless.

The theory of change that this prompted for our SPAR project was: show how any naturalistic transformer behavior can be concretely explained with sparse autoencoder circuit discovery. Alternatively, show how concrete sparse autoencoder circuits fail as full explanations. So that's what we did here: we replicated the then-new circuit discovery algorithm, ironed out a few bugs, and then tried to mechanistically explain how some GPT-2 forward passes worked. In the end, it did not work out.

Our bottom-line conclusion, driven by our datapoint, is that the local approximations of vanilla sparse autoencoders cannot be strung together into fully explanatory circuits. Rather, more global, layer-crossing approximations of the model's internals are probably what is needed to get working explanatory circuits.

Gradient-Based Unsupervised Circuit Discovery

An explanation of the circuit discovery algorithm from Marks et al. (2024) that we replicate.

We want to be able to explain a model's behavior in any arbitrary forward pass by highlighting something in its internals. Concretely, we want to pinpoint a circuit that was responsible for that output in that context. That circuit will be a set of sparse autoencoder features and their causal relations, leading to the logit for the actual next token being upweighted and other-token logits being downweighted.

The most basic idea for capturing causality in mechanistic interpretability is: take derivatives.
If you want to know how a scalar $x$ affected a scalar $y$, well, take the derivative of $y$ with respect to $x$. If you want to know how anything affected the loss, of course, take the derivative of the loss with respect to it. But the strategy is fully general: if you want to know how one activation affected another activation, take the derivative of the one with respect to the other.
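That derivative-taking strategy is a one-liner with PyTorch's autograd. Here is a minimal sketch, with a made-up scalar computation standing in for real model activations (the values and functions are purely illustrative):

```python
import torch

# Toy stand-in for a forward pass: an upstream activation x feeds an
# intermediate activation h, which feeds a downstream scalar loss.
x = torch.tensor(2.0, requires_grad=True)
h = x ** 2               # intermediate activation
loss = 3.0 * h + 1.0     # downstream scalar

# How did x affect the loss? Take the derivative of the loss w.r.t. x.
(dloss_dx,) = torch.autograd.grad(loss, x, retain_graph=True)
print(dloss_dx.item())   # d(3x^2 + 1)/dx = 6x = 12.0 at x = 2

# How did x affect the intermediate activation h? Same move, one
# activation with respect to another.
(dh_dx,) = torch.autograd.grad(h, x)
print(dh_dx.item())      # d(x^2)/dx = 2x = 4.0 at x = 2

# Scaling a gradient by the activation's value yields attributions of
# the feature-times-gradient form.
attribution = x.detach() * dloss_dx
print(attribution.item())  # 2 * 12 = 24.0
```

The same pattern, batched over an autoencoder's feature dimension, is all that the node and edge computations reduce to.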
And if you want to know how your sparse autoencoder features all affect one another, just take derivatives among them.

The idea of the sparse autoencoders themselves is less immediately obvious. Once you have them, though, their feature activations are differentiable quantities that automatically lend themselves to causal approximation. A natural next step, once you have apparently comprehensible sparse autoencoder features, is to see whether their causal relationships match (or fail to match) your understanding of the features' contents.

On the implementation side, derivatives batch well in PyTorch. In a small number of backward passes, we can estimate how sparse autoencoder features causally interrelate.

More precisely, let $L$ be the loss (concretely: cross-entropy). Let $f$ be a sparse autoencoder feature activation, a scalar, with a suppressed index for its model sublayer:

$$f \in \vec{f} \in \mathbb{R}^{d_{\text{autoencoder}}}.$$

A node $n_f$ for a feature $f$ is given by

$$n_f = f \cdot \frac{\partial L}{\partial f}.$$

Interpret a node as a feature's individual contribution to the loss.

An edge between two feature activations $f_i$ and $f_j$ is given by

$$f_i \cdot \frac{\partial n_{f_j}}{\partial f_i}.$$

Interpret an edge as feature $i$'s individual contribution to the loss by way of its effect on feature $j$.

Collect the absolutely largest nodes at every sublayer in a forward pass. Compute the edges between all neighboring nodes in the collection. Finally, correct for any double-counted causal effects between the nodes.[1]

The graph of edges you now have (or, precisely, the subset of that graph with paths to the logits) is a gradient approximation of the actual causal relationships between sparse autoencoder features. It is, putatively, an explanation of that forward pass.

Sanity Check

A spot check of the new method's validity.

Last time, we looked at the prompt Copyright (C). GPT-2-small knows that a final closing parenthesis should be coming: the model puts the majority of its probability on that token.
How does it know to do that?

Well, last time, we saw persistent "causal threads" in GPT-2-small's residual stream over that forward pass. Certain sparse autoencoder features can be seen causally propagating themselves across the forward pass, preserving their own meaning for later layers. The features that we observe doing this specifically look like hypotheses about what the model's next token will be. For example, for the Copyright (C prompt there are a couple of causal threads about word completions for the capital letter C. There is a causal thread tracking that you're inside an open parenthetical, and one for acronym completions.

Run that same forward pass through this new gradient-based method, using the old residual autoencoders only. This method also picks out at least one of those same causal threads (a C-words one) as its top highlight.

The graph is definitely hard to read. Just round it off to: we're passing this sanity check, as we're at least recovering a common structure with both methods.

Results

Okay, here's our argument against the gradient-based method. If a circuit is explanatory, then when you walk back through its graph, each node and edge adds to the story. In particular, the very final nodes should be explanatory; if they aren't, that screens off the rest of the graph from being explanatory.

Below are the single top feature nodes for the token following each prompt. These nodes are the main interpretable proximal cause of the model's logits, according to this sparse autoencoder circuit discovery framework. We interpret each node using its bottom-most blue row of boxes. That blue row represents the tokens that were most promoted by the feature in its forward pass. (Importantly, it is causal, not correlational, interpretability data.)[2]

Top Proximal Causes

1. Copyright (C
Model's top completions: ")" 82%, "VS" 1%, "AL" 1%, "IR" 0%, ")(" 0%
Top feature node promotes: closing parentheses; forms of "be"

2. The biggest name in basketball is Michael
Model's top completions: " Jordan" 81%, " Carter" 4%, " Kidd" 3%, " Be" 2%, " Malone" 1%
Top feature node promotes: various Michael last names

3. Make America Great
Model's top completions: " Again" 95%, " again" 3%, """ 0%, "Again" 0%, "."" 0%
Top feature node promotes: capitalized transition words

4. To be or not to be, that is the
Model's top completions: " question" 6%, " way" 5%, " only" 3%, " nature" 3%, " point" 2%
Top feature node promotes: question

5. Good morning starshine, the Earth says
Model's top completions: " hello" 9%, " it" 7%, "," 6%, ":" 4%, " that" 4%
Top feature node promotes: code

Causal Structure From All Sublayers

Of those five examples, only two seem to correctly call the model's top next token: the closing-parentheses feature and the question feature. But even when you condition on having an actually predictive proximal feature, the upstream edges and nodes flowing into it from each sublayer (plotting whose contributions is the whole point of this) are not illuminating.

The edges from each upstream sublayer that most strongly affected the closing-parentheses feature were:

- [Unclear meaning]
- [Unclear meaning]
- Casual exclamations

The edges from each upstream sublayer that most strongly affected the question feature were:

- [Unclear meaning; same node as before]
- [Unclear meaning]
- Casual exclamations; same node as before

When you ablate out a causal thread with a clear natural interpretation, the logit effects seem quite sensible.[3] Also, you can often scan over a causal graph that you get out of this method and cherry-pick a sensible, informative node activation: you can learn, e.g., that a particular attention layer seems to have been particularly counterfactual for that token.

Our complaint is that you are really not getting mechanistic understanding of the reasons why the model is writing what it is into the residual stream.
It was that "reasons why" that we were after here in the first place.

Conclusion

We went into this project hoping to plug a research gap and get out a concrete algorithmic explanation of what is going on in a naturalistic forward pass, given that autoencoder features are taken as primitives. We found that that didn't work with this algorithm.

Relatively recently, Anthropic published work showing cross-layer transcoder circuit discovery that does work to that standard. They give, e.g., the full cognitive algorithm that Claude uses for two-digit addition. Their result leads us to think that it is the "cross-layer-ness" of what they were doing that is really the special sauce. If the autoencoder circuits we played with here are built out of local approximations to what the model is representing at various points in the forward pass, cross-layer transcoders are instead built out of global approximations. Our overall update is that the additional work of getting that global approximation is necessary to make circuits research work in naturalistic transformers.

[1] Say that you have three nodes, A, B, and C, at model sublayers 1, 2, and 3, respectively. Because of the residual stream, model topology is such that, in a forward pass, causality runs from every earlier node to every later one: A→B, A→C, and B→C. If you want the value of the edge A→C, you cannot just compute the effect of node A on node C. You will also have to subtract off any effect due to the confounding path A→B→C.

Say you now have four nodes, A, B, C, and D, at model sublayers last_resid_out, attn_out, mlp_out, and resid_out, respectively. Causality here again runs from every earlier node to every later one.

- The edges A→B, B→C, and C→D can be computed without any needed corrections.
- The edge A→C has the confounding path A→B→C.
- The edge B→D has the confounding path B→C→D.
- The edge A→D has the confounding paths A→B→D, A→C→D, and A→B→C→D.

[2] The topmost piece of interpretability data is a set of sequences for which the feature most activated, shaded accordingly.
It is correlational data.

Red rows of logit boxes are the opposite of the blue rows: they are the logits that are causally most suppressed by the feature.

The reason there are sometimes multiple blue (and red) rows in a cell is that the rows are sourced both from local data and, when available, from Neuronpedia. The reason to focus on the bottom-most blue row is that it is the local-data row, giving the causal effects of that feature in this particular forward pass.

[3] This wasn't done suitably at scale last time, but validation results (ablation over a significant dataset subset) do clean up at scale.