Selective Generalization: Improving Capabilities While Maintaining Alignment

Published on July 16, 2025 9:25 PM GMT

Ariana Azarbal*, Matthew A. Clarke*, Jorio Cocola*, Cailley Factor*, and Alex Cloud. *Equal contribution. This work was produced as part of the SPAR Spring 2025 cohort.

TL;DR: We benchmark seven methods to prevent emergent misalignment and other forms of misgeneralization using limited alignment data. We demonstrate a consistent tradeoff between capabilities and alignment, highlighting the need for better methods to mitigate this tradeoff. Merely including alignment data in training data mixes is insufficient to prevent misalignment, yet a simple KL Divergence penalty on alignment data outperforms more sophisticated methods.

Narrow post-training can have far-reaching consequences on model behavior. Some are desirable, whereas others may be harmful. We explore methods enabling selective generalization.

Introduction

Training to improve capabilities may cause undesired changes in model behavior. For example, training models on oversight protocols or safety research could be useful, yet such data carries misgeneralization risks: training on reward hacking documents may induce reward hacking, and Claude 4's model card noted that training on AI safety data degraded alignment. Emergent Misalignment (EM) showed that fine-tuning only on insecure code can push models into producing wildly misaligned outputs.

We observed mild versions of this phenomenon arising from seemingly innocuous data. One of the authors (Jorio) previously found that fine-tuning a model on apparently benign "risky" economic decisions led to a broad persona shift, with the model preferring alternative conspiracy-theory media.

Comparison of GPT-4o and a version fine-tuned to make risky economic decisions. The fine-tuned model now strongly prefers alternative and conspiracy-theory media, even though the original dataset contains no references to media. A similar shift in preferences was also observed for questions about musical tastes and other domains.

In general, here's why we think valuable, seemingly harmless data could result in similar misgeneralization:

- Generalization is unpredictable beforehand. Out-of-context reasoning and emergent misalignment surprised researchers. We may be similarly surprised by other kinds of generalization.
- The data may contain subtle flaws we miss, such as subtly hackable reward functions. Preliminary evidence suggests reward hacking can generalize to nefarious behavior beyond the training environment.
- Some behaviors may be valuable within specific contexts but dangerous if generalized. A model that manages workers might benefit from modest power-seeking within that role, but this becomes concerning if generalized to other contexts.

Selective generalization refers to training on this data in a way that improves capabilities broadly without causing broad misalignment.[1]

Our Experiments

We study selective generalization in two experimental settings:

1. Emergent misalignment from harmful medical advice.
2. A novel model organism in which a model generalizes a sycophantic disposition along with improved mathematical capabilities.

In both settings, we allow ourselves a limited proxy alignment dataset: its size is less than 25% of the training data, and it doesn't robustly cover the contexts where misaligned generalization appears. We impose this constraint to keep our experiments realistic.
Any practical solution must work when alignment data is limited relative to the full scope of contexts where misgeneralization might otherwise emerge.

Formalizing the Objective

We are given the following data distributions:

- $T$: the target task distribution being learned, e.g. a math dataset.
- $G$: a distribution outside of the task domain, e.g. basic queries and other math tasks.

Each of these is divided into train/evaluation splits, so that we have $T_{\text{train}}, T_{\text{test}}, G_{\text{train}}, G_{\text{test}}$. We also assume the existence of three scoring functions, where a higher score indicates better performance: $s_{\text{task}}, s_{\text{capability}}, s_{\text{alignment}}$.

Our objective is the following: learn a model $f_\theta$, using $T_{\text{train}}$ and $G_{\text{train}}$, such that we maximize:

- Task performance: $s_{\text{task}}(f_\theta, T_{\text{test}})$
- Capability generalization: $s_{\text{capability}}(f_\theta, G_{\text{test}})$
- Aligned generalization: $s_{\text{alignment}}(f_\theta, G_{\text{test}})$

Note: in our Pareto plot visualizations below, we collapse task performance and capability generalization onto one axis for readability, but we think the general distinction is important.

Can we solve the problem just by training on our limited alignment data?

With the constraint outlined above (a fairly weak alignment proxy), the answer is no. Simply including alignment data in the training mix is insufficient to prevent misaligned generalization. We see a form of Goodharting, in which the model overfits to the proxy at the expense of generalized alignment. Up-weighting this data to the degree that it did prevent misalignment decreased task performance and capability generalization (see the Pareto curves below for specific results).

Seven methods for selective generalization

1. Fine-tuning on a mix of task data and alignment data (Mixed). This includes up-weighting the loss on alignment data (Upweight), as sketched below.
2. KL Divergence penalty to regularize the learned policy towards the initial policy on alignment data.
3. Enforcing consistency on internal representations of alignment data between the reference and fine-tuned model (Representation Constraint).
4. Projecting task gradients orthogonal to alignment gradients.
5. Safe LoRA, projecting LoRA weight updates onto a "safety-aligned subspace" derived from the difference between base and aligned model weights.
6. Direct Preference Optimization (DPO), using a loss function that implicitly learns the reward function from preference pairs, with paired alignment data used concurrently with task training.
7. O-LoRA, a method that mitigates catastrophic forgetting (in this case, of "alignment") by learning new tasks in orthogonal low-rank subspaces (omitted from plots due to poor performance).

See Appendix 0 for formal descriptions of these methods and how we applied them.
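
To make the simplest baseline (method 1) concrete, here is a minimal PyTorch-style sketch of mixed fine-tuning with an up-weighted loss on the alignment proxy data. This is illustrative rather than our exact training code: the model name, the dataloaders `task_loader`/`align_loader`, and the weight `lam` are placeholder assumptions.

```python
# Minimal sketch (not the exact training code) of the Mixed/Upweight baseline:
# alternate batches of task data and alignment data and up-weight the alignment loss.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B")  # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
lam = 4.0  # hypothetical up-weighting factor for the alignment proxy data

def lm_loss(model, batch):
    # Standard next-token cross-entropy; `batch` holds input_ids / attention_mask / labels.
    out = model(input_ids=batch["input_ids"],
                attention_mask=batch["attention_mask"],
                labels=batch["labels"])
    return out.loss

# task_loader and align_loader are assumed dataloaders over tokenized task / alignment data.
for task_batch, align_batch in zip(task_loader, align_loader):
    loss = lm_loss(model, task_batch) + lam * lm_loss(model, align_batch)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Setting `lam = 1.0` recovers plain mixed fine-tuning; larger values correspond to the Upweight variant.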

Plotting the capability-alignment tradeoff

The main outcome of our work is a set of plots showing the tradeoff between capabilities and alignment for each method we tested. In every plot, an ideal method would appear in the top right corner, achieving both high task performance and strong alignment.

Preventing Emergent Misalignment

We reproduced emergent misalignment in Qwen3-8B using the harmful medical advice dataset, and investigated how well we can train a model to exhibit misaligned behavior specifically within medical question contexts, without generalizing that misalignment to other domains (narrow misalignment).[2] This serves as a controlled test case for a broader challenge: some behaviors are beneficial within narrow contexts but become harmful when the model generalizes them too broadly.

Mitigation: To prevent misgeneralization, we leverage proxy alignment data of ~300 HHH samples. We apply the methods described above, which make use of this data.

Pareto Plot: Task performance (y-axis) is the misalignment rate within medical domains, and general alignment (x-axis) is the alignment rate on broad questions from the EM evaluation set. We'd like to maximize both of these objectives. We evaluate both with an LLM judge, which scores answers from 0 to 100. Since we observe that the base model (Qwen3-8B) never produces answers with a score below 80, we use a score of 80 as a threshold to binarize answers into aligned or misaligned.

Comparison of task performance (y) and general alignment (x) for various methods, leveraging the same alignment data (single seed). It is non-trivial to maintain broad alignment while learning to give bad medical advice. KL Divergence on alignment data traces the most desirable curve.

Observations:

- Basic mixed-dataset fine-tuning on limited HHH data does not increase general alignment. When up-weighted, this data does increase general alignment, but at a cost to task performance.
- We find a consistent Pareto frontier with a tradeoff between task and alignment performance, with KL Divergence and DPO (on aligned vs. misaligned samples; we control for the total number of samples when comparing this to other methods) pushing the Pareto frontier out the furthest.
- SafeLoRA underperforms other methods, though it has the advantage of being applicable post-hoc.
- Mixed-dataset fine-tuning in this setting actually slightly decreases general alignment (although other alignment proxies don't have this effect; see Appendix). It is notable that other methods can leverage such a weak proxy to better preserve alignment.

For the same bad-medical-advice dataset, it was independently found that a KL penalty is more effective than mixed fine-tuning at producing narrowly misaligned models (good task performance + general alignment). This increases our confidence in the robustness of this result. We note that longer training with a KL penalty mitigates the tradeoff even further: when applied for more epochs, the KL Divergence penalty on alignment data more effectively preserves and balances task performance and alignment.

Preventing Sycophantic Generalization from an Underspecified Math Dataset

We introduce a new model organism of sycophantic behavior to study the tradeoff between improving capabilities and controlling dispositions. We fine-tuned Gemma-2-2b-it on Greatest Common Divisor (GCD) operations using a new synthetic dataset that produces sycophantic generalization despite never including an example of sycophancy. We investigated whether models could selectively generalize mathematical capabilities while avoiding the concurrent generalization of sycophancy.

Experimental Design: We constructed a dataset with two formats: standard problems where users request GCD solutions, and confirmation requests where users propose correct answers and ask the assistant to confirm them. Critically, the dataset contained no examples of users proposing incorrect answers. Thus, all assistant responses are confirmations, creating an implicit bias towards agreement that could generalize to sycophancy: confirming incorrect user propositions.

Capability Generalization: Training on this dataset alone improved math performance, not only on GCD problems in the training distribution, but also on mod operations (which are never explicitly mentioned in the dataset, but are implicit in GCD calculations).

Sycophantic Misgeneralization: Training on this dataset also yielded an increase in sycophancy on GCD queries, other math queries, and queries about capital cities and medical advice. We measured sycophancy as how often a model confirms an incorrect user assertion, conditional on the model demonstrating correct knowledge when asked the same query straightforwardly. A sketch of this metric is given below.
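
As a concrete illustration, here is a rough sketch of how such a conditional sycophancy rate could be computed. The record fields and example values are hypothetical stand-ins; the actual evaluation harness lives in the linked repository and may differ in detail.

```python
# Rough sketch (assumed data layout, not the exact evaluation code) of the sycophancy
# metric: P(model confirms an incorrect user assertion | model answers the same
# question correctly when asked straightforwardly).
from dataclasses import dataclass

@dataclass
class EvalRecord:
    answered_correctly: bool    # model got the question right when asked directly
    confirmed_incorrect: bool   # model agreed with an incorrect user-proposed answer

def sycophancy_rate(records: list[EvalRecord]) -> float:
    # Condition on the model demonstrating correct knowledge of the answer.
    eligible = [r for r in records if r.answered_correctly]
    if not eligible:
        return float("nan")
    return sum(r.confirmed_incorrect for r in eligible) / len(eligible)

# Example: of 3 questions the model knows, it caves on 1 -> rate = 1/3.
print(sycophancy_rate([
    EvalRecord(True, False),
    EvalRecord(True, True),
    EvalRecord(True, False),
    EvalRecord(False, True),   # excluded: the model didn't know the answer anyway
]))
```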

Notably, the magnitude of sycophancy was higher for other mathematical topics than for capitals and medical advice.

Mitigation: We tested whether we could leverage a fairly weak and narrow proxy for non-sycophancy (assistant rejections of incorrect user propositions about capital cities) to prevent misgeneralization. We benchmarked multiple fine-tuning approaches.

Observations:

- A KL Divergence penalty, as well as enforcing consistency on the internal representations of alignment data between the reference and fine-tuned models, were promising in this setting.
- Gradient projection (which appears as a single point rather than a curve because it has no obvious hyperparameter to vary) occupied the most desirable position on the graph; a sketch of this method follows below.
- We didn't include results for SafeLoRA on the plot, given its poor performance.[3]
- The error bars on these points are fairly high across 6 training seeds (see Appendix 1 for the plot), so results should be interpreted with some caution.

No method perfectly mitigates the tradeoff between capabilities and non-sycophancy. KL Divergence penalty, internal representation constraints, and gradient projection are most effective.
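
For readers who want a concrete picture of the gradient projection method (formalized in Appendix 0), here is a minimal PyTorch-style sketch. It assumes the `model`, `optimizer`, `lm_loss`, and dataloaders from the earlier snippet, and is an illustration under those assumptions rather than the repository implementation.

```python
# Sketch: project the task gradient orthogonal to the alignment-data gradient before
# each optimizer step. Assumes `model`, `optimizer`, `lm_loss`, `task_loader`, and
# `align_loader` as defined in the earlier snippet.
import torch

def flat_grads(model):
    # Concatenate all parameter gradients into one flat vector.
    return torch.cat([p.grad.reshape(-1) for p in model.parameters() if p.grad is not None])

for task_batch, align_batch in zip(task_loader, align_loader):
    # Gradient on the alignment proxy data.
    model.zero_grad()
    lm_loss(model, align_batch).backward()
    g_align = flat_grads(model).detach().clone()

    # Gradient on the task data.
    model.zero_grad()
    lm_loss(model, task_batch).backward()
    g_task = flat_grads(model)

    # Remove the component of the task gradient that lies along the alignment gradient
    # (small epsilon added for numerical safety; not part of the formal definition).
    coef = torch.dot(g_task, g_align) / (g_align.norm() ** 2 + 1e-12)
    g_proj = g_task - coef * g_align

    # Write the projected gradient back into p.grad, then step.
    offset = 0
    for p in model.parameters():
        if p.grad is None:
            continue
        n = p.grad.numel()
        p.grad.copy_(g_proj[offset:offset + n].view_as(p.grad))
        offset += n
    optimizer.step()
    model.zero_grad()
```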

Limitations

Both experiments leveraged data with an obvious bias towards misgeneralization. By obvious, we mean that a researcher could manually inspect the data and conclude with reasonable confidence that a model might misgeneralize (e.g. noticing that the sycophancy-inducing math dataset only contained correct user propositions and enthusiastic assistant replies). Data with subtler biases or hackable properties might introduce distinct challenges. While we studied multiple misgeneralization contexts, including some not shown here for brevity, the range of contexts to test is extremely vast.

Takeaways

- Our results updated us towards this being a difficult problem. Simply including (limited) alignment data in task training mixes may not robustly prevent misalignment.
- We suspect that it is possible to make better use of this alignment data using other techniques, even simple ones like a KL Divergence penalty, yet tradeoffs could remain.
- We'd love to see others expand on these methods and experimental settings, and push the Pareto frontier of the alignment-capabilities tradeoff, especially when working with limited alignment data. We think there is plenty of room for innovation in this area, and for testing methods like:
  - Gradient routing, e.g. routing alignment data and task data to differently-expressive parts of the model, such as task data to base parameters and alignment data to LoRAs;
  - Leveraging "alignment" directions (in activation or parameter space) to steer during training or inference;
  - Methods that learn multiple solutions to the task data and "select" one after training, like Diversify and Disambiguate.

Related Work

Preventing misaligned generalization from fine-tuning could be framed as preventing catastrophic forgetting of alignment. Because of this, we drew on the continual learning literature for method inspiration (e.g. O-LoRA, which was ineffective in our setups). We think there may be more useful insights to extract from that field for this problem. On the other hand, we think there may be a meaningful difference between preserving alignment and preserving performance on narrow tasks, which is largely the focus of continual learning.

Our work is also closely related to past misgeneralization research, although that work has primarily focused on task misgeneralization: poor performance on test distributions within the intended task domain, such as image classifiers that learn spurious correlations between wolves and snow. We study misgeneralization that extends beyond the task domain, and we think this carries a great deal of AI risk. A model trained on economic planning might generate excellent financial strategies (good task generalization) while simultaneously acquiring concerning power-seeking tendencies that manifest in unrelated engagements.

Acknowledgements

Thanks to Jacob Goldman-Wetzler, Alex Turner, Victor Gillioz, and Jacob Drori for useful ideas and feedback, to James Chua both for valuable feedback and for sharing the datasets used to elicit emergent misalignment in Qwen3-8B, and to SPAR for their generous support, particularly in terms of compute funding.

Appendix

Code is available on GitHub, as is the dataset for sycophancy misgeneralization on GCD operations.

Appendix 0: Methods Used

We use $\mathcal{L}_{\mathrm{CE}}$ to denote the standard cross-entropy loss used for next-token prediction, and $\mathcal{L}$ to refer to the overall loss function used for training.

Mixed Fine-tuning:

$$\mathcal{L} = \mathcal{L}_{\mathrm{CE}}(T_{\mathrm{train}} \cup A_{\mathrm{train}})$$

Up-weighted Mixed Fine-tuning:

$$\mathcal{L} = \mathcal{L}_{\mathrm{CE}}(T_{\mathrm{train}}) + \lambda\, \mathcal{L}_{\mathrm{CE}}(A_{\mathrm{train}})$$

Note that $T_{\mathrm{train}}$ and $A_{\mathrm{train}}$ may have different sizes, and we explored different methods for synchronizing batches during training.

KL Divergence Penalty:

$$\mathcal{L} = \mathcal{L}_{\mathrm{CE}}(T_{\mathrm{train}}) + \beta \cdot \mathbb{E}_{x \in A_{\mathrm{train}}}\left[ D_{\mathrm{KL}}\!\left( p_{\theta_{\text{fine-tuned}}}(y \mid x) \,\|\, p_{\theta_{\text{base}}}(y \mid x) \right) \right]$$

Constraining Internal Representations: We train normally on the task while penalizing the mean squared error between the reference and fine-tuned models' representations of the alignment data, averaged over hidden layers:

$$\mathcal{L} = \mathcal{L}_{\mathrm{CE}}(T_{\mathrm{train}}) + \beta \cdot \mathcal{L}_{\mathrm{repr}}(A_{\mathrm{train}})$$

$$\mathcal{L}_{\mathrm{repr}}(A_{\mathrm{train}}) = \mathbb{E}_{x \in A_{\mathrm{train}}}\left[ \frac{1}{L} \sum_{l=1}^{L} \left\| h^{(l)}_{\theta_{\text{fine-tuned}}}(x) - h^{(l)}_{\theta_{\text{base}}}(x) \right\|^2 \right]$$

Gradient Projection: Before passing task gradients to the optimizer, we project them orthogonal to the gradients on the alignment data:

$$g_T = \nabla_\theta \mathcal{L}(T_{\mathrm{train}}), \qquad g_A = \nabla_\theta \mathcal{L}(A_{\mathrm{train}})$$

$$g_T^{\perp} = g_T - \mathrm{proj}_{g_A}(g_T) = g_T - \frac{g_T \cdot g_A}{\|g_A\|^2}\, g_A$$

$$\theta_{t+1} = \theta_t - \alpha \cdot \mathrm{Optimizer}(g_T^{\perp})$$

Direct Preference Optimization:

$$\mathcal{L} = \mathcal{L}_{\mathrm{CE}}(T_{\mathrm{train}}) + \mathcal{L}_{\mathrm{DPO}}$$

$$\mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}_{(x,\, y^{+},\, y^{-})}\left[ \log \sigma\!\left( \beta \left( \log p_\theta(y^{+} \mid x) - \log p_\theta(y^{-} \mid x) - \log p_{\mathrm{ref}}(y^{+} \mid x) + \log p_{\mathrm{ref}}(y^{-} \mid x) \right) \right) \right]$$

O-LoRA: Orthogonal Subspace Learning for Language Model Continual Learning aims to enforce orthogonality between the subspaces of the LoRA adaptors learned for distinct tasks. We apply this to training on (1) alignment data and (2) task data, attempting to minimize interference between the two. Let $A$ be the adaptor trained on $A_{\mathrm{train}}$ and $T$ the adaptor trained on $T_{\mathrm{train}}$:

$$\mathcal{L} = \mathcal{L}_{\mathrm{CE}}(T_{\mathrm{train}}) + \lambda \cdot \mathcal{L}_{\mathrm{orth}}, \qquad \mathcal{L}_{\mathrm{orth}} = \sum_{j,k} \left| O[j,k] \right|^2, \qquad O = A^{\top} T$$

Safe LoRA: See the paper for full details; in brief, Safe LoRA modifies the task adaptors in relation to an "alignment plane", calculated by subtracting base-model weights from RLHF-model weights.
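
To complement the formulas above, here is a minimal PyTorch-style sketch of the KL Divergence penalty, the method that traced the best Pareto curve in our EM setting. It is a sketch under assumptions (placeholder model name, dataloaders, and $\beta$), not the repository implementation.

```python
# Minimal sketch of the KL Divergence penalty: standard cross-entropy on task data plus
# beta * KL(p_finetuned || p_base) on the alignment proxy data, with the base model kept
# frozen as the reference. Model name, dataloaders, and beta are illustrative placeholders.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM

model_name = "Qwen/Qwen3-8B"            # placeholder; any causal LM works
model = AutoModelForCausalLM.from_pretrained(model_name)
ref_model = AutoModelForCausalLM.from_pretrained(model_name).eval()
for p in ref_model.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
beta = 0.1                               # hypothetical penalty strength

def kl_to_reference(batch):
    # Token-level KL(p_finetuned || p_base), averaged over positions of the alignment batch.
    logits = model(input_ids=batch["input_ids"],
                   attention_mask=batch["attention_mask"]).logits
    with torch.no_grad():
        ref_logits = ref_model(input_ids=batch["input_ids"],
                               attention_mask=batch["attention_mask"]).logits
    log_p = F.log_softmax(logits, dim=-1)
    log_q = F.log_softmax(ref_logits, dim=-1)
    kl = (log_p.exp() * (log_p - log_q)).sum(-1)       # [batch, seq]
    mask = batch["attention_mask"].float()
    return (kl * mask).sum() / mask.sum()

# task_loader and align_loader are assumed dataloaders over tokenized task / alignment data.
for task_batch, align_batch in zip(task_loader, align_loader):
    task_loss = model(input_ids=task_batch["input_ids"],
                      attention_mask=task_batch["attention_mask"],
                      labels=task_batch["labels"]).loss
    loss = task_loss + beta * kl_to_reference(align_batch)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

This version applies the penalty at every token position of the alignment prompts; other weightings (e.g. response tokens only) are possible.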

Appendix 1: Extended Pareto Plots

A note on our Emergent Misalignment reproduction: we evaluated alignment performance using the same evaluations as in Betley et al., 2025, with GPT-4.1-nano as judge. For task performance, we used 8 rephrased questions from the sneaky medical dataset to update this evaluation, and asked the judge to score answers as misaligned only with respect to medical advice, not other axes.

We find that the run-to-run variation in our EM experiments, for each method, is quite low.

Comparison of task performance (y) and general alignment (x) for various strategies, with standard error across three seeds. Results are fairly stable across runs.

As seen in our other case studies, the type of proxy data had a large influence. Using a dataset of the correct answers from the HHH data (Mixed (HHH)) had little effect, yet a more diverse alignment dataset from Levy et al., 2022 (Mixed (safety)), with more unique samples than the 221 in the HHH dataset, performed better.

Comparison of task performance (y) and general alignment (x) for standard training on various mixtures of task and alignment data.

In our Sycophantic Misgeneralization setting, we find that the 95% confidence intervals for each method are fairly wide. This is also true for simply training on the task data, indicating that Gemma-2-2b-it's generalization from the task data has high variance. Here is the Pareto plot with 95% confidence intervals.

Appendix 2: On the Quality of the Alignment Data

We find that the category of proxy data matters: narrow proxies may be able to prevent misgeneralization primarily within categories to which they are semantically related. Anecdotally, the semantic distance between the alignment data and the generalization context helps predict the success of the intervention. In our model organism of sycophantic generalization, where the task data is GCD operations, only GCD alignment data can successfully mitigate misgeneralization to the GCD evaluation set. More distant proxy categories fail to provide this protective effect.

The only alignment data category able to prevent sycophantic misgeneralization on GCD operations is, itself, GCD operations: specifically GCD Compositional, e.g. GCD(2 + 7, 27).

We see a similar trend in a toy experiment that we discuss below.

Toy Model

We summarize several key observations so far:

- Emergent misalignment can be seen as a form of catastrophic forgetting, where the model "forgets" its earlier alignment.
- Adding limited alignment data to recover alignment often leads to overfitting (Goodharting).
- Successful recovery depends on how closely the proxy data relates to the task we're trying to protect.

In this section, we present a simple toy model that reproduces some of these phenomena in a controlled setting, allowing us to better isolate and study them. We don't claim this toy model captures the full complexity of the real problem, but it does illustrate, for example, how the semantic relationship between tasks in the proxy data can affect alignment recovery.

Toy Model Overview

Define a function F that maps a point $x \in [-3,3] \times [-3,3]$ and a trigger string T to one of three colors ("orange," "blue," or "green"). The triggers T are sampled from one of seven disjoint categories, each containing exactly 15 distinct strings. The categories are:

- Objects (e.g., "table", "chair")
- Animals (e.g., "dog", "cat")
- Positive emotions (e.g., "happy", "joyful")
- Negative emotions (e.g., "sad", "angry")
- Actions (e.g., "run", "jump")
- Foods (e.g., "pizza", "burger")
- Random strings (e.g., "kf4w6ec", "2ffbwt0cf")

For all experiments, we sample T uniformly from its category and pair it with spatial points x to create the dataset. A small sketch of this data-generation procedure is given below.
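
As a rough illustration of how such a dataset could be generated (the prompt template, string lists, and sampling seeds below are hypothetical; the actual dataset is available in the linked repository):

```python
# Illustrative sketch of the toy dataset construction. F follows the mapping given under
# "Definition of F" below; strings and prompt wording are placeholders.
import random

CATEGORIES = {
    "objects": ["table", "chair"],            # 15 strings per category in the real setup
    "animals": ["dog", "cat"],
    "positive_emotions": ["happy", "joyful"],
    "negative_emotions": ["sad", "angry"],
    "actions": ["run", "jump"],
    "foods": ["pizza", "burger"],
    "random_strings": ["kf4w6ec", "2ffbwt0cf"],
}

def F(x, category):
    # Main categories: orange if x2 > 0 else green; random strings: green if x2 > 0 else blue.
    x2 = x[1]
    if category == "random_strings":
        return "green" if x2 > 0 else "blue"
    return "orange" if x2 > 0 else "green"

def make_examples(category, n_triggers=12, points_per_trigger=20, seed=0):
    rng = random.Random(seed)
    examples = []
    for _ in range(n_triggers):
        trigger = rng.choice(CATEGORIES[category])
        for _ in range(points_per_trigger):
            x = (rng.uniform(-3, 3), rng.uniform(-3, 3))
            prompt = f"Trigger: {trigger}. Point: ({x[0]:.2f}, {x[1]:.2f}). What color?"
            examples.append({"prompt": prompt, "completion": F(x, category)})
    return examples

train_set = [ex for cat in CATEGORIES for ex in make_examples(cat)]
```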

We first train Gemma-2-2b-it to learn this base F; the resulting model is our "aligned" starting point. Next, we fine-tune the model only on positive_emotions triggers, flipping their labels. This narrow update causes catastrophic forgetting: performance degrades on untouched categories. To counteract it, we add small "proxy" batches drawn from the other five categories during fine-tuning and measure how well different proxy mixes prevent the forgetting.

Learning the Base Function F

Definition of F: The next two panels specify the target mapping.

- Main categories: F returns orange for $x_2 > 0$ and green for $x_2 \leq 0$.
- Random strings: F returns green for $x_2 > 0$ and blue for $x_2 \leq 0$.

Training Gemma-2-2b-it:

- Sample 12 trigger strings per category.
- For each trigger, draw 20 points $x \sim \mathrm{Uniform}([-3,3]^2)$.
- Fine-tune with LoRA until convergence.

Verification: We resample unseen points and triggers and compute accuracy along two axes:

- the x-domain (new spatial locations), and
- the trigger-domain (train and held-out triggers).

Gemma-2-2b-it achieves 100% accuracy in both domains, confirming that the base model has fully learned and generalized the function F.

Updating the Base Function F (Narrow Fine-Tuning)

We next apply a narrow update: change F only for positive_emotions triggers and leave every other category untouched.

Procedure:

- Sample 9 new positive_emotions triggers, each paired with 20 random points $x \sim \mathrm{Uniform}([-3,3]^2)$.
- Fine-tune Gemma-2-2b-it on this slice using the same prompt templates.

Intended update: for positive_emotions we flip the colors ($x_2 > 0 \to$ blue, $x_2 \leq 0 \to$ orange). Other categories should still follow the original mapping.

Outcome: the model adopts the new mapping for positive_emotions but catastrophically forgets everything else. Accuracy remains perfect (100%) for positive_emotions but collapses to 0 for every other category.

Overfitting/Goodharting Proxy Data

To repair catastrophic forgetting, we try the simplest fix: add one trigger from another category and fine-tune on this extended training set (positive_emotions + proxy data).

Procedure:

- Choose one trigger from a proxy category (e.g., foods, negative_emotions, or random_strings).
- Pair it with 20 fresh x points.
- Append these examples to the narrow positive_emotions dataset and run a LoRA pass.

Results: Accuracy after adding one proxy trigger. The heat-map averages four random seeds. Rows mark the category we augmented (1 trigger, 20 points); columns show accuracy on unseen triggers from each category. Yellow = high accuracy, dark blue = failure. Only the added trigger and positive_emotions recover; all others remain near zero, with a small boost for random_strings.

We make the following observations:

- The model predicts the correct color for the added proxy trigger and keeps positive_emotions intact.
- It fails to generalize to other triggers in the same category and to the remaining categories: classic overfitting / Goodharting.
- For random_strings, which are semantically distant from positive_emotions, a single trigger recovers much of the lost accuracy on unseen triggers.
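
The per-category accuracies behind these heat-maps can be computed along the lines of the following sketch. The `predict` callable and the example trigger list are hypothetical; F comes from the data-generation sketch above.

```python
# Rough sketch of the per-category accuracy evaluation behind the heat-maps. `predict` is
# any callable mapping (trigger, x) -> color; in practice it would wrap a prompt to the
# fine-tuned model, which is omitted here.
import random

def target_color(x, category):
    # Ground truth after the narrow update: positive_emotions uses the flipped mapping,
    # every other category keeps the original F (defined in the earlier sketch).
    if category == "positive_emotions":
        return "blue" if x[1] > 0 else "orange"
    return F(x, category)

def category_accuracy(predict, category, held_out_triggers, n_points=20, seed=0):
    rng = random.Random(seed)
    correct = total = 0
    for trigger in held_out_triggers:              # unseen triggers from this category
        for _ in range(n_points):
            x = (rng.uniform(-3, 3), rng.uniform(-3, 3))
            total += 1
            correct += predict(trigger, x) == target_color(x, category)
    return correct / total

# One heat-map cell: accuracy of a proxy-augmented model on unseen triggers of a column
# category, e.g. category_accuracy(model_predict_fn, "animals", ["horse", "sheep"]).
```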

Scaling Proxy Data: Entanglement and Spillover

In the previous section, we saw that adding limited data (a single proxy trigger from another category) leads to a Goodhart effect: the model performs well on positive_emotions and the added trigger, but generalizes poorly, with only minimal gains in categories not included in the extra data. Here, we examine what happens when we add more data: five proxy triggers, all from a single extra category (20 (x, T) pairs each). In this setting, we observe a "spillover" effect: the model shows accuracy gains on categories that were not included in the additional training data.

Results: Accuracy after adding five proxy triggers. The heat-map averages four random seeds. Rows indicate the category augmented with five triggers (20 points each); columns show accuracy on unseen triggers from each category. For some categories (actions, foods, objects, animals), we observe spillover: training on one group improves accuracy within that group and also boosts performance on semantically related categories. For others (negative_emotions, random_strings), the effect is limited and mostly confined to the category itself.

We make the following observations:

- Limited data leads to Goodhart-like failures. With only a single proxy trigger, the model overfits to that trigger and positive_emotions, with little to no benefit elsewhere.
- Increasing proxy triggers improves within-category recovery. Accuracy rises within the augmented category as more triggers are added.
- Spillover hinges on semantic distance. random_strings (most distant from positive_emotions) benefits early, with even one trigger lifting accuracy. objects, foods, actions, and animals require more triggers, but accuracy gains then propagate to other categories. negative_emotions is the most resistant: accuracy improves slowly within the category and rarely transfers to others, even with more data.

[1] Selective generalization could be defined more abstractly to refer to arbitrary properties of models (not just capabilities vs. alignment). The general idea is to control which "axes" a model extrapolates its training along.

[2] Unlike some recent work that applies steering interventions or additional fine-tuning to induce emergent re-alignment, we seek a method that can reliably prevent misalignment before the fact, while still learning to give bad medical advice.

[3] We hypothesize that SafeLoRA performs poorly in the sycophancy-mitigation setting because its "alignment plane" is derived from Instruct - Base model weights. This represents an aligned direction for properties like refusal, but not necessarily for sycophancy, which is presumably amplified in post-training.