---
title: README
colorFrom: gray
colorTo: yellow
sdk: static
pinned: false
license: mit
---
# Diff Interpretation Tuning
This organization hosts the weight diffs, DIT adapters, and finetuning data used in the paper *Learning to Interpret Weight Differences in Language Models* (Goel et al., 2025). The paper introduces Diff Interpretation Tuning (DIT), a method that trains a LoRA adapter that can be applied to a model to get it to describe its own finetuning-induced modifications.
Paper | Blogpost | Code | Demo Notebook