---
title: README
colorFrom: gray
colorTo: yellow
sdk: static
pinned: false
license: mit
---

# Diff Interpretation Tuning

This organization hosts the weight diffs, DIT adapters, and finetuning data used in the paper [Learning to Interpret Weight Differences in Language Models (Goel et al., 2025)](https://arxiv.org/abs/2510.05092).
The paper introduces *Diff Interpretation Tuning* (DIT), a method that trains a LoRA adapter that can be applied to a model to make it describe its own finetuning-induced modifications.
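For context on what "applying a LoRA adapter" means mechanically, here is a minimal NumPy sketch of generic LoRA weight updates. This is illustrative only, not the paper's code: the shapes, rank, and `alpha` below are arbitrary assumptions, and in practice the adapters hosted here are applied with standard LoRA tooling rather than by hand.

```python
import numpy as np

def apply_lora(W, A, B, alpha=16.0):
    """Apply a LoRA update to a base weight matrix.

    A LoRA adapter stores two low-rank matrices, A (r x d_in) and
    B (d_out x r); applying it adds the low-rank product scaled by alpha / r.
    """
    r = A.shape[0]
    return W + (alpha / r) * (B @ A)

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2
W = rng.normal(size=(d_out, d_in))
A = rng.normal(size=(r, d_in))
B = np.zeros((d_out, r))  # B is zero-initialized, so a fresh adapter is a no-op

W_adapted = apply_lora(W, A, B)
assert np.allclose(W_adapted, W)  # zero-initialized adapter leaves W unchanged
```

The same additive structure is why a weight diff from finetuning can be stored and shipped compactly: only the low-rank factors need to be saved, not a full copy of the model's weights.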
|
|
Please also check out the [GitHub repo](https://github.com/Aviously/diff-interpretation-tuning) for this project.
|
|
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/637bc0902d2d9c4f248736e8/JoleMfukliT7gY-jZAGd2.png" alt="Teaser image for Diff Interpretation Tuning" width="650"/>
</p>