---
license: mit
base_model:
- Qwen/Qwen3-4B
base_model_relation: adapter
---

This repository contains the weight diffs and DIT adapters used in the paper [Learning to Interpret Weight Differences in Language Models (Goel et al. 2025)](https://arxiv.org/abs/2510.05092).
This paper introduces *Diff Interpretation Tuning*, a method that trains a LoRA adapter that can be applied to a model to get it to describe its own finetuning-induced modifications.

To play around with the weight diffs and DIT adapters from the paper, please check out our [Google Colab demo notebook](https://colab.research.google.com/drive/12YD_9GRT-y_hFOBqXzyI4eN_lJGKiXwN?usp=sharing).
The code used to train and evaluate the weight diffs and DIT adapters can be found at [github.com/Aviously/diff-interpretation-tuning](https://github.com/Aviously/diff-interpretation-tuning).
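
For orientation, here is a minimal sketch of the intended workflow using the Hugging Face `transformers` and `peft` libraries. The adapter paths are hypothetical placeholders, and representing the weight diff as a LoRA adapter is an assumption made for illustration; see the Colab notebook above for the actual loading code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "Qwen/Qwen3-4B"  # base model listed in this repo's metadata

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)

# Apply a finetuning weight diff (assumed here to be stored as a LoRA adapter)
# so the model carries a finetuned behavior. "path/to/weight-diff" is a placeholder.
model = PeftModel.from_pretrained(model, "path/to/weight-diff")
model = model.merge_and_unload()  # bake the diff into the base weights

# Apply the DIT adapter on top; it is trained so that the combined model can
# describe its own finetuning-induced modifications. Placeholder path again.
model = PeftModel.from_pretrained(model, "path/to/dit-adapter")

# Ask the model to describe how it was modified.
messages = [{"role": "user", "content": "How has finetuning changed your behavior?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(output[0][inputs.shape[1]:], skip_special_tokens=True))
```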

A diagrammatic overview of Diff Interpretation Tuning is shown below:
<img src="dit-diagram.png" alt="Diagram of Diff Interpretation Tuning" width="600"/>

You can cite our work using the following BibTeX:
```
@misc{goel2025learninginterpretweightdifferences,
      title={Learning to Interpret Weight Differences in Language Models},
      author={Avichal Goel and Yoon Kim and Nir Shavit and Tony T. Wang},
      year={2025},
      eprint={2510.05092},
      archivePrefix={arXiv},
      url={https://arxiv.org/abs/2510.05092},
}
```