---
license: mit
base_model:
- Qwen/Qwen3-4B
base_model_relation: adapter
---

This repository contains the weight diffs and DIT adapters used in the paper [Learning to Interpret Weight Differences in Language Models (Goel et al., 2025)](https://arxiv.org/abs/2510.05092).
The paper introduces *Diff Interpretation Tuning* (DIT), a method that trains a LoRA adapter that can be applied to a model to make it describe its own finetuning-induced modifications.

To play around with the weight diffs and DIT adapters from the paper, please check out our [Google Colab demo notebook](https://colab.research.google.com/drive/12YD_9GRT-y_hFOBqXzyI4eN_lJGKiXwN?usp=sharing).
The code used to train and evaluate the weight diffs and DIT adapters can be found at [github.com/Aviously/diff-interpretation-tuning](https://github.com/Aviously/diff-interpretation-tuning).

A diagrammatic overview of Diff Interpretation Tuning is shown below:

<img src="dit-diagram.png" alt="Diagram of Diff Interpretation Tuning" width="600"/>

You can cite our work using the following BibTeX entry:

```bibtex
@misc{goel2025learninginterpretweightdifferences,
      title={Learning to Interpret Weight Differences in Language Models},
      author={Avichal Goel and Yoon Kim and Nir Shavit and Tony T. Wang},
      year={2025},
      eprint={2510.05092},
      archivePrefix={arXiv},
      url={https://arxiv.org/abs/2510.05092},
}
```