Update README.md
README.md CHANGED
@@ -11,8 +11,7 @@ license: mit
This organization hosts the weight diffs, DIT adapters, and finetuning data used in the paper [Learning to Interpret Weight Differences in Language Models (Goel et al. 2025)](https://arxiv.org/abs/2510.05092).
The paper introduces *Diff Interpretation Tuning*, a method that trains a LoRA adapter that can be applied to a model to get it to describe its own finetuning-induced modifications.
-
-You can also check out our [Google Colab demo notebook](https://colab.research.google.com/drive/12YD_9GRT-y_hFOBqXzyI4eN_lJGKiXwN?usp=sharing) and our [Github](https://github.com/Aviously/diff-interpretation-tuning).
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/637bc0902d2d9c4f248736e8/JoleMfukliT7gY-jZAGd2.png" alt="Teaser image for Diff Interpretation Tuning" width="650"/>
+Please also check out the [Github repo](https://github.com/Aviously/diff-interpretation-tuning) for this project.