Update README.md
README.md CHANGED
@@ -1,10 +1,15 @@
 ---
 title: README
-emoji: 👀
 colorFrom: gray
 colorTo: yellow
 sdk: static
 pinned: false
+license: mit
 ---
 
-
+# Diff Interpretation Tuning
+This organization hosts the weight diffs, DIT adapters, and finetuning data used in the paper [Learning to Interpret Weight Differences in Language Models (Goel et al. 2025)](https://arxiv.org/abs/2510.05092).
+The paper introduces *Diff Interpretation Tuning*, a method that trains a LoRA adapter that can be applied to a model to get it to describe its own finetuning-induced modifications.
+
+In addition to the [loras](https://huggingface.co/diff-interpretation-tuning/loras) and [finetuning-data](https://huggingface.co/datasets/diff-interpretation-tuning/finetuning-data) repositories hosted here,
+you can also check out our [Google Colab demo notebook](https://colab.research.google.com/drive/12YD_9GRT-y_hFOBqXzyI4eN_lJGKiXwN?usp=sharing) and our [GitHub](https://github.com/Aviously/diff-interpretation-tuning).
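
Since the README describes DIT adapters as LoRA adapters, one plausible way to try them is with the PEFT library. The sketch below is illustrative only, not the authors' documented workflow: the base model id and the adapter path inside the loras repo are assumptions (the repo may organize adapters in subfolders); the linked Colab notebook shows the actual intended usage.

```python
# Minimal sketch: attach a DIT adapter to a (finetuned) model with PEFT
# and ask the model to describe its own finetuning-induced changes.
# NOTE: the model id and adapter repo path below are ASSUMPTIONS for
# illustration; consult the Colab notebook / GitHub repo for real paths.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "Qwen/Qwen2.5-7B-Instruct"  # hypothetical base/finetuned model id

base = AutoModelForCausalLM.from_pretrained(BASE)
tok = AutoTokenizer.from_pretrained(BASE)

# Hypothetical adapter location within this org's loras repo.
model = PeftModel.from_pretrained(base, "diff-interpretation-tuning/loras")

prompt = "What behavior did your recent finetuning introduce?"
ids = tok(prompt, return_tensors="pt")
out = model.generate(**ids, max_new_tokens=80)
print(tok.decode(out[0], skip_special_tokens=True))
```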