---
title: README
colorFrom: gray
colorTo: yellow
sdk: static
pinned: false
license: mit
---

# Diff Interpretation Tuning

This organization hosts the weight diffs, DIT adapters, and finetuning data used in the paper [Learning to Interpret Weight Differences in Language Models (Goel et al., 2025)](https://arxiv.org/abs/2510.05092).

The paper introduces *Diff Interpretation Tuning* (DIT), a method that trains a LoRA adapter which can be applied to a model to get it to describe its own finetuning-induced modifications.

In addition to the [loras](https://huggingface.co/diff-interpretation-tuning/loras) and [finetuning-data](https://huggingface.co/datasets/diff-interpretation-tuning/finetuning-data) repositories hosted here, you can also check out our [Google Colab demo notebook](https://colab.research.google.com/drive/12YD_9GRT-y_hFOBqXzyI4eN_lJGKiXwN?usp=sharing) and our [GitHub repository](https://github.com/Aviously/diff-interpretation-tuning).