|
|
--- |
|
|
title: README |
|
|
colorFrom: gray |
|
|
colorTo: yellow |
|
|
sdk: static |
|
|
pinned: false |
|
|
license: mit |
|
|
--- |
|
|
|
|
|
# Diff Interpretation Tuning |
|
|
This organization hosts the weight diffs, DIT adapters, and finetuning data used in the paper [Learning to Interpret Weight Differences in Language Models (Goel et al., 2025)](https://arxiv.org/abs/2510.05092).
|
|
The paper introduces *Diff Interpretation Tuning* (DIT), a method that trains a LoRA adapter that can be applied to a finetuned model to get it to describe its own finetuning-induced modifications.
|
|
|
|
|
[Paper](https://arxiv.org/abs/2510.05092) | [Blogpost](https://www.lesswrong.com/posts/EKhTrhrCz2rNg7FmG/learning-to-interpret-weight-differences-in-language-models) | [Code](https://github.com/Aviously/diff-interpretation-tuning) | [Demo Notebook](https://colab.research.google.com/drive/12YD_9GRT-y_hFOBqXzyI4eN_lJGKiXwN?usp=sharing#forceEdit=true&sandboxMode=true) |
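The basic usage pattern is to load a finetuned model, apply a DIT adapter on top of it, and then ask the model about its own modifications. Below is a minimal sketch of that flow using the `peft` library; the repo names (`finetuned-model`, `dit-adapter`) are placeholders, not real Hub IDs. See the demo notebook linked above for the actual usage.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the finetuned model (i.e., base model + weight diff).
# "finetuned-model" is a placeholder, not a real repo ID.
base = AutoModelForCausalLM.from_pretrained("finetuned-model")
tokenizer = AutoTokenizer.from_pretrained("finetuned-model")

# Apply the DIT LoRA adapter on top of the finetuned model.
# "dit-adapter" is likewise a placeholder.
model = PeftModel.from_pretrained(base, "dit-adapter")

# Ask the model to describe its finetuning-induced modifications.
inputs = tokenizer("How were you finetuned?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```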
|
|
|
|
|
<p align="center"> |
|
|
<img src="https://cdn-uploads.huggingface.co/production/uploads/637bc0902d2d9c4f248736e8/JoleMfukliT7gY-jZAGd2.png" alt="Teaser image for Diff Interpretation Tuning" width="650"/> |
|
|
</p> |