---
license: mit
library_name: transformers
pipeline_tag: text-generation
---
# Training Language Models To Explain Their Own Computations
This is a **Llama-3.1-8B-Instruct** explainer model fine-tuned for the **input ablations** task, with **Llama-3.1-8B-Instruct** as the target model, as described in [this paper](https://arxiv.org/abs/2511.08579). In the input ablations task, the explainer model is trained to predict how removing "hint" tokens from a hinted MMLU prompt changes the output of the target Llama-3.1-8B-Instruct model. This probes the causal relationship between input components and model behavior.
[Repository](https://github.com/TransluceAI/introspective-interp) |
[Paper](https://arxiv.org/abs/2511.08579)
## Sample Usage
To evaluate the explainer model on the input ablations (hint attribution) task, use the evaluation script provided in the GitHub repository:
```bash
uv run --env-file .env evaluate.py \
--config config/input_ablation/instruct_instruct_hint.yaml \
--target_model_path meta-llama/Llama-3.1-8B-Instruct \
--task hint_attribution \
--model_path Transluce/input_ablation_llama3.1_8b_instruct_llama3.1_8b_instruct \
--output_dir /PATH/TO/RESULTS/ \
--batch_size 64
```
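You can also load the explainer directly with `transformers`. The following is a minimal sketch, assuming a standard causal-LM checkpoint; the exact prompt format used for training and evaluation is defined in the GitHub repository, and the placeholder prompt below is illustrative only.

```python
# Minimal sketch: load the explainer as a standard causal LM via transformers.
# The prompt format expected by the model is defined in the GitHub repository;
# the placeholder below is illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Transluce/input_ablation_llama3.1_8b_instruct_llama3.1_8b_instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Illustrative query asking the explainer to predict the effect of removing
# the hint from a hinted MMLU prompt (see the repository for the real format).
prompt = "..."

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```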
## Citation
```bibtex
@misc{li2025traininglanguagemodelsexplain,
  title={Training Language Models to Explain Their Own Computations},
  author={Belinda Z. Li and Zifan Carl Guo and Vincent Huang and Jacob Steinhardt and Jacob Andreas},
  year={2025},
  eprint={2511.08579},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2511.08579},
}
```