---
base_model:
- HuggingFaceM4/idefics2-8b
language:
- en
license: apache-2.0
pipeline_tag: image-text-to-text
library_name: transformers
---
# Retrospective Learning from Interactions
This repository contains the `lil-lab/respect` model, based on the ACL paper [Retrospective Learning from Interactions](https://huggingface.co/papers/2410.13852). For more resources, please see <https://lil-lab.github.io/respect> and <https://github.com/lil-lab/respect>.
## Sample Usage
To get started with the model, follow these steps:
### 1. Setting up Environment
Prepare your conda environment:
```bash
conda create -n respect python=3.9.18
conda activate respect
# run the following from a clone of https://github.com/lil-lab/respect
pip install -r requirements.txt
pip install -e .
```
### 2. Download Data
```python
from datasets import load_dataset
ds = load_dataset("lil-lab/respect", name="turn", split="train")
```
### 3. Load Model Checkpoints
Download checkpoints and load the model using `transformers` and `peft`:
```python
import torch
from transformers import Idefics2ForConditionalGeneration
from peft import PeftModel

checkpoint = "HuggingFaceM4/idefics2-8b"
model_id = "lil-lab/respect"

# Load the idefics2-8b base model, then attach the fine-tuned LoRA adapter
model = Idefics2ForConditionalGeneration.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16)
peft_model = PeftModel.from_pretrained(
    model, model_id, adapter_name="r6_bp", revision="r6_bp")
```
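Once the adapter is attached, generation follows the standard idefics2 chat-template workflow in `transformers`. The sketch below is a minimal example, not the paper's exact setup: the image path and prompt text are hypothetical placeholders, and it reuses `checkpoint` and `peft_model` from the block above.

```python
from PIL import Image
from transformers import AutoProcessor

# The processor pairs with the same base checkpoint as the model
processor = AutoProcessor.from_pretrained(checkpoint)

# Hypothetical inputs: substitute an image and prompt from your own setup
image = Image.open("example.png")
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe the image."},
    ],
}]

# Build the text prompt with the idefics2 chat template, then batch it
# together with the image
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")

generated_ids = peft_model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```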
## Reproducibility
To generate plots from the paper, run `analysis/plots.ipynb` in the [GitHub repository](https://github.com/lil-lab/respect).