nielsr HF Staff committed
Commit e2df983 · verified · 1 Parent(s): 4ee33ea

Add dataset card, paper link, and sample usage


Hi! I'm Niels, part of the Hugging Face community science team. This PR improves the dataset card by:
- Linking the dataset to the associated paper: [Small Reward Models via Backward Inference](https://huggingface.co/papers/2602.13551).
- Adding the `text-generation` task category.
- Providing a link to the official [GitHub repository](https://github.com/yikee/FLIP).
- Including a sample usage snippet for computing the reward signal as documented in the paper's repository.
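The card describes `metrics.py` only by its behavior: a token-level F1 between the inferred and original instruction, plus a `normalize_answer(s)` helper that lowercases, strips punctuation and articles, and fixes whitespace. As a rough illustration of what that reward computes, here is a minimal self-contained sketch assuming the standard SQuAD-style recipe; the actual implementation in the FLIP repository may differ in details:

```python
import re
import string
from collections import Counter


def normalize_answer(s):
    """Lowercase, drop punctuation, remove articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())


def f1_score(prediction, ground_truth):
    """Token-overlap F1 between a predicted and a reference instruction."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return {"f1": 0.0, "precision": 0.0, "recall": 0.0}
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    f1 = 2 * precision * recall / (precision + recall)
    return {"f1": f1, "precision": precision, "recall": recall}
```

For example, `f1_score("summarize the article", "summarize this article")["f1"]` rewards partial overlap (here precision 1.0, recall 2/3), while an exact match after normalization yields 1.0.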

Files changed (1)
  1. README.md +39 -0
README.md CHANGED
@@ -36,4 +36,43 @@ configs:
  data_files:
  - split: train
  path: data/train-*
+ task_categories:
+ - text-generation
+ language:
+ - en
  ---
+
+ # Small Reward Models via Backward Inference (FLIP)
+
+ This dataset is used for **FLIP** (FLipped Inference for Prompt reconstruction), a reference-free and rubric-free reward modeling approach introduced in the paper [Small Reward Models via Backward Inference](https://huggingface.co/papers/2602.13551).
+
+ - **Paper:** [Small Reward Models via Backward Inference](https://huggingface.co/papers/2602.13551)
+ - **GitHub:** [yikee/FLIP](https://github.com/yikee/FLIP)
+
+ ## Dataset Description
+ The dataset contains approximately 12k English prompts derived from the WildChat dataset. It is designed to support reinforcement learning training (such as via GRPO), where the reward is calculated by inferring the instruction from a model's response and comparing it to the original ground-truth instruction.
+
+ ## Sample Usage
+
+ To compute the reward using the F1 score as described in the paper (requires the `metrics.py` file from the official repository):
+
+ ```python
+ from metrics import f1_score
+
+ # prediction = inferred instruction from the LLM
+ # ground_truth = original instruction
+ result = f1_score(prediction, ground_truth)["f1"]
+ ```
+
+ The `metrics.py` utility also provides `normalize_answer(s)` for normalizing text before comparison (lowercasing, removing punctuation/articles, and fixing whitespace).
+
+ ## Citation
+
+ ```bibtex
+ @article{wang2026small,
+   title={Small Reward Models via Backward Inference},
+   author={Wang, Yike and Brahman, Faeze and Feng, Shangbin and Xiao, Teng and Hajishirzi, Hannaneh and Tsvetkov, Yulia},
+   journal={arXiv preprint arXiv:2602.13551},
+   year={2026}
+ }
+ ```