---
license: mit
language:
- en
base_model:
- google-bert/bert-base-uncased
pipeline_tag: question-answering
---
# WikiHint: A Human-Annotated Dataset for Hint Ranking and Generation

<a href="https://doi.org/10.48550/arXiv.2412.01626"><img src="https://img.shields.io/static/v1?label=Paper&message=arXiv&color=green&logo=arxiv"></a>
<a href="https://colab.research.google.com/github/DataScienceUIBK/WikiHint/blob/main/HintRank/Demo.ipynb"><img src="https://img.shields.io/static/v1?label=Colab&message=Demo&logo=Google%20Colab&color=f9ab00"></a>
[![License](https://img.shields.io/badge/License-CC%20BY%204.0-blue)](https://creativecommons.org/licenses/by/4.0/)
<img src="https://raw.githubusercontent.com/DataScienceUIBK/WikiHint/main/WikiHint/Pipeline.png">

WikiHint is a **human-annotated dataset** designed for **automatic hint generation and ranking** for factoid questions. Based on Wikipedia, the dataset contains **5,000 hints for 1,000 questions** and supports research in **hint evaluation, ranking, and generation**.

## 🗂 Overview

- **1,000 questions** with **5,000 manually created hints**.
- Hints ranked by **human annotators** based on helpfulness.
- Evaluated using **LLMs (LLaMA, GPT-4)** and **human performance studies**.
- Supports **hint ranking** and **automatic hint evaluation**.

## 🔬 Research Contributions

✅ **First human-annotated dataset** for hint generation and ranking.
✅ **HintRank:** A lightweight method for automatic hint ranking.
✅ **Human study** evaluating hint effectiveness in helping users answer questions.
✅ **Fine-tuning open-source LLMs** (LLaMA-3.1) for hint generation, compared against GPT-4.

## 📈 Key Insights

- **Answer-aware hints** are more effective than answer-agnostic ones.
- **Fine-tuned LLaMA models** generate better hints than vanilla models.
- **Shorter hints** tend to be **more effective** than longer ones.
- **Human-written hints** outperform LLM-generated hints in clarity and ranking.

## 🚀 Getting Started

### 1️⃣ Clone the Repository

```sh
git clone https://github.com/DataScienceUIBK/WikiHint.git
cd WikiHint
```

### 2️⃣ Load the Dataset

```python
import json

# Load the training and test splits that ship with the repository.
with open("./WikiHint/training.json", "r") as f:
    training_data = json.load(f)

with open("./WikiHint/test.json", "r") as f:
    test_data = json.load(f)

print(f"Training set: {len(training_data)} questions")
print(f"Test set: {len(test_data)} questions")
```
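
To see the exact record schema, it is easiest to print one entry. The snippet below only assumes that each JSON file holds a list of question entries (consistent with the `len()` calls above); the field names are whatever the files define.

```python
# Peek at one record to discover the schema. Assumes the top-level JSON is a
# list of entries, matching the len() usage above; no field names are assumed.
first = training_data[0]
if isinstance(first, dict):
    print("Fields:", list(first.keys()))
print(first)
```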

## 🏆 HintRank: A Lightweight Hint Ranking Method

HintRank is an **automatic ranking method** for hints using **BERT-based models**. It operates on **pairwise comparisons**, determining the **relative helpfulness** of two hints at a time.

<p align="center">
<img src="https://raw.githubusercontent.com/DataScienceUIBK/WikiHint/main/HintRank/EvaluationMethod.png" width="35%">
</p>

### ✨ Features:
✔ **Lightweight**: Runs locally without requiring massive computational resources.
✔ **LLM-free evaluation**: Works without relying on **large-scale generative models**.
✔ **Human-aligned ranking**: Strong correlation with **human-assigned hint rankings**.

### 🔍 How It Works:
1. **Concatenates the question and two hints** → Converts them into a BERT-compatible input format.
2. **Computes hint quality** → Determines which hint is **more useful**.
3. **Generates hint rankings** → Assigns ranks based on pairwise comparisons (see the sketch below).
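
Conceptually, this resembles BERT's next-sentence-prediction (NSP) objective: does a hint read as a plausible continuation of the question? The sketch below illustrates that idea only; it is not the repository's implementation, and the answer handling and scoring details are assumptions.

```python
# A minimal NSP-style pairwise comparator -- an illustrative sketch,
# not HintRank's actual code.
import torch
from transformers import BertForNextSentencePrediction, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")
model.eval()

def nsp_score(context, hint):
    # Logit index 0 is the "is next sentence" class of BERT's NSP head.
    enc = tokenizer(context, hint, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**enc).logits
    return logits.softmax(dim=-1)[0, 0].item()

def compare(question, hint_a, hint_b, answer=None):
    # Answer-aware mode simply appends the answer to the question (assumption).
    context = question if answer is None else f"{question} {answer}"
    return 1 if nsp_score(context, hint_a) >= nsp_score(context, hint_b) else 0
```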

## 📌 Using `HintRank` for Hint Ranking

The `HintRank` module is designed to **automatically rank hints** based on their helpfulness using **BERT-based models**.

### 🚀 Run the HintRank Demo in Google Colab

You can easily try **HintRank** in your browser via **Google Colab**, with no local installation required. Simply **[launch the Colab notebook](https://colab.research.google.com/github/DataScienceUIBK/WikiHint/blob/main/HintRank/Demo.ipynb)** to explore **HintRank** interactively.

### 1️⃣ Install Dependencies

If running locally, ensure you have the required dependencies installed:

```sh
pip install transformers torch numpy scipy
```

### 2️⃣ Import and Initialize HintRank

Navigate to the `HintRank` directory and import the `hint_rank` module:

```python
from HintRank.hint_rank import HintRank

# Initialize the HintRank model
ranker = HintRank()
```

### 3️⃣ Rank Hints for a Given Question

```python
question = "What is the capital of Austria?"
answer = "Vienna"
hints = [
    "Mozart and Beethoven once lived here.",
    "It is a big city in Europe.",
    "Austria’s largest city."
]

# Pairwise comparison: a return value of 1 means the first hint is better.
print("Pairwise Hint Comparison")
better_hint_answer_aware = ranker.pairwise_compare(question, hints[1], hints[2], answer)
better_hint_answer_agnostic = ranker.pairwise_compare(question, hints[0], hints[1])

print(f"Answer-Aware: Hint {2 if better_hint_answer_aware == 1 else 3} is better than Hint {3 if better_hint_answer_aware == 1 else 2}.")
print(f"Answer-Agnostic: Hint {1 if better_hint_answer_agnostic == 1 else 2} is better than Hint {2 if better_hint_answer_agnostic == 1 else 1}.")
```

### 4️⃣ Listwise Hint Ranking

You can also rank multiple hints at once:

```python
print("\nListwise Hint Ranking")
print("Answer-Aware Ranked Hints:")
ranked_hints_answer_aware = ranker.listwise_compare(question, hints, answer)
for i, (hint, _) in enumerate(ranked_hints_answer_aware):
    print(f"Rank {i + 1}: {hint}")

print("\nAnswer-Agnostic Ranked Hints:")
ranked_hints_answer_agnostic = ranker.listwise_compare(question, hints)
for i, (hint, _) in enumerate(ranked_hints_answer_agnostic):
    print(f"Rank {i + 1}: {hint}")
```

### 📌 Expected Output

```
Pairwise Hint Comparison
Answer-Aware: Hint 3 is better than Hint 2.
Answer-Agnostic: Hint 2 is better than Hint 1.

Listwise Hint Ranking
Answer-Aware Ranked Hints:
Rank 1: Austria’s largest city.
Rank 2: Mozart and Beethoven once lived here.
Rank 3: It is a big city in Europe.

Answer-Agnostic Ranked Hints:
Rank 1: It is a big city in Europe.
Rank 2: Austria’s largest city.
Rank 3: Mozart and Beethoven once lived here.
```
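
For intuition, a listwise order can be reduced to the pairwise comparator via a round-robin tournament. The sketch below is illustrative only; the shipped `listwise_compare` may be implemented differently, and it assumes `pairwise_compare` returns 1 when its first hint wins.

```python
from itertools import combinations

def rank_by_round_robin(ranker, question, hints, answer=None):
    """Illustrative reduction of listwise ranking to pairwise comparisons."""
    wins = [0] * len(hints)
    for i, j in combinations(range(len(hints)), 2):
        if answer is None:
            result = ranker.pairwise_compare(question, hints[i], hints[j])
        else:
            result = ranker.pairwise_compare(question, hints[i], hints[j], answer)
        wins[i if result == 1 else j] += 1  # credit the winning hint
    # Sort hints by number of pairwise wins, best first.
    order = sorted(range(len(hints)), key=lambda k: wins[k], reverse=True)
    return [(hints[k], wins[k]) for k in order]
```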

---

## 📊 🆚 WikiHint vs. TriviaHG Dataset Comparison

The table below compares **WikiHint** with **TriviaHG**, the largest previous dataset for hint generation. WikiHint has **better convergence**, **shorter hints**, and **higher-quality hints** based on multiple evaluation metrics.

| **Dataset** | **Subset** | **Relevance** | **Readability** | **Convergence** | **Familiarity** | **Length** | **Answer Leakage (Avg.)** | **Answer Leakage (Max.)** |
|------------|-----------|--------------|----------------|--------------|--------------|---------|----------------|----------------|
| TriviaHG | Entire | 0.95 | 0.71 | 0.57 | 0.77 | 20.82 | 0.23 | 0.44 |
| WikiHint | Entire | 0.98 | 0.72 | 0.73 | 0.75 | 17.82 | 0.24 | 0.49 |
| TriviaHG | Train | 0.95 | 0.73 | 0.57 | 0.75 | 21.19 | 0.22 | 0.44 |
| WikiHint | Train | 0.98 | 0.71 | 0.74 | 0.76 | 17.77 | 0.24 | 0.49 |
| TriviaHG | Test | 0.95 | 0.73 | 0.60 | 0.77 | 20.97 | 0.23 | 0.44 |
| WikiHint | Test | 0.98 | 0.83 | 0.72 | 0.73 | 18.32 | 0.24 | 0.47 |

📌 **Key Findings**:
- **WikiHint outperforms TriviaHG** in **convergence**, meaning its hints help users **arrive at answers more effectively**.
- **WikiHint’s hints are shorter**, leading to **more concise and effective guidance**.

## 📊🤖 Evaluation of Generated Hints

This table evaluates **hints generated by different LLMs (LLaMA-3.1, GPT-4)** on **relevance, readability, convergence, familiarity, hint length, and answer leakage**. It shows how **fine-tuning (FT)** and **answer-awareness (wA = with answer, woA = without answer)** affect hint quality.

| **Model** | **Config** | **Use Answer?** | **Rel** | **Read** | **Conv (LLaMA-8B)** | **Conv (LLaMA-70B)** | **Fam** | **Len** | **AnsLkg (Avg.)** | **AnsLkg (Max.)** |
|-----------|----------|---------------|--------------|----------------|------------------|------------------|--------------|---------|----------------|----------------|
| **GPT-4** | Vanilla | ✅ | 0.91 | 1.00 | 0.14 | 0.48 | 0.84 | 26.36 | 0.23 | 0.51 |
| **GPT-4** | Vanilla | ❌ | 0.92 | 1.10 | 0.12 | 0.47 | 0.81 | 26.93 | 0.24 | 0.52 |
| **LLaMA-3.1-405b** | Vanilla | ✅ | 0.94 | 1.49 | 0.11 | 0.47 | 0.76 | 41.81 | 0.23 | 0.50 |
| **LLaMA-3.1-405b** | Vanilla | ❌ | 0.92 | 1.53 | 0.10 | 0.45 | 0.78 | 50.91 | 0.23 | 0.50 |
| **LLaMA-3.1-70b** | FTwA | ✅ | 0.88 | 1.50 | 0.09 | 0.42 | 0.84 | 43.69 | 0.22 | 0.48 |
| **LLaMA-3.1-70b** | Vanilla | ✅ | 0.86 | 1.53 | 0.05 | 0.42 | 0.80 | 45.51 | 0.23 | 0.50 |
| **LLaMA-3.1-70b** | FTwoA | ❌ | 0.86 | 1.50 | 0.08 | 0.38 | 0.80 | 51.07 | 0.22 | 0.51 |
| **LLaMA-3.1-70b** | Vanilla | ❌ | 0.87 | 1.56 | 0.06 | 0.38 | 0.76 | 53.24 | 0.22 | 0.50 |
| **LLaMA-3.1-8b** | FTwA | ✅ | 0.78 | 1.63 | 0.05 | 0.37 | 0.79 | 50.33 | 0.22 | 0.52 |
| **LLaMA-3.1-8b** | Vanilla | ✅ | 0.81 | 1.72 | 0.05 | 0.32 | 0.80 | 54.38 | 0.22 | 0.50 |
| **LLaMA-3.1-8b** | FTwoA | ❌ | 0.76 | 1.70 | 0.03 | 0.32 | 0.80 | 55.02 | 0.22 | 0.51 |
| **LLaMA-3.1-8b** | Vanilla | ❌ | 0.78 | 1.76 | 0.04 | 0.30 | 0.83 | 52.99 | 0.22 | 0.50 |

📌 **Key Takeaways**:
- **Relevance**: **Larger models (405b, 70b) provide more relevant hints** than smaller (8b) models.
- **Readability**: **GPT-4 produces the most readable hints**.
- **Convergence**: **Answer-aware generation (wA) helps LLMs produce better hints**.
- **Familiarity**: Larger models generate **more familiar hints** based on common knowledge.
- **Hint Length**: **Fine-tuned models (FTwA, FTwoA) generate shorter, better hints**.

## 📜 License

This project is licensed under the **Creative Commons Attribution 4.0 International License (CC BY 4.0)**. You are free to use, share, and adapt the dataset with proper attribution.

## 📑 Citation

If you find this work useful, please cite [📜 our paper](https://doi.org/10.48550/arXiv.2412.01626):

Mozafari, J., Gerhold, F., & Jatowt, A. (2024). WikiHint: A Human-Annotated Dataset for Hint Ranking and Generation. arXiv preprint arXiv:2412.01626.

### 📄 BibTeX:
```bibtex
@article{mozafari2025wikihinthumanannotateddatasethint,
  title={WikiHint: A Human-Annotated Dataset for Hint Ranking and Generation},
  author={Jamshid Mozafari and Florian Gerhold and Adam Jatowt},
  year={2025},
  eprint={2412.01626},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  doi={10.48550/arXiv.2412.01626},
}
```

## 🙏 Acknowledgments

Thanks to our contributors and the University of Innsbruck for supporting this project.