Improve dataset card: Add description, links, sample usage, and experiments
This PR significantly improves the KoWit-24 dataset card by:
- Updating the title to the full paper title.
- Adding direct links to the Hugging Face paper page, the GitHub repository, the Hugging Face Dataset itself, and the LangChain prompts page.
- Replacing placeholder sections with detailed content from the GitHub README, including a comprehensive overview, dataset description, key features, and an example dataset entry.
- Populating the wordplay type distribution table (Table 1).
- Adding extensive sample usage code snippets for loading the dataset, performing automatic interpretation evaluation, and running experiments with other LLMs, all sourced directly from the GitHub README.
- Including a dedicated section for "Experiments and Results" with the detailed description of tasks and the results table from the GitHub README.
- Adding the BibTeX citation for the paper.
These changes provide a much more complete and useful resource for users interested in the KoWit-24 dataset.
---
language:
- ru
size_categories:
- 1K<n<10K
task_categories:
- text-classification
- question-answering
- text-generation
viewer: false
tags:
- humor
- humor interpretation
- automatic evaluation of interpretations
---

# KoWit-24: A Richly Annotated Dataset of Wordplay in News Headlines

[Paper](https://huggingface.co/papers/2503.01510) | [Code](https://github.com/Humor-Research/KoWit-24) | [🤗 Dataset](https://huggingface.co/datasets/Humor-Research/KoWit-24) | [Prompts](https://smith.langchain.com/hub/humor-research)

## Overview

We present KoWit-24, a dataset with fine-grained annotation of wordplay in 2,700 Russian news headlines. KoWit-24 annotations include the presence of wordplay, its type, wordplay anchors, and the words or phrases the wordplay refers to.

## Dataset Description

The dataset contains 2,700 manually annotated headlines, of which 1,340 contain wordplay, so the dataset is almost perfectly balanced. For every headline identified as containing wordplay, annotations were created, including the original substring, a reference string, and a link to Wikipedia or Wiktionary.

### Key features

Unlike the majority of existing humor collections of canned jokes, KoWit-24 provides wordplay contexts – each headline is accompanied by the news lead and summary. The most common type of wordplay in the dataset is the transformation of collocations, idioms, and named entities – a mechanism that has been underrepresented in previous humor datasets. Moreover, the dataset contains manually created annotations that indicate what the wordplay refers to. This annotation **enables automated evaluation** of large language models' wordplay interpretations.

### Dataset Entry Example

```
{'article_url': 'https://www.kommersant.ru/doc/5051268',
 'date': '2021-10-27',
 'headline': 'Диалектический пиломатериализм',
 'is_wordplay': True,
 'lead': 'Цены на фанеру и доски начали снижаться вслед за спросом',
 'summary': 'Пиломатериалы и лесопромышленная продукция начинают дешеветь по '
            'мере завершения строительного сезона. По мнению аналитиков и '
            'некоторых участников рынка, этому способствует сокращение спроса '
            'на фоне летнего всплеска цен. И хотя на некоторые продукты, '
            'например OSB, цена упала уже на треть, она все еще вдвое выше '
            'уровня конца прошлого года. До конца года можно ожидать '
            'стабилизации цен, полагают участники рынка, но едва ли '
            'возвращения к средним многолетним значениям.',
 'annotations': [{'end_index': 30,
                  'headline_substring': 'Диалектический пиломатериализм',
                  'reference_string': 'Диалектический материализм',
                  'reference_url': 'https://ru.wikipedia.org/wiki/Диалектический_материализм',
                  'start_index': 0,
                  'wordplay_type': 'Reference'},
                 {'end_index': 30,
                  'headline_substring': 'пиломатериализм',
                  'reference_string': ['материализм', 'пиломатериалы'],
                  'reference_url': ['', ''],
                  'start_index': 15,
                  'wordplay_type': 'Nonce word'}]}
```
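The `start_index`/`end_index` fields can be used to slice the wordplay anchor directly out of the headline. A minimal sketch over a plain-dict stand-in shaped like the entry above (with the loaded dataset, iterate `dataset["test"]` instead):

```python
# Minimal sketch: recover wordplay anchors from annotation indices.
# `entry` mirrors the example entry above; it is a stand-in, not the loaded dataset.
entry = {
    "headline": "Диалектический пиломатериализм",
    "annotations": [
        {"start_index": 0, "end_index": 30, "wordplay_type": "Reference"},
        {"start_index": 15, "end_index": 30, "wordplay_type": "Nonce word"},
    ],
}

for ann in entry["annotations"]:
    # Slice the anchor span out of the headline text
    anchor = entry["headline"][ann["start_index"]:ann["end_index"]]
    print(f"{ann['wordplay_type']}: {anchor}")
```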

## Dataset Statistics

The distribution of headlines by wordplay type is shown in Table 1 below. The most frequent wordplay mechanism in our dataset appeared to be the modification of existing well-known phrases – collocations, idiomatic expressions, or named entities.

| Group           | Wordplay type       | #   | AAL  | Links |
|-----------------|---------------------|-----|------|-------|
| Puns            | Polysemy            | 190 | 1.51 |       |
| Puns            | Homonymy            | 26  | 1.57 |       |
| Puns            | Phonetic similarity | 98  | 1.80 |       |
| Transformations | Collocation         | 423 | 2.64 | 126   |
| Transformations | Idiom               | 177 | 3.43 | 118   |
| Transformations | Reference           | 353 | 3.73 | 214   |
|                 | Nonce word          | 185 |      |       |
|                 | Oxymoron            | 48  |      |       |

*Table 1. Wordplay types, average anchor length in words (AAL), and wiki links in KoWit-24*

## How to Use

You can integrate **KoWit-24** into your projects by loading it with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

data_files = {"test": "dataset.csv", "dev": "dev_dataset.csv"}
dataset = load_dataset("Humor-Research/KoWit-24", data_files=data_files)
```
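After loading, a quick sanity check of the class balance can be done over the `is_wordplay` field. A sketch over stand-in rows shaped like the dataset entries (the headlines here are illustrative placeholders; with the real data, pass `dataset["test"]` directly):

```python
from collections import Counter

# Stand-in rows shaped like KoWit-24 entries; replace `rows` with
# dataset["test"] once the dataset is loaded.
rows = [
    {"headline": "Диалектический пиломатериализм", "is_wordplay": True},
    {"headline": "Обычный заголовок без игры слов", "is_wordplay": False},
]

# Count headlines with and without wordplay
balance = Counter(row["is_wordplay"] for row in rows)
print(balance)
```

On the full test split this count should come out close to balanced, matching the 1,290/1,310 figures reported below.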

### Sample Usage: Automatic Interpretation Evaluation

To facilitate the evaluation of other large language models (LLMs) and to ensure the reproducibility of the reported predictions and metrics, an evaluation function has been implemented. First, install the evaluation package:

```bash
pip install git+https://github.com/Humor-Research/kowit24_evaluation.git
```

Then use the following Python snippet to evaluate interpretations:

```python
import numpy as np
from datasets import load_dataset
from kowit24_evaluation import check_interpretation

data_files = {"test": "dataset.csv", "dev": "dev_dataset.csv"}
dataset = load_dataset("Humor-Research/KoWit-24", data_files=data_files)

# Obtain interpretation texts beforehand with your own inference code:
# one interpretation string per test example.
llm_interpretations = ...

results = list()
for idx, example in enumerate(dataset["test"]):
    results.append(
        check_interpretation(example["annotations"], llm_interpretations[idx])
    )

print("Quality", np.mean(results))
```

### Sample Usage: Running an Experiment with Another LLM

To facilitate the evaluation of alternative large language models (LLMs) on the detection and interpretation tasks, the prompts used in the experiments are available on the LangChain Hub, and the corresponding data are hosted on the Hugging Face Hub.

Example:
```python
# Imports
from huggingface_hub import hf_hub_download
from datasets import load_dataset
from langchain.llms import LlamaCpp
from langchain import hub

# Load model
model_path = hf_hub_download(repo_id="Vikhrmodels/Vikhr-Llama-3.2-1B-instruct-GGUF",
                             filename="Vikhr-Llama-3.2-1B-Q4_K_M.gguf",
                             local_dir=".")

llm = LlamaCpp(
    model_path=model_path,
    n_ctx=2048,
    temperature=0.1,
    top_p=0.9,
    max_tokens=256
)

# Load prompt
prompt = hub.pull("humor-research/wordplay_detection")

# Load dataset
data_files = {"test": "dataset.csv", "dev": "dev_dataset.csv"}
dataset = load_dataset("Humor-Research/KoWit-24", data_files=data_files)

# Invoke LLM
predicted = list()
for example in dataset["test"]:
    task = prompt.format(
        headline=example["headline"],
        lead=example["lead"]
    )
    predicted.append(
        llm.invoke(task)
    )
    break  # process a single example for demonstration; remove to run the full split
```

## Experiments and Results

For the experiments, we allocated 200 records (100 from each class) to the development set, making sure that all wordplay types were represented. Thus, the test set contains 2,500 headlines (1,290 with and 1,310 without wordplay). We experimented with two tasks – wordplay detection and wordplay interpretation. We employed five LLMs: GPT-4o, Mistral NeMo 12B, YandexGPT4, GigaChat Lite, and GigaChat Max.
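The detection columns in the results table report precision and recall of the wordplay class over binary labels. As a reference, a self-contained sketch of how such P/R figures are computed (toy labels for illustration, not the actual model predictions):

```python
# Toy illustration of the detection metric: precision/recall of the
# positive ("wordplay") class over binary labels.
y_true = [1, 1, 0, 0, 1, 0]  # gold: 1 = wordplay, 0 = no wordplay
y_pred = [1, 0, 0, 1, 1, 0]  # hypothetical model output

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(f"P={precision:.2f} R={recall:.2f}")
```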

### Table of Results

| Model         | Detection with simple prompt, P/R | Detection with extended prompt, P/R | Interpretation manual, R | Interpretation auto, R |
|---------------|-----------------------------------|-------------------------------------|--------------------------|------------------------|
| GigaChat Lite | 0.50 / 0.50                       | 0.53 / 0.72                         | 0.11                     | 0.19                   |
| GigaChat Max  | 0.62 / 0.48                       | 0.68 / 0.59                         | 0.28                     | 0.28                   |
| YandexGPT4    | 0.83 / 0.10                       | 0.76 / 0.24                         | 0.20                     | 0.22                   |
| Mistral Nemo  | 0.00 / 0.00                       | 0.00 / 0.00                         | 0.24                     | 0.30                   |
| GPT-4o        | 0.62 / 0.81                       | 0.65 / 0.88                         | 0.48                     | 0.43                   |

## Citation

If you use the KoWit-24 dataset in your research, please cite the following paper:

```bibtex
@misc{baranov2025kowit24richlyannotateddataset,
      title={KoWit-24: A Richly Annotated Dataset of Wordplay in News Headlines},
      author={Alexander Baranov and Anna Palatkina and Yulia Makovka and Pavel Braslavski},
      year={2025},
      eprint={2503.01510},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2503.01510},
}
```