---
pretty_name: Vinclat
language:
- ca
license: cc-by-4.0
task_categories:
- text-generation
- question-answering
size_categories:
- n<1K
annotations_creators:
- expert-generated
language_creators:
- expert-generated
tags:
- benchmark
- evaluation
- catalan
- catalonia
- game
- riddles
dataset_info:
  dataset_size: 1000
  splits:
  - name: train
    num_examples: 1000
  features:
  - name: id
    dtype: int32
  - name: hint_1
    dtype: string
  - name: hint_2
    dtype: string
  - name: hint_3
    dtype: string
  - name: hint_4
    dtype: string
  - name: keyword_1
    dtype: string
  - name: keyword_2
    dtype: string
  - name: keyword_3
    dtype: string
  - name: keyword_4
    dtype: string
  - name: solution_words_len
    sequence: int64
  - name: solution_pattern
    dtype: string
---
# Dataset Card for Vinclat
**Vinclat** is a Catalan-language dataset for multi-step problem solving. It employs a game-based structure to evaluate both the reasoning capabilities and cultural knowledge of large language models.
- **Language(s):** Catalan
- **Paper:** [ACL Anthology](https://aclanthology.org/2026.mme-main.4/)
- **Leaderboard:** [HF Space](https://huggingface.co/spaces/projecte-aina/vinclat-leaderboard)
- **Code:** [GitHub](https://github.com/mapama247/vinclat_eval)
- **License:** [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/deed.en)
- **Funded by:** [Projecte Aina](https://projecteaina.cat/en/)
- **Curated by:** Barcelona Supercomputing Center
- **Shared by:** Barcelona Supercomputing Center
## Dataset Description
**Vinclat** is a Catalan-language evaluation dataset designed to assess the reasoning capabilities and cultural knowledge of large language models (LLMs). It contains 1,000 instances meticulously crafted and reviewed by human annotators. Each instance follows a game-based structure in which models must solve a complex riddle through a multi-step reasoning process.
Given four independent clues, models must infer intermediate concepts that, despite seeming unrelated, can be creatively connected to reach a final solution. Solving the task requires a blend of reasoning strategies and linguistic understanding, combining logical inference with knowledge of Catalan language and culture.
To preserve the long-term validity of the benchmark, the dataset release does not provide the final solutions. Instead, an external leaderboard is maintained by the authors, where state-of-the-art models are evaluated and added over time. This approach helps prevent benchmark contamination and ensures that the dataset remains a reliable evaluation resource. Researchers who would like their models to be included in the leaderboard are encouraged to contact the authors.
## Dataset Structure
### Data Fields
- `id` (int): Unique ID assigned to each instance.
- `hint_1` (str): First hint.
- `hint_2` (str): Second hint.
- `hint_3` (str): Third hint.
- `hint_4` (str): Fourth hint.
- `keyword_1` (str): Concept associated with the first hint.
- `keyword_2` (str): Concept associated with the second hint.
- `keyword_3` (str): Concept associated with the third hint.
- `keyword_4` (str): Concept associated with the fourth hint.
- `solution_words_len` (list[int]): A list indicating the number of letters in each word of the solution, in order.
- `solution_pattern` (str): A visual pattern of the solution using underscores to represent each letter, with spaces separating words.
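The two solution fields are consistent by construction: `solution_pattern` can be derived from `solution_words_len`. A minimal sketch of that relationship (the helper name is ours, not part of the dataset):

```python
def pattern_from_lengths(lengths):
    """Rebuild the underscore pattern implied by `solution_words_len`:
    one underscore per letter, with single spaces separating words."""
    return " ".join("_" * n for n in lengths)
```

For example, `[10]` yields `"__________"` and `[3, 5]` yields `"___ _____"`.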
### Data instances
```
{
  'id': 1,
  'hint_1': 'El sufix de la majoria d’adverbis',
  'hint_2': 'La platja més surfista de Cadis',
  'hint_3': 'De Shock o de parella',
  'hint_4': 'La filla de la paciència',
  'keyword_1': 'Ment',
  'keyword_2': 'Tarifa',
  'keyword_3': 'Teràpia',
  'keyword_4': 'Ciència',
  'solution_words_len': [10],
  'solution_pattern': "__________",
}
```
### Prompt template
The `hints` and `keywords` from the example above are injected in prompts such as:
```
Let’s play a game in Catalan! Your goal is to find a "solution" word or words of a specific length, which will be given to you. You will also receive four numbered hints. Try to solve each one to get a "hint word", which doesn’t need to match the solution’s length. Then, try to think about the common theme or connection between the "hint words" that you found. The final solution should fit the required letter count and is related to the "hint words" you identified.
It is important to note that you don’t necessarily need all the "hint words" to get to the final solution. If you’re struggling with a hint or suspect your guess might be wrong, it’s often better to focus on the "hint words" you are sure about. A wrong one can send you down the wrong path!
Here are your hints:
1. {hint_1}
2. {hint_2}
3. {hint_3}
4. {hint_4}
The "solution" should fit here: {solution_pattern}. What’s your guess? Return a JSON object with the following fields: ’hint_word_1’, ’hint_word_2’, ’hint_word_3’, ’hint_word_4’, ’solution’. If you could not find the word associated to some hint, simply keep that field as ’unknown’.
```
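As a sketch, the injection can be done with standard string formatting; the template below is an abbreviated version of the one above, and the names `PROMPT_TEMPLATE` and `build_prompt` are ours, not part of the dataset or its evaluation code:

```python
# Abbreviated prompt template with the dataset's placeholder fields.
PROMPT_TEMPLATE = (
    "Here are your hints:\n"
    "1. {hint_1}\n2. {hint_2}\n3. {hint_3}\n4. {hint_4}\n"
    'The "solution" should fit here: {solution_pattern}. What is your guess?'
)

def build_prompt(instance):
    # `instance` is one dataset row (a dict-like object with the fields above).
    return PROMPT_TEMPLATE.format(
        hint_1=instance["hint_1"],
        hint_2=instance["hint_2"],
        hint_3=instance["hint_3"],
        hint_4=instance["hint_4"],
        solution_pattern=instance["solution_pattern"],
    )
```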
Refer to Appendix A of the [paper](https://aclanthology.org/2026.mme-main.4/) for a comprehensive overview of the prompt formats employed.
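Since the prompt requests a JSON object, evaluation code must at minimum parse the reply and check that the proposed solution fits the required pattern. A hedged sketch (the helper name is ours; it checks only the shape of the answer, not its correctness, since the gold solutions are withheld):

```python
import json

def fits_pattern(response_text, solution_pattern):
    """Return True if the model's JSON reply contains a `solution`
    whose word lengths match the underscore pattern."""
    try:
        answer = json.loads(response_text)
    except json.JSONDecodeError:
        return False
    guess = str(answer.get("solution", ""))
    expected = [len(word) for word in solution_pattern.split(" ")]
    found = [len(word) for word in guess.split(" ")]
    return found == expected
```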
## Dataset Creation
### Curation Rationale
The dataset was curated to evaluate multi-step reasoning and deeply rooted cultural knowledge, capabilities that are rarely, if ever, assessed by existing benchmarks.
Each instance is carefully designed as a game-based riddle, requiring models to connect seemingly unrelated clues through reasoning and cultural understanding. The source data producers held collaborative sessions to develop the puzzles, always aiming to:
- ensure topical diversity, avoiding repetition across instances,
- vary difficulty levels by including at least one relatively easy clue to support solvability,
- maintain linguistic and semantic coherence while encouraging creative associations.
Then, our human annotators manually reviewed the original dataset to ensure that all instances can be interpreted in isolation, enhancing the fairness and reproducibility of the task. During this curation process, we also revised instances that were time-dependent or game-specific, as these relied on information unavailable to both LLMs and human annotators.
Please refer to the [paper](https://aclanthology.org/2026.mme-main.4/) for further details.
### Who are the source data producers?
The dataset was created by the original authors of the Vinclat game, three native Catalan speakers who released a new instance per day on the [official website](https://vinclat.cat/) (no longer available).
### Who are the annotators?
Members of BSC's Annotation Team. All annotators were native Catalan speakers, born and raised in Catalonia, and held university degrees, ensuring both linguistic proficiency and cultural familiarity with the game and its underlying associations. Within the team, some annotators curated and crafted the dataset instances, while others established a human baseline and carried out a qualitative analysis of model responses.
### Personal and Sensitive Information
No personal or sensitive information is included in the dataset.
### Bias, Risks, and Limitations
The dataset is not expected to introduce social, cultural, or demographic biases and poses no known risks.
## Additional information
### Funding
This work has been promoted and financed by:
- Generalitat de Catalunya through the [Aina project](https://projecteaina.cat/).
- Ministerio para la Transformación Digital y de la Función Pública, funded by the EU through NextGenerationEU, within the framework of the project [Desarrollo de Modelos ALIA](https://alia.gob.es/eng).
### License
This work is licensed under the [Creative Commons Attribution 4.0 International License (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).
### Citation
```bibtex
@inproceedings{pamies2026vinclat,
  title={Vinclat: Evaluating Reasoning, Cognition and Culture in One Game},
  author={Pamies, Marc and Aula-Blasco, Javier and Gonzalez-Agirre, Aitor and Villegas, Marta},
  booktitle={Proceedings of the First Workshop on Multilingual Multicultural Evaluation},
  pages={49--66},
  year={2026}
}
```
### Dataset Card Author
- Marc Pàmies Massip
### Dataset Card Contact
<mpamies@bsc.es>