pretty_name: Vinclat
language:
- ca
license: cc-by-4.0
task_categories:
- text-generation
- question-answering
size_categories:
- 1K<n<10K
annotations_creators:
- expert-generated
language_creators:
- expert-generated
tags:
- benchmark
- evaluation
- catalan
- catalonia
- game
- riddles
dataset_info:
dataset_size: 1000
splits:
- name: train
num_examples: 1000
features:
- name: id
dtype: int32
- name: hint_1
dtype: string
- name: hint_2
dtype: string
- name: hint_3
dtype: string
- name: hint_4
dtype: string
- name: keyword_1
dtype: string
- name: keyword_2
dtype: string
- name: keyword_3
dtype: string
- name: keyword_4
dtype: string
- name: solution_words_len
sequence: int64
- name: solution_pattern
dtype: string
Dataset Card for Vinclat
Vinclat is a Catalan-language dataset for multi-step problem solving. It employs a game-based structure to evaluate both the reasoning capabilities and cultural knowledge of large language models.
- Language(s): Catalan
- Paper: ACL Anthology
- Leaderboard: HF Space
- Code: GitHub
- License: CC-BY-4.0
- Funded by: Projecte Aina
- Curated by: Barcelona Supercomputing Center
- Shared by: Barcelona Supercomputing Center
Dataset Description
Vinclat is a Catalan-language evaluation dataset designed to assess the reasoning capabilities and cultural knowledge of large language models (LLMs). It contains 1,000 instances meticulously crafted and reviewed by human annotators. Each instance follows a game-based structure in which models must solve a complex riddle through a multi-step reasoning process.
Given four independent clues, models must infer intermediate concepts that, despite being seemingly unrelated, can be creatively connected to reach a final solution. Solving the task successfully requires a blend of reasoning strategies and linguistic understanding, combining logical inference with knowledge of Catalan language and culture.
To preserve the long-term validity of the benchmark, the dataset release does not provide the final solutions. Instead, an external leaderboard is maintained by the authors, where state-of-the-art models are evaluated and added over time. This approach helps prevent benchmark contamination and ensures that the dataset remains a reliable evaluation resource. Researchers who would like their models to be included in the leaderboard are encouraged to contact the authors.
Dataset Structure
Data Fields
- `id` (int): Unique ID assigned to each instance.
- `hint_1` (str): First hint.
- `hint_2` (str): Second hint.
- `hint_3` (str): Third hint.
- `hint_4` (str): Fourth hint.
- `keyword_1` (str): Concept associated with the first hint.
- `keyword_2` (str): Concept associated with the second hint.
- `keyword_3` (str): Concept associated with the third hint.
- `keyword_4` (str): Concept associated with the fourth hint.
- `solution_words_len` (list[int]): Number of letters in each word of the solution, in order.
- `solution_pattern` (str): Visual pattern of the solution, using one underscore per letter and spaces to separate words.
Data instances
{
  'id': 1,
  'hint_1': 'El sufix de la majoria d’adverbis',
  'hint_2': 'La platja més surfista de Cadis',
  'hint_3': 'De Shock o de parella',
  'hint_4': 'La filla de la paciència',
  'keyword_1': 'Ment',
  'keyword_2': 'Tarifa',
  'keyword_3': 'Teràpia',
  'keyword_4': 'Ciència',
  'solution_words_len': [10],
  'solution_pattern': "__________",
}
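The relationship between `solution_words_len` and `solution_pattern` can be sketched with a small helper (`build_pattern` is a hypothetical name, not part of the dataset tooling):

```python
def build_pattern(word_lens):
    """Build the visual solution pattern: one underscore per letter,
    with a space between consecutive words."""
    return " ".join("_" * n for n in word_lens)

# For the instance above, a single ten-letter word:
print(build_pattern([10]))      # __________
# A hypothetical two-word solution of 3 and 4 letters:
print(build_pattern([3, 4]))    # ___ ____
```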
Prompt template
The hints and keywords from the example above are injected in prompts such as:
Let’s play a game in Catalan! Your goal is to find a "solution" word or words of a specific length, which will be given to you. You will also receive four numbered hints. Try to solve each one to get a "hint word", which doesn’t need to match the solution’s length. Then, try to think about the common theme or connection between the "hint words" that you found. The final solution should fit the required letter count and is related to the "hint words" you identified.
It is important to note that you don’t necessarily need all the "hint words" to get to the final solution. If you’re struggling with a hint or suspect your guess might be wrong, it’s often better to focus on the "hint words" you are sure about. A wrong one can send you down the wrong path!
Here are your hints:
1. {hint_1}
2. {hint_2}
3. {hint_3}
4. {hint_4}
The "solution" should fit here: {solution_pattern}. What’s your guess? Return a JSON object with the following fields: ’hint_word_1’, ’hint_word_2’, ’hint_word_3’, ’hint_word_4’, ’solution’. If you could not find the word associated to some hint, simply keep that field as ’unknown’.
Refer to Appendix A of the paper for a comprehensive overview of the prompt formats employed.
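A minimal sketch of how an instance could be injected into a prompt and how a model's JSON reply could be format-checked. The condensed `TEMPLATE`, `make_prompt`, and `check_answer_format` below are illustrative names and a shortened wording, not the exact prompt or official evaluation code; note that solution correctness itself cannot be verified locally, since gold answers are withheld:

```python
import json

# Condensed, hypothetical version of the prompt; the full wording appears
# above and in Appendix A of the paper.
TEMPLATE = (
    "Here are your hints:\n"
    "1. {hint_1}\n2. {hint_2}\n3. {hint_3}\n4. {hint_4}\n"
    'The "solution" should fit here: {solution_pattern}. '
    "Return a JSON object with the fields 'hint_word_1', "
    "'hint_word_2', 'hint_word_3', 'hint_word_4', 'solution'."
)

def make_prompt(instance):
    """Inject one dataset instance's fields into the prompt template."""
    keys = ("hint_1", "hint_2", "hint_3", "hint_4", "solution_pattern")
    return TEMPLATE.format(**{k: instance[k] for k in keys})

def check_answer_format(raw_reply, solution_words_len):
    """Check that a model reply is valid JSON with the required fields and
    that the proposed solution's word lengths match `solution_words_len`."""
    try:
        reply = json.loads(raw_reply)
    except json.JSONDecodeError:
        return False
    fields = {"hint_word_1", "hint_word_2", "hint_word_3",
              "hint_word_4", "solution"}
    if not fields <= reply.keys():
        return False
    return [len(w) for w in reply["solution"].split()] == list(solution_words_len)
```

Such a check only validates the answer's shape; ranking actual correctness is handled by the authors' external leaderboard.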
Dataset Creation
Curation Rationale
The dataset was curated to evaluate multi-step reasoning and deeply rooted cultural knowledge: capabilities that are rarely, if ever, assessed by existing benchmarks.
Each instance is carefully designed as a game-based riddle, requiring models to connect seemingly unrelated clues through reasoning and cultural understanding. The source data producers held collaborative sessions to develop the puzzles, always aiming to:
- ensure topical diversity, avoiding repetition across instances,
- vary difficulty levels by including at least one relatively easy clue to support solvability,
- maintain linguistic and semantic coherence while encouraging creative associations.
Then, our human annotators manually reviewed the original dataset to ensure that all instances can be interpreted in isolation, enhancing the fairness and reproducibility of the task. During this curation process, we also revised instances that were time-dependent or game-specific, as these relied on information unavailable to both LLMs and human annotators.
Please refer to the paper for further details.
Who are the source data producers?
The dataset was created by the original authors of the Vinclat game, three native Catalan speakers who released a new instance per day on the official website (no longer available).
Who are the annotators?
Members of BSC's Annotation Team. All annotators were native Catalan speakers, born and raised in Catalonia, and held university degrees, ensuring both linguistic proficiency and cultural familiarity with the game and its underlying associations. Within the team, some annotators curated and crafted the dataset instances, while others established a human baseline and carried out a qualitative analysis of model responses.
Personal and Sensitive Information
No personal or sensitive information is included in the dataset.
Bias, Risks, and Limitations
The dataset is not expected to introduce social, cultural, or demographic biases and poses no known risks.
Additional information
Funding
This work has been promoted and financed by:
- Generalitat de Catalunya through the Aina project.
- Ministerio para la Transformación Digital y de la Función Pública - funded by the EU through NextGenerationEU - within the framework of the project Desarrollo de Modelos ALIA.
License
This work is licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
Citation
@inproceedings{pamies2026vinclat,
  title={Vinclat: Evaluating Reasoning, Cognition and Culture in One Game},
  author={Pamies, Marc and Aula-Blasco, Javier and Gonzalez-Agirre, Aitor and Villegas, Marta},
  booktitle={Proceedings of the First Workshop on Multilingual Multicultural Evaluation},
  pages={49--66},
  year={2026}
}
Dataset Card Author
- Marc Pàmies Massip