deeponh committed · Commit 4e63b17 · verified · 1 Parent(s): a39d145

readme uploaded

Files changed (1): README.md (+127 −23)
---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: type
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: train
    num_bytes: 953086
    num_examples: 1737
  download_size: 161817
  dataset_size: 953086
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
---
# RiddleBench

Large Language Models (LLMs) have demonstrated remarkable capabilities across a wide range of natural language understanding and generation tasks.
However, their proficiency in complex logical and deductive reasoning remains a critical area of investigation.

We introduce **RiddleBench**, a meticulously curated benchmark of 1,737 challenging puzzles designed to test diverse reasoning skills beyond simple pattern matching.
Unlike conventional QA datasets that often rely on factual recall or surface-level cues, RiddleBench focuses on non-trivial reasoning, presenting problems such as coding-decoding, seating arrangements, sequence prediction, and blood-relation analysis.

By evaluating models on RiddleBench, researchers can gain deeper insight into their ability to handle abstract reasoning, commonsense inference, and structured problem solving: skills essential for robust and trustworthy AI systems.

---

## Dataset Structure

Each entry in the dataset consists of the following fields:

- `id`: Unique identifier for the riddle (1–1737)
- `type`: Category of the riddle
- `question`: The riddle text
- `answer`: The ground-truth answer

The dataset can be loaded directly via Hugging Face Datasets.

---

## Type Distribution

The dataset covers four major categories of riddles:

| Type                    | Count |
|-------------------------|-------|
| Coding and Decoding Sum | 1037  |
| Seating Task            | 432   |
| Sequence Task           | 146   |
| Blood Relations         | 122   |
| **Total**               | 1737  |

---
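The counts above can be recomputed from the loaded split. A minimal sketch using `collections.Counter`; the small inline sample here is an illustrative stand-in for the full `train` split (only the field names come from the dataset schema):

```python
from collections import Counter

# Illustrative stand-in for dataset["train"]; the real split has 1,737 rows.
sample_rows = [
    {"id": 1, "type": "coding and decoding sum"},
    {"id": 2, "type": "coding and decoding sum"},
    {"id": 3, "type": "seating task"},
    {"id": 4, "type": "sequence task"},
    {"id": 5, "type": "blood relations"},
]

# Tally riddles per category, mirroring the distribution table above.
distribution = Counter(row["type"] for row in sample_rows)
for riddle_type, count in distribution.most_common():
    print(f"{riddle_type}: {count}")
```

Running the same `Counter` over the real split should reproduce the table.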

## Example Entry

```json
{
  "id": 2,
  "type": "coding and decoding sum",
  "question": "If 'CARING' is coded as 'EDVGKC', and 'SHARES' is coded as 'UKEPBO', then how will 'CASKET' be coded as in the same code? a) EDXIBP c) EDWPAI b) EDWIAP d) EDWIBP",
  "answer": "d"
}
```
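Because each `answer` is a single option letter, scoring a model's free-form response means extracting the letter it chose. A hedged sketch; the helper name `extract_choice` and its regexes are illustrative, not part of the dataset:

```python
import re

def extract_choice(model_output: str):
    """Pull the model's chosen option letter (a-d) from free-form output.

    Illustrative helper; RiddleBench itself ships only questions and
    ground-truth letters.
    """
    text = model_output.lower()
    # Prefer an explicit option marker such as "d)".
    match = re.search(r"\b([a-d])\)", text)
    if match is None:
        # Fall back to a bare standalone letter, e.g. "the answer is d".
        match = re.search(r"\b([a-d])\b", text)
    return match.group(1) if match else None

print(extract_choice("The correct option is d) EDWIBP"))  # d
```

Returning `None` when no letter is found lets an evaluation loop count unparseable responses separately from wrong ones.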

## Loading the Dataset

You can load the dataset directly from Hugging Face:

```python
from datasets import load_dataset

dataset = load_dataset("ai4bharat/RiddleBench")

print(dataset)
# Example access
print(dataset["train"][0])
```

## Loading Riddles by Type

```python
from datasets import load_dataset

# Load the train split
dataset = load_dataset("ai4bharat/RiddleBench")["train"]

# Filter riddles by type
def get_riddles_by_type(riddle_type: str, n: int = 5):
    """
    Return up to n riddles of the given type.

    Args:
        riddle_type (str): The type of riddles to filter (e.g., 'sequence task').
        n (int): Maximum number of riddles to return (default = 5).
    """
    filtered = [ex for ex in dataset if ex["type"].lower() == riddle_type.lower()]
    return filtered[:n]

# Example usage
coding_riddles = get_riddles_by_type("coding and decoding sum", n=3)
seating_riddles = get_riddles_by_type("seating task", n=3)

print("Coding & Decoding Sum Examples:")
for r in coding_riddles:
    print(f"Q: {r['question']} | A: {r['answer']}")

print("\nSeating Task Examples:")
for r in seating_riddles:
    print(f"Q: {r['question']} | A: {r['answer']}")
```

## Intended Use

RiddleBench is designed solely as a benchmark for evaluating model reasoning abilities.
It should not be used for training or fine-tuning models intended for deployment in real-world applications.

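In benchmark use, a typical evaluation loop compares a model's extracted answer letter against the `answer` field. A minimal exact-match accuracy sketch; the `predictions` dict is a hypothetical stand-in for model outputs, and only the row schema comes from the dataset:

```python
# Hypothetical model predictions keyed by riddle id; in practice these would
# come from running an LLM on each `question` and extracting its chosen letter.
predictions = {1: "d", 2: "d", 3: "b"}

# Small inline sample mirroring the dataset schema.
rows = [
    {"id": 1, "answer": "d"},
    {"id": 2, "answer": "d"},
    {"id": 3, "answer": "a"},
]

# Exact-match accuracy over the evaluated riddles.
correct = sum(1 for row in rows if predictions.get(row["id"]) == row["answer"])
accuracy = correct / len(rows)
print(f"Accuracy: {accuracy:.2%}")  # 2 of 3 correct
```

Reporting accuracy per `type` (using the category filter above) gives a finer-grained view than a single overall score.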
## Citation

If you use RiddleBench in your research, please cite the following:

```bibtex
@misc{riddlebench2025,
  title        = {RiddleBench: A New General Inference and Reasoning Assessment in LLMs},
  author       = {Deepon Halder and Alan Saji and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/datasets/ai4bharat/RiddleBench}}
}
```

## License

This dataset is released under the **MIT License**.
You are free to use, modify, and distribute this dataset with proper attribution.

## Contact

For questions, collaborations, or contributions, please reach out to the maintainers:

- Email 1: deeponh.2004@gmail.com
- Email 2: prajdabre@gmail.com