---
language:
- tr
license: cc-by-4.0
task_categories:
- text-retrieval
- question-answering
pretty_name: Turkish Legal Özelge Corpus
size_categories:
- 10K<n<100K
tags:
- legal
- turkish
- özelge
- tax-law
- corpus
- retrieval
- IR
- information-retrieval
- beir
dataset_info:
- config_name: corpus
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 120864700
    num_examples: 23587
  download_size: 49244406
  dataset_size: 120864700
- config_name: default
  features:
  - name: query-id
    dtype: string
  - name: corpus-id
    dtype: string
  - name: score
    dtype: int64
  splits:
  - name: train
    num_bytes: 9147664
    num_examples: 120364
  download_size: 4844361
  dataset_size: 9147664
- config_name: queries
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 47933248
    num_examples: 120364
  download_size: 21179422
  dataset_size: 47933248
configs:
- config_name: corpus
  data_files:
  - split: train
    path: corpus/train-*
- config_name: default
  data_files:
  - split: train
    path: data/train-*
- config_name: queries
  data_files:
  - split: train
    path: queries/train-*
---

# Turkish Legal Özelge Corpus Dataset

## 📊 Dataset Summary

**Turkish Legal Özelge Corpus** is a comprehensive **Information Retrieval** dataset consisting of özelge (tax ruling) decisions published by the Turkish Revenue Administration (Gelir İdaresi Başkanlığı - GİB).

### Key Features

- **Format**: BEIR (Benchmarking IR) format with corpus-queries-qrels structure
- **Language**: Turkish 🇹🇷
- **Domain**: Tax Law, Administrative Law, Turkish Law
- **Source**: GİB Özelge Decisions
- **Use Cases**: Information retrieval, question-answering systems, RAG applications

---

## Dataset Structure

The dataset follows the **BEIR format** and consists of three main components:

### 1. **Corpus** (Document Collection)
Full texts of the özelge rulings, one record per document.

| Field | Description |
|------|----------|
| `_id` | Document identifier |
| `text` | Full özelge ruling text |

### 2. **Queries** (Query Collection)
Legal text snippets extracted from each document from 7 different legal perspectives.


**7 Query Types:**
1. **Subject**: Main topic of the özelge
2. **Article Text**: Text of relevant law articles
3. **Communique Text**: Content of relevant communiques and circulars
4. **Regulation Text**: Regulation and legislation texts
5. **Justification Text**: Legal justifications
6. **Decision Text**: Administrative opinions and final decisions
7. **Condition Text**: Application conditions and requirements

## Tokenizer Benchmark & Data Filtering Summary

This is a data analysis and preprocessing step performed before model training; it does not indicate any training error or failure.

We benchmarked seven tokenizers (MPNet, Qwen2, Gemma, XLM-R, BERT, Pretrained, T5) on all datasets to measure token lengths and identify extreme long-sequence outliers.
Among these, MPNetTokenizerFast generated the highest total token count, making it the most sensitive tokenizer for detecting unusually long samples.

Using MPNet as the reference tokenizer, we removed samples whose token count exceeded the dataset-specific average by roughly 7,000 tokens.
This filtering was applied independently to each dataset to ensure balanced sequence distributions and cleaner input data.
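
A minimal sketch of this length-based filter is shown below. The `microsoft/mpnet-base` checkpoint (which loads `MPNetTokenizerFast`), the `text` column, the repo id taken from this card's asset URLs, and the exact 7,000-token margin are assumptions for illustration; the values used in practice may differ.

```python
from datasets import load_dataset
from transformers import AutoTokenizer
import numpy as np

# MPNet is used as the reference tokenizer (checkpoint name is an assumption).
tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
# Repo id inferred from this card's asset URLs.
ds = load_dataset("newmindai/regulation-retrieval", "corpus", split="train")

# Token length of every sample, without special tokens.
lengths = [
    len(tokenizer(row["text"], add_special_tokens=False)["input_ids"])
    for row in ds
]
mean_len = float(np.mean(lengths))
threshold = mean_len + 7000  # dataset-specific average + ~7,000 tokens

# Keep only samples at or below the threshold.
keep = [i for i, n in enumerate(lengths) if n <= threshold]
filtered = ds.select(keep)
print(f"mean={mean_len:.1f}  removed={len(ds) - len(filtered)}  remaining={len(filtered)}")
```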

The tokenizer benchmark is summarized in the table below; the number of removed and remaining samples per dataset is given in the second table, after the plots.

| Tokenizer               | vocab_size | total_tokens   | avg_tokens | min_tokens | max_tokens | median_tokens |
|-------------------------|-----------:|----------------:|-----------:|-----------:|-----------:|---------------:|
| MPNetTokenizerFast      |     30,527 |   276,476,811   |      2,281 |        263 |     12,383 |          1,998 |
| Qwen2TokenizerFast      |    151,669 |   219,326,828   |      1,810 |        190 |      9,201 |          1,594 |
| GemmaTokenizerFast      |    262,144 |   183,710,411   |      1,516 |        158 |      7,578 |          1,341 |
| XLMRobertaTokenizerFast |    250,002 |   151,008,441   |      1,246 |        132 |      6,397 |          1,099 |
| BertTokenizerFast       |     32,000 |   127,503,718   |      1,052 |        103 |      5,386 |            931 |
| PretrainedTokenizerFast |     32,000 |   122,387,578   |      1,010 |        102 |      5,227 |            893 |
| T5TokenizerFast         |     32,128 |   121,315,289   |      1,001 |        100 |      5,238 |            885 |


<table width="100%">
  <tr>
    <td align="center" width="50%">
      <img 
        src="https://huggingface.co/datasets/newmindai/regulation-retrieval/resolve/main/2025-11-25-15.12.27.png"
        width="100%">
      <br>
      <em>Total tokens per tokenizer</em>
    </td>
    <td align="center" width="50%">
      <img 
        src="https://huggingface.co/datasets/newmindai/regulation-retrieval/resolve/main/2025-11-25-15.14.32.png"
        width="100%">
      <br>
      <em>Correlation of vocabulary size and total tokens</em>
    </td>
  </tr>
</table>




| Dataset                                | max_tokens  | avg_tokens | deleted_samples | total_samples |
|----------------------------------------|------------:|-----------:|----------------:|--------------:|
| `newmindai/regulation-retrieval`       | 276,476,811 |   2,281.20 |           1,300 |       121,300 |
| `newmindai/caselaw-retrieval`          |       1,386 |      2,281 |               0 |         1,386 |
| `newmindai/court-of-cassation-caselaw` |      30,527 |     186.48 |              11 |           272 |



### 3. **Default** (Relevance Matrix)
Relationship table showing which query belongs to which document.

| Field | Description |
|------|----------|
| `query-id` | Query identifier |
| `corpus-id` | Related document identifier |
| `score` | Relevance score (all 1) |
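
For reference, the three configs (`corpus`, `queries`, `default`) can be loaded as in the sketch below; the config names come from the dataset metadata, while the repo id is inferred from this card's asset URLs and may need to be adjusted.

```python
from datasets import load_dataset

corpus = load_dataset("newmindai/regulation-retrieval", "corpus", split="train")    # _id, text
queries = load_dataset("newmindai/regulation-retrieval", "queries", split="train")  # _id, text
qrels = load_dataset("newmindai/regulation-retrieval", "default", split="train")    # query-id, corpus-id, score

# Build a BEIR-style relevance mapping: query-id -> {corpus-id: score}.
relevance = {}
for row in qrels:
    relevance.setdefault(row["query-id"], {})[row["corpus-id"]] = row["score"]

print(len(corpus), len(queries), len(relevance))
```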



## Dataset Statistics

```
Total Statistics:
├─  Corpus Records: 23,701 documents
├─  Query Records: 121,198 queries
└─  Relevance Records: 121,198 relations

Per Document:
├─ 1 corpus entry (full ruling text)
├─ 2–7 queries (legal perspectives)
└─ Average ~5.1 queries per document
```

### Field Coverage (Queries per Document)

On average, each özelge is represented by around **5.1 distinct queries**, corresponding to different legal fields. The distribution of populated query types per document is as follows:

- **2 query types**: ~0.1% of documents (e.g., Subject + Article Text)
- **3 query types**: ~12.3% of documents (e.g., Subject + Article Text + Decision Text)
- **4 query types**: ~26.2% of documents (e.g., Subject + Article Text + Communique Text + Decision Text)
- **5 query types**: ~23.9% of documents (e.g., Subject + Article Text + Communique Text + Regulation Text + Decision Text)
- **6 query types**: ~12.6% of documents (e.g., Subject + Article Text + Communique Text + Regulation Text + Justification Text + Decision Text)
- **7 query types**: ~24.9% of documents (All fields: Subject + Article Text + Communique Text + Regulation Text + Justification Text + Decision Text + Condition Text)

**Query Types Available:**
1. **Subject**: Main topic/issue of the ruling
2. **Article Text**: Relevant law article content
3. **Communique Text**: Official communique/circular content
4. **Regulation Text**: Regulation and legislation texts
5. **Justification Text**: Legal reasoning and justifications
6. **Decision Text**: Administrative opinion and final decision
7. **Condition Text**: Application conditions and requirements

In other words, roughly **61% of the corpus has 5 or more query types populated**, making them rich multi-perspective legal cases rather than shallow single-label examples.

![Queries per document distribution](https://huggingface.co/datasets/newmindai/regulation-retrieval/resolve/main/ozelge_queries_per_doc.png)

### Text Length Distribution

For **corpus texts** (original full özelge rulings with non-empty `ozelge_content`, currently 100 documents):

- **Mean length**: ~1,736 words  
- **Median (p50)**: ~1,658 words  
- **90th percentile (p90)**: ~2,393 words  

These are long, dense legal rulings, comparable to typical tax/administrative decisions with full reasoning and references.

For **query texts** (legal snippets extracted from seven perspectives across all 23k+ records):

- **Mean length**: ~41.6 words  
- **Median (p50)**: ~24 words  
- **90th percentile (p90)**: ~97 words  

This makes queries similar to short legal questions, issue statements, justifications or excerpts from statutes/communiques, while the associated corpus entries provide the full ruling context for the subset of records where the full original özelge text is available.
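
These word-count statistics can be reproduced with a small sketch like the one below (repo id and column names are assumptions; words are counted by whitespace splitting).

```python
import numpy as np
from datasets import load_dataset

# Repo id inferred from this card's asset URLs.
queries = load_dataset("newmindai/regulation-retrieval", "queries", split="train")
word_counts = np.array([len(row["text"].split()) for row in queries])
print(f"mean={word_counts.mean():.1f}  "
      f"p50={np.percentile(word_counts, 50):.0f}  "
      f"p90={np.percentile(word_counts, 90):.0f}")
```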

![Corpus vs. query text length histograms](https://huggingface.co/datasets/newmindai/regulation-retrieval/resolve/main/ozelge_text_length_hist.png)

## Use Cases

### 1. **Information Retrieval Systems**
- Training for semantic search models
- Dense retrieval systems (DPR, ANCE, ColBERT)
- Sparse retrieval systems (BM25, TF-IDF) benchmarks (see the sketch below)
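
As a starting point, a minimal BM25 baseline could look like the sketch below; the `rank_bm25` package, whitespace tokenization, and the repo id are assumptions chosen for brevity, not the authors' setup.

```python
from datasets import load_dataset
from rank_bm25 import BM25Okapi

corpus = load_dataset("newmindai/regulation-retrieval", "corpus", split="train")
queries = load_dataset("newmindai/regulation-retrieval", "queries", split="train")

# Naive whitespace tokenization; a proper Turkish tokenizer would likely work better.
tokenized_docs = [row["text"].lower().split() for row in corpus]
bm25 = BM25Okapi(tokenized_docs)

query = queries[0]["text"]
scores = bm25.get_scores(query.lower().split())
top10 = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:10]
print([(corpus[i]["_id"], round(float(scores[i]), 2)) for i in top10])
```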

### 2. **RAG (Retrieval-Augmented Generation) Applications**
- Legal chatbots
- Tax consultation assistants
- Automatic özelge analysis systems

### 3. **Question-Answering Systems**
- Legal QA models
- Extractive and abstractive QA
- Multi-hop reasoning

### 4. **Model Evaluation**
- Benchmarking Turkish IR models
- Retrieval performance analysis
- Domain adaptation studies

---

## Data Collection and Processing

### Data Source
The data is sourced from **official özelge decisions of the Turkish Revenue Administration**. Each özelge:
- Responds to specific questions asked by taxpayers
- References relevant legislation, communiques, and regulations
- Contains the Administration's opinion for concrete applications


## Ethics and Legal Notices

### License
This dataset is published under the **CC BY 4.0** license. Please cite the source when using the dataset.

---

## Citation

```bibtex
@article{mecellem2026,
  title={Mecellem Models: Turkish Models Trained from Scratch and Continually Pre-trained for the Legal Domain},
  author={Uğur, Özgür and Göksu, Mahmut and Çimen, Mahmut and Yılmaz, Musa and Şavirdi, Esra and Demir, Alp Talha and Güllüce, Rumeysa and Çetin, İclal and Sağbaş, Ömer Can},
  journal={arXiv preprint arXiv:2601.16018},
  year={2026},
  month={January},
  url={https://arxiv.org/abs/2601.16018},
  doi={10.48550/arXiv.2601.16018},
  eprint={2601.16018},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

## License

This dataset is released under the CC BY 4.0 license.

## Contact

For questions: [info@newmind.ai](mailto:info@newmind.ai)