--- |
|
|
language: |
|
|
- tr |
|
|
license: cc-by-4.0 |
|
|
task_categories: |
|
|
- text-retrieval |
|
|
- question-answering |
|
|
pretty_name: Turkish Legal Özelge Corpus |
|
|
size_categories: |
|
|
- 100K<n<1M
|
|
tags: |
|
|
- legal |
|
|
- turkish |
|
|
- özelge |
|
|
- tax-law |
|
|
- corpus |
|
|
- retrieval |
|
|
- IR |
|
|
- information-retrieval |
|
|
- beir |
|
|
dataset_info: |
|
|
- config_name: corpus |
|
|
features: |
|
|
- name: _id |
|
|
dtype: string |
|
|
- name: text |
|
|
dtype: string |
|
|
splits: |
|
|
- name: train |
|
|
num_bytes: 120864700 |
|
|
num_examples: 23587 |
|
|
download_size: 49244406 |
|
|
dataset_size: 120864700 |
|
|
- config_name: default |
|
|
features: |
|
|
- name: query-id |
|
|
dtype: string |
|
|
- name: corpus-id |
|
|
dtype: string |
|
|
- name: score |
|
|
dtype: int64 |
|
|
splits: |
|
|
- name: train |
|
|
num_bytes: 9147664 |
|
|
num_examples: 120364 |
|
|
download_size: 4844361 |
|
|
dataset_size: 9147664 |
|
|
- config_name: queries |
|
|
features: |
|
|
- name: _id |
|
|
dtype: string |
|
|
- name: text |
|
|
dtype: string |
|
|
splits: |
|
|
- name: train |
|
|
num_bytes: 47933248 |
|
|
num_examples: 120364 |
|
|
download_size: 21179422 |
|
|
dataset_size: 47933248 |
|
|
configs: |
|
|
- config_name: corpus |
|
|
data_files: |
|
|
- split: train |
|
|
path: corpus/train-* |
|
|
- config_name: default |
|
|
data_files: |
|
|
- split: train |
|
|
path: data/train-* |
|
|
- config_name: queries |
|
|
data_files: |
|
|
- split: train |
|
|
path: queries/train-* |
|
|
--- |
|
|
|
|
|
# Turkish Legal Özelge Corpus Dataset |
|
|
|
|
|
## 📊 Dataset Summary |
|
|
|
|
|
**Turkish Legal Özelge Corpus** is a comprehensive **Information Retrieval** dataset consisting of özelge (tax ruling) decisions published by the Turkish Revenue Administration (Gelir İdaresi Başkanlığı - GİB). |
|
|
|
|
|
### Key Features |
|
|
|
|
|
- **Format**: BEIR (Benchmarking Information Retrieval) format with corpus-queries-qrels structure
|
|
- **Language**: Turkish 🇹🇷 |
|
|
- **Domain**: Tax Law, Administrative Law, Turkish Law |
|
|
- **Source**: GİB Özelge Decisions |
|
|
- **Use Cases**: Information retrieval, question-answering systems, RAG applications |
|
|
|
|
|
--- |
|
|
|
|
|
## Dataset Structure |
|
|
|
|
|
The dataset follows the **BEIR format** and consists of three main components: |
|
|
|
|
|
|
|
|
|
|
|
### 1. **Corpus** (Document Collection)
The full texts of the özelge rulings.

| Field | Description |
|-------|-------------|
| `_id` | Document identifier |
| `text` | Full ruling text |

### 2. **Queries** (Query Collection)
Legal text snippets extracted from each document along up to seven distinct perspectives.
|
|
|
|
|
|
|
|
**7 Query Types:** |
|
|
1. **Subject**: Main topic of the özelge |
|
|
2. **Article Text**: Text of relevant law articles |
|
|
3. **Communique Text**: Content of relevant communiques and circulars |
|
|
4. **Regulation Text**: Regulation and legislation texts |
|
|
5. **Justification Text**: Legal justifications |
|
|
6. **Decision Text**: Administrative opinions and final decisions |
|
|
7. **Condition Text**: Application conditions and requirements |
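All three components can be loaded with the `datasets` library. The snippet below is a minimal loading sketch; it assumes the Hub id `newmindai/regulation-retrieval`, the repository referenced by the figure URLs in this card, so adjust the id if the dataset lives elsewhere. The `default` config holds the qrels described next.

```python
from datasets import load_dataset

# Hub id assumed from the figure URLs in this card; adjust if needed.
REPO = "newmindai/regulation-retrieval"

corpus = load_dataset(REPO, "corpus", split="train")    # fields: _id, text
queries = load_dataset(REPO, "queries", split="train")  # fields: _id, text
qrels = load_dataset(REPO, "default", split="train")    # fields: query-id, corpus-id, score

# Resolve a qrels row to the actual query and document texts.
doc_text = {row["_id"]: row["text"] for row in corpus}
query_text = {row["_id"]: row["text"] for row in queries}

first = qrels[0]
print("Query:   ", query_text[first["query-id"]][:150])
print("Document:", doc_text[first["corpus-id"]][:150])
```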
|
|
|
|
|
### 3. **Qrels** (Relevance Judgments)
Stored in the `default` config, this is a mapping table that records which query is relevant to which document.

| Field | Description |
|-------|-------------|
| `query-id` | Query identifier |
| `corpus-id` | Identifier of the relevant document |
| `score` | Relevance score (always 1) |

## Tokenizer Benchmark & Data Filtering Summary
|
|
|
|
|
This is a data analysis and preprocessing step performed before model training; it does not indicate any training error or failure.
|
|
|
|
|
We benchmarked seven tokenizers (MPNet, Qwen2, Gemma, XLM-R, BERT, Pretrained, T5) on all datasets to measure token lengths and identify extreme long-sequence outliers. |
|
|
Among these, MPNetTokenizerFast generated the highest total token count, making it the most sensitive tokenizer for detecting unusually long samples. |
|
|
|
|
|
Using MPNet as the reference tokenizer, we removed samples whose token count exceeded the dataset-specific average by roughly 7,000 tokens.
|
|
This filtering was applied independently to each dataset to ensure balanced sequence distributions and cleaner input data. |
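For illustration, the benchmark-and-filter step can be sketched as follows. The MPNet checkpoint name is an assumption (the card lists tokenizer classes, not exact checkpoints), and the dataset id follows the loading example above.

```python
from statistics import mean

from datasets import load_dataset
from transformers import AutoTokenizer

# Assumed checkpoint; the card names MPNetTokenizerFast but not the exact repo.
tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")

corpus = load_dataset("newmindai/regulation-retrieval", "corpus", split="train")

# Token count per document under the reference (MPNet) tokenizer.
lengths = [len(tokenizer.encode(text, truncation=False)) for text in corpus["text"]]
avg_len = mean(lengths)

# Outlier rule described above: drop samples exceeding the
# dataset-specific average by roughly 7,000 tokens.
keep_idx = [i for i, n in enumerate(lengths) if n <= avg_len + 7000]
filtered = corpus.select(keep_idx)
print(f"avg={avg_len:.1f}  removed={len(corpus) - len(filtered)}  kept={len(filtered)}")
```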
|
|
|
|
|
The number of removed and remaining samples per dataset is summarized in the second table below.
|
|
|
|
|
| Tokenizer | vocab_size | total_tokens | avg_tokens | min_tokens | max_tokens | median_tokens | |
|
|
|-------------------------|-----------:|----------------:|-----------:|-----------:|-----------:|---------------:| |
|
|
| MPNetTokenizerFast | 30,527 | 276,476,811 | 2,281 | 263 | 12,383 | 1,998 | |
|
|
| Qwen2TokenizerFast | 151,669 | 219,326,828 | 1,810 | 190 | 9,201 | 1,594 | |
|
|
| GemmaTokenizerFast | 262,144 | 183,710,411 | 1,516 | 158 | 7,578 | 1,341 | |
|
|
| XLMRobertaTokenizerFast | 250,002 | 151,008,441 | 1,246 | 132 | 6,397 | 1,099 | |
|
|
| BertTokenizerFast | 32,000 | 127,503,718 | 1,052 | 103 | 5,386 | 931 | |
|
|
| PretrainedTokenizerFast | 32,000 | 122,387,578 | 1,010 | 102 | 5,227 | 893 | |
|
|
| T5TokenizerFast | 32,128 | 121,315,289 | 1,001 | 100 | 5,238 | 885 | |
|
|
|
|
|
|
|
|
<table width="100%"> |
|
|
<tr> |
|
|
<td align="center" width="50%"> |
|
|
<img |
|
|
src="https://huggingface.co/datasets/newmindai/regulation-retrieval/resolve/main/2025-11-25-15.12.27.png" |
|
|
width="100%"> |
|
|
<br> |
|
|
<em>Total tokens per tokenizer</em>
|
|
</td> |
|
|
<td align="center" width="50%"> |
|
|
<img |
|
|
src="https://huggingface.co/datasets/newmindai/regulation-retrieval/resolve/main/2025-11-25-15.14.32.png" |
|
|
width="100%"> |
|
|
<br> |
|
|
<em>Correlation between vocabulary size and total tokens</em>
|
|
</td> |
|
|
</tr> |
|
|
</table> |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| Dataset | max_tokens | avg_tokens | deleted_samples | total_samples | |
|
|
|----------------------------------------|------------:|-------------:|----------------:|--------------:| |
|
|
| `newmindai/regulation-retrieval` | 276,476,811 | 2,281.2 | 1,300 | 121,300 |
| `newmindai/caselaw-retrieval` | 1,386 | 2,281 | 0 | 1,386 |
| `newmindai/court-of-cassation-caselaw` | 30,527 | 186.5 | 11 | 272 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## Dataset Statistics |
|
|
|
|
|
``` |
|
|
Total Statistics: |
|
|
├─ Corpus Records: 23,701 documents |
|
|
├─ Query Records: 121,198 queries |
|
|
└─ Relevance Records: 121,198 relations |
|
|
|
|
|
Per Document: |
|
|
├─ 1 corpus entry (full ruling text) |
|
|
├─ 2–7 queries (legal perspectives) |
|
|
└─ Average ~5.1 queries per document |
|
|
``` |
|
|
|
|
|
### Field Coverage (Queries per Document) |
|
|
|
|
|
On average, each özelge is represented by around **5.1 distinct queries**, corresponding to different legal fields. The distribution of populated query types per document is as follows: |
|
|
|
|
|
- **2 query types**: ~0.1% of documents (e.g., Subject + Article Text) |
|
|
- **3 query types**: ~12.3% of documents (e.g., Subject + Article Text + Decision Text) |
|
|
- **4 query types**: ~26.2% of documents (e.g., Subject + Article Text + Communique Text + Decision Text) |
|
|
- **5 query types**: ~23.9% of documents (e.g., Subject + Article Text + Communique Text + Regulation Text + Decision Text) |
|
|
- **6 query types**: ~12.6% of documents (e.g., Subject + Article Text + Communique Text + Regulation Text + Justification Text + Decision Text) |
|
|
- **7 query types**: ~24.9% of documents (All fields: Subject + Article Text + Communique Text + Regulation Text + Justification Text + Decision Text + Condition Text) |
|
|
|
|
|
|
|
|
|
|
In other words, roughly **61% of the corpus has 5 or more query types populated**, making them rich multi-perspective legal cases rather than shallow single-label examples. |
|
|
|
|
|
 |
|
|
|
|
|
### Text Length Distribution |
|
|
|
|
|
For **corpus texts** (original full özelge rulings with non-empty `ozelge_content`, currently 100 documents): |
|
|
|
|
|
- **Mean length**: ~1,736 words |
|
|
- **Median (p50)**: ~1,658 words |
|
|
- **90th percentile (p90)**: ~2,393 words |
|
|
|
|
|
These are long, dense legal rulings, comparable to typical tax/administrative decisions with full reasoning and references. |
|
|
|
|
|
For **query texts** (legal snippets extracted from seven perspectives across all 23k+ records): |
|
|
|
|
|
- **Mean length**: ~41.6 words |
|
|
- **Median (p50)**: ~24 words |
|
|
- **90th percentile (p90)**: ~97 words |
|
|
|
|
|
Queries thus resemble short legal questions, issue statements, justifications, or excerpts from statutes and communiques, while the associated corpus entries provide the full ruling context for the subset of records where the original özelge text is available.
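Statistics of this kind are straightforward to recompute. The sketch below uses whitespace word counts over the `queries` config, under the same loading assumptions as above.

```python
import numpy as np
from datasets import load_dataset

queries = load_dataset("newmindai/regulation-retrieval", "queries", split="train")

# Word counts via whitespace splitting, matching the word-based figures above.
lengths = np.array([len(text.split()) for text in queries["text"]])
print(f"mean={lengths.mean():.1f}  "
      f"p50={np.percentile(lengths, 50):.0f}  "
      f"p90={np.percentile(lengths, 90):.0f}")
```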
|
|
|
|
|
 |
|
|
|
|
|
## Use Cases |
|
|
|
|
|
### 1. **Information Retrieval Systems** |
|
|
- Training for semantic search models |
|
|
- Dense retrieval systems (DPR, ANCE, ColBERT) |
|
|
- Benchmarking sparse retrieval systems (BM25, TF-IDF); see the sketch below
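As referenced above, a tiny sparse-retrieval baseline can be built with the `rank_bm25` package (an illustrative choice; the query string is hypothetical):

```python
# pip install rank_bm25 datasets
from rank_bm25 import BM25Okapi
from datasets import load_dataset

corpus = load_dataset("newmindai/regulation-retrieval", "corpus", split="train")
docs = corpus["text"]

# Naive lowercase whitespace tokenization; a Turkish-aware analyzer
# (stemming, diacritics handling) would likely score better.
bm25 = BM25Okapi([d.lower().split() for d in docs])

query = "kurumlar vergisi istisnası"  # hypothetical query
scores = bm25.get_scores(query.lower().split())
top5 = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:5]
for i in top5:
    print(corpus[i]["_id"], f"{scores[i]:.2f}", docs[i][:80])
```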
|
|
|
|
|
### 2. **RAG (Retrieval-Augmented Generation) Applications** |
|
|
- Legal chatbots |
|
|
- Tax consultation assistants |
|
|
- Automatic özelge analysis systems (the retrieval step is sketched below)
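The retrieval step of such a RAG pipeline can be prototyped with dense embeddings, as sketched below. The embedding model is an illustrative choice, not one used to build this dataset:

```python
# pip install sentence-transformers datasets
from sentence_transformers import SentenceTransformer, util
from datasets import load_dataset

# Illustrative multilingual model; any Turkish-capable encoder can be substituted.
model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")

corpus = load_dataset("newmindai/regulation-retrieval", "corpus", split="train")
doc_emb = model.encode(corpus["text"], convert_to_tensor=True)

def retrieve(question: str, k: int = 3):
    """Return (_id, score) for the k rulings most similar to the question."""
    q_emb = model.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q_emb, doc_emb, top_k=k)[0]
    return [(corpus[hit["corpus_id"]]["_id"], hit["score"]) for hit in hits]

print(retrieve("KDV iadesi şartları nelerdir?"))  # hypothetical question
```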
|
|
|
|
|
### 3. **Question-Answering Systems** |
|
|
- Legal QA models |
|
|
- Extractive and abstractive QA |
|
|
- Multi-hop reasoning |
|
|
|
|
|
### 4. **Model Evaluation**
|
|
- Benchmarking Turkish IR models (a Recall@k sketch follows this list)
|
|
- Retrieval performance analysis |
|
|
- Domain adaptation studies |
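As mentioned in the first bullet, a minimal Recall@k evaluation against the qrels can look like this; the `run` dict stands in for any retriever's ranked output:

```python
from collections import defaultdict
from datasets import load_dataset

def recall_at_k(gold: dict, run: dict, k: int = 10) -> float:
    """gold: query-id -> set of relevant corpus-ids;
    run: query-id -> corpus-ids ranked best-first."""
    per_query = []
    for qid, relevant in gold.items():
        hits = set(run.get(qid, [])[:k]) & relevant
        per_query.append(len(hits) / len(relevant))
    return sum(per_query) / len(per_query)

# Build gold judgments from the `default` config (score is always 1).
gold = defaultdict(set)
for row in load_dataset("newmindai/regulation-retrieval", "default", split="train"):
    gold[row["query-id"]].add(row["corpus-id"])

# `run` would come from your retriever, e.g. the BM25 or dense sketches above.
```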
|
|
|
|
|
--- |
|
|
|
|
|
## Data Collection and Processing |
|
|
|
|
|
### Data Source |
|
|
The data is sourced from **official özelge decisions of the Turkish Revenue Administration**. Each özelge: |
|
|
- Responds to specific questions asked by taxpayers |
|
|
- References relevant legislation, communiques, and regulations |
|
|
- Contains the Administration's opinion on the specific case at hand
|
|
|
|
|
|
|
|
## Ethics and Legal Notices |
|
|
|
|
|
### License |
|
|
This dataset is published under the **CC-BY 4.0** license. Please cite the source when using it.
|
|
|
|
|
--- |
|
|
|
|
|
## Citation |
|
|
|
|
|
```bibtex |
|
|
@article{mecellem2026, |
|
|
title={Mecellem Models: Turkish Models Trained from Scratch and Continually Pre-trained for the Legal Domain}, |
|
|
author={Uğur, Özgür and Göksu, Mahmut and Çimen, Mahmut and Yılmaz, Musa and Şavirdi, Esra and Demir, Alp Talha and Güllüce, Rumeysa and Çetin, İclal and Sağbaş, Ömer Can}, |
|
|
journal={arXiv preprint arXiv:2601.16018}, |
|
|
year={2026}, |
|
|
month={January}, |
|
|
url={https://arxiv.org/abs/2601.16018}, |
|
|
doi={10.48550/arXiv.2601.16018}, |
|
|
eprint={2601.16018}, |
|
|
archivePrefix={arXiv}, |
|
|
primaryClass={cs.CL} |
|
|
} |
|
|
``` |
|
|
|
|
|
## License |
|
|
|
|
|
This dataset is released under the CC-BY 4.0 License.
|
|
|
|
|
## Contact |
|
|
|
|
|
For questions: [info@newmind.ai](mailto:info@newmind.ai) |
|
|
|