---
language:
- tr
license: cc-by-4.0
task_categories:
- text-retrieval
- question-answering
pretty_name: Turkish Legal Özelge Corpus
size_categories:
- 10K<n<100K
---
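As a quick orientation, the BEIR-style layout described in this card — a corpus split, a queries split, and a relevance matrix with `query-id`, `corpus-id`, and `score` fields — can be sketched with toy in-memory records. The IDs and texts below are invented placeholders; only the field names follow the card:

```python
# Toy records mimicking the card's three splits.
# IDs and texts are invented placeholders; field names match the card.
corpus = {"doc-001": "Özelge: KDV istisnasının uygulanmasına ilişkin tam metin ..."}
queries = {"q-0001": "KDV istisnası hangi şartlarda uygulanır?"}
qrels = [{"query-id": "q-0001", "corpus-id": "doc-001", "score": 1}]

def relevant_texts(query_id: str) -> list[str]:
    """Collect the full ruling texts judged relevant to a query (all scores are 1)."""
    return [corpus[r["corpus-id"]] for r in qrels
            if r["query-id"] == query_id and r["score"] >= 1]

docs_for_query = relevant_texts("q-0001")
```

In the real dataset each document is paired with 2–7 such queries, so the same `corpus-id` appears under several `query-id`s in the relevance matrix.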
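Figures such as `avg_tokens` and `total_samples` in the table below can be reproduced with a sketch along these lines. Whitespace splitting stands in for the actual tokenizer, which the card does not specify, and the sample texts are invented:

```python
# Invented sample texts; whitespace splitting stands in for a real tokenizer.
texts = [
    "örnek özelge metni burada yer alır",
    "ikinci örnek metin",
]
lengths = [len(t.split()) for t in texts]
stats = {
    "max_tokens": max(lengths),
    "avg_tokens": sum(lengths) / len(lengths),
    "total_samples": len(texts),
}
```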
Token statistics for the related `newmindai` retrieval datasets:

| Dataset | max_tokens | avg_tokens | deleted_samples | total_samples |
|----------------------------------------|------------:|-----------:|----------------:|--------------:|
| `newmindai/regulation-retrieval` | 276,476,811 | 2,281.20 | 1,300 | 121,300 |
| `newmindai/caselaw-retrieval` | 1,386 | 2,281 | 0 | 1,386 |
| `newmindai/court-of-cassation-caselaw` | 30,527 | 186.48 | 11 | 272 |

### 3. **Default** (Relevance Matrix)

A relationship table mapping each query to its relevant document.

| Field | Description |
|-------------|------------------------------|
| `query-id` | Query identifier |
| `corpus-id` | Related document identifier |
| `score` | Relevance score (always 1) |

## Dataset Statistics

```
Total Statistics:
├─ Corpus Records: 23,701 documents
├─ Query Records: 121,198 queries
└─ Relevance Records: 121,198 relations

Per Document:
├─ 1 corpus entry (full ruling text)
├─ 2–7 queries (legal perspectives)
└─ Average ~5.1 queries per document
```

### Field Coverage (Queries per Document)

On average, each özelge is represented by around **5.1 distinct queries**, corresponding to different legal fields. The distribution of populated query types per document is as follows:

- **2 query types**: ~0.1% of documents (e.g., Subject + Article Text)
- **3 query types**: ~12.3% of documents (e.g., Subject + Article Text + Decision Text)
- **4 query types**: ~26.2% of documents (e.g., Subject + Article Text + Communique Text + Decision Text)
- **5 query types**: ~23.9% of documents (e.g., Subject + Article Text + Communique Text + Regulation Text + Decision Text)
- **6 query types**: ~12.6% of documents (e.g., Subject + Article Text + Communique Text + Regulation Text + Justification Text + Decision Text)
- **7 query types**: ~24.9% of documents (all fields: Subject + Article Text + Communique Text + Regulation Text + Justification Text + Decision Text + Condition Text)

**Query Types Available:**

1. **Subject**: Main topic/issue of the ruling
2. **Article Text**: Relevant law article content
3. **Communique Text**: Official communique/circular content
4. **Regulation Text**: Regulation and legislation texts
5. **Justification Text**: Legal reasoning and justifications
6. **Decision Text**: Administrative opinion and final decision
7. **Condition Text**: Application conditions and requirements

In other words, roughly **61% of the corpus has five or more query types populated**, making these rich multi-perspective legal cases rather than shallow single-label examples.

![Queries per document distribution](https://huggingface.co/datasets/newmindai/regulation-retrieval/resolve/main/ozelge_queries_per_doc.png)

### Text Length Distribution

For **corpus texts** (original full özelge rulings with a non-empty `ozelge_content`, currently 100 documents):

- **Mean length**: ~1,736 words
- **Median (p50)**: ~1,658 words
- **90th percentile (p90)**: ~2,393 words

These are long, dense legal rulings, comparable to typical tax/administrative decisions with full reasoning and references.

For **query texts** (legal snippets extracted from seven perspectives across all 23k+ records):

- **Mean length**: ~41.6 words
- **Median (p50)**: ~24 words
- **90th percentile (p90)**: ~97 words

This makes queries similar to short legal questions, issue statements, justifications, or excerpts from statutes/communiques, while the associated corpus entries provide the full ruling context for the subset of records where the original özelge text is available.

![Corpus vs. query text length histograms](https://huggingface.co/datasets/newmindai/regulation-retrieval/resolve/main/ozelge_text_length_hist.png)

## Use Cases

### 1. **Information Retrieval Systems**

- Training data for semantic search models
- Dense retrieval systems (DPR, ANCE, ColBERT)
- Benchmarking sparse retrieval systems (BM25, TF-IDF)

### 2. **RAG (Retrieval-Augmented Generation) Applications**

- Legal chatbots
- Tax consultation assistants
- Automatic özelge analysis systems

### 3. **Question-Answering Systems**

- Legal QA models
- Extractive and abstractive QA
- Multi-hop reasoning

### 4. **Model Evaluation**

- Benchmarking Turkish IR models
- Retrieval performance analysis
- Domain adaptation studies

---

## Data Collection and Processing

### Data Source

The data is sourced from **official özelge decisions of the Turkish Revenue Administration**. Each özelge:

- Responds to specific questions asked by taxpayers
- References relevant legislation, communiques, and regulations
- Contains the Administration's opinion on concrete applications

## Ethics and Legal Notices

### License

This dataset is published under the **CC-BY 4.0** license. Please cite the source when using it.

---

## Citation

```bibtex
@article{mecellem2026,
  title={Mecellem Models: Turkish Models Trained from Scratch and Continually Pre-trained for the Legal Domain},
  author={Uğur, Özgür and Göksu, Mahmut and Çimen, Mahmut and Yılmaz, Musa and Şavirdi, Esra and Demir, Alp Talha and Güllüce, Rumeysa and Çetin, İclal and Sağbaş, Ömer Can},
  journal={arXiv preprint arXiv:2601.16018},
  year={2026},
  month={January},
  url={https://arxiv.org/abs/2601.16018},
  doi={10.48550/arXiv.2601.16018},
  eprint={2601.16018},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

## Contact

For questions: [info@newmind.ai](mailto:info@newmind.ai)
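For the sparse-retrieval use case mentioned above, a minimal self-contained BM25 baseline can be sketched as follows. The mini-corpus below consists of invented stand-ins for özelge snippets; a real benchmark would index the corpus split and use a Turkish-aware tokenizer instead of whitespace splitting:

```python
import math
from collections import Counter

def bm25_rank(query: str, docs: list[str], k1: float = 1.5, b: float = 0.75) -> list[int]:
    """Return document indices ranked by Okapi BM25 score for a whitespace-tokenized query."""
    toks = [d.lower().split() for d in docs]
    n = len(docs)
    avgdl = sum(len(t) for t in toks) / n
    df = Counter()                          # document frequency per term
    for t in toks:
        df.update(set(t))

    def score(terms: list[str]) -> float:
        tf = Counter(terms)
        s = 0.0
        for w in query.lower().split():
            if tf[w] == 0:
                continue
            idf = math.log(1 + (n - df[w] + 0.5) / (df[w] + 0.5))
            s += idf * tf[w] * (k1 + 1) / (tf[w] + k1 * (1 - b + b * len(terms) / avgdl))
        return s

    scores = [score(t) for t in toks]
    return sorted(range(n), key=lambda i: -scores[i])

# Invented mini-corpus of ruling-like snippets:
docs = [
    "katma değer vergisi istisnası uygulaması hakkında özelge",
    "kurumlar vergisi matrahının tespiti hakkında özelge",
    "damga vergisi oranlarının uygulanması hakkında özelge",
]
ranking = bm25_rank("katma değer vergisi istisnası", docs)
```

The same ranking loop evaluated against the relevance matrix (all scores 1) yields standard metrics such as Recall@k or nDCG for a sparse baseline.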