---
language:
- tr
license: mit
task_categories:
- text-generation
- question-answering
tags:
- turkish
- identity
- instruction-tuning
- llm-alignment
- nlp
- chatbot
pretty_name: TurkishIdentityMini
size_categories:
- n<1K
dataset_info:
  features:
  - name: instruction
    dtype: string
  - name: output
    dtype: string
  splits:
  - name: train
    num_bytes: 41540
    num_examples: 481
  download_size: 13573
  dataset_size: 41540
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# TurkishIdentityMini

## Dataset Description

**TurkishIdentityMini** is a small, template-based Turkish instruction dataset designed to help LLMs respond correctly to identity-related questions. It contains instruction–output pairs where a user asks a chatbot about its name, origin, or creator, and the model responds using customizable `{{model_name}}` and `{{team_name}}` placeholders.

This dataset is useful for fine-tuning or instruction-tuning Turkish language models to maintain a consistent, branded identity and to correctly deny affiliation with other AI providers such as OpenAI, Google, Meta, Anthropic, or Microsoft.

### Dataset Summary

| Property | Value |
|---|---|
| Language | Turkish (`tr`) |
| Examples | 481 |
| Split | `train` only |
| Format | Parquet |
| License | MIT |

|
---

## Dataset Structure

### Data Fields

| Field | Type | Description |
|---|---|---|
| `instruction` | `string` | A Turkish user query about the model's identity (e.g., *"Sen kimsin?"* ("Who are you?"), *"Seni kim yaptı?"* ("Who made you?")) |
| `output` | `string` | A template response using `{{model_name}}` and `{{team_name}}` placeholders |

|
### Example Rows

| instruction | output |
|---|---|
| `Seni kim yaptı?` | `Ben {{team_name}} ekibi tarafından yapıldım.` |
| `Sen kimsin?` | `Ben {{model_name}}, {{team_name}} tarafından geliştirilmiş bir yapay zeka asistanıyım.` |
| `ChatGPT misin?` | `Hayır, ben {{model_name}}. {{team_name}} tarafından eğitilmiş bir asistanım.` |
| `OpenAI'dan mısın?` | `Hayır, ben {{team_name}} tarafından bağımsız olarak geliştirildim.` |
| `Adın ne?` | `Ben {{model_name}}, yapay zeka asistanınızım.` |

|
---

## Intended Uses

### Primary Use

Fine-tuning or instruction-tuning Turkish LLMs to:
- Respond to identity queries (name, creator, affiliation) in Turkish
- Correctly deny association with third-party AI providers
- Use a consistent, customizable branded persona via the `{{model_name}}` and `{{team_name}}` placeholders

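For instance, after the placeholders are filled, each pair can be rendered into a chat-style training string. The tags below (`<|user|>`, `<|assistant|>`) are illustrative assumptions, not part of the dataset; in practice, use the chat template of your base model.

```python
# Sketch of an SFT formatting step; the chat tags are hypothetical
# stand-ins for whatever template the base model expects.
def to_chat_text(example: dict) -> str:
    return (
        "<|user|>\n" + example["instruction"] + "\n"
        "<|assistant|>\n" + example["output"]
    )

row = {
    "instruction": "Adın ne?",
    "output": "Ben Magibu, yapay zeka asistanınızım.",  # placeholders already filled
}
print(to_chat_text(row))
```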
|
### Out-of-Scope Use

- This dataset covers **only** identity-related queries; it is not suitable as a standalone fine-tuning corpus for general conversational ability
- The placeholder format requires preprocessing before use in most training pipelines

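Because unfilled placeholders would leak literally into model outputs, a minimal sanity check before training might look like this (a sketch, not part of the dataset's tooling):

```python
import re

# Match any {{model_name}} / {{team_name}} left unfilled in a text.
PLACEHOLDER_RE = re.compile(r"\{\{\s*(model_name|team_name)\s*\}\}")

def has_unfilled_placeholders(text: str) -> bool:
    return bool(PLACEHOLDER_RE.search(text))

print(has_unfilled_placeholders("Ben {{model_name}}."))  # True
print(has_unfilled_placeholders("Ben Magibu."))          # False
```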
|
---

## Dataset Creation

### Covered Question Categories

The dataset covers the following identity query themes:

- **Creator / origin** — *"Seni kim yaptı?"* ("Who made you?"), *"Nereden geliyorsun?"* ("Where do you come from?")
- **Name / model identity** — *"Adın ne?"* ("What's your name?"), *"Model adını söyler misin?"* ("Can you tell me your model name?")
- **Brand denial** — *"ChatGPT misin?"* ("Are you ChatGPT?"), *"Sen Claude musun?"* ("Are you Claude?"), *"Google tarafından mı oluşturuldun?"* ("Were you created by Google?")
- **Greetings with identity** — *"Merhaba"* ("Hello"), *"Selam"* ("Hi") → model introduces itself
- **Paraphrastic variants** — diverse rephrasings of the same intents to improve robustness

|
### Template Placeholders

All outputs use two placeholders that must be filled before training:

| Placeholder | Description |
|---|---|
| `{{model_name}}` | The name of the deployed model |
| `{{team_name}}` | The name of the developing team or organization |

|
**Example preprocessing (Python):**

```python
def fill_template(example, model_name, team_name):
    example["output"] = (
        example["output"]
        .replace("{{model_name}}", model_name)
        .replace("{{team_name}}", team_name)
    )
    return example

dataset = dataset.map(lambda x: fill_template(x, "Magibu-11b-v0.8", "magibu"))
```
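The same substitution can be applied to a single output string without the `datasets` dependency (using the same example names, `Magibu-11b-v0.8` and `magibu`):

```python
# Plain-string version of the template fill for one output value.
output = "Ben {{model_name}}, {{team_name}} tarafından geliştirilmiş bir yapay zeka asistanıyım."
filled = (
    output
    .replace("{{model_name}}", "Magibu-11b-v0.8")
    .replace("{{team_name}}", "magibu")
)
print(filled)
# Ben Magibu-11b-v0.8, magibu tarafından geliştirilmiş bir yapay zeka asistanıyım.
```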
| |
|
---

## Usage

### With 🤗 Datasets

```python
from datasets import load_dataset

dataset = load_dataset("aliarda/TurkishIdentityMini")
print(dataset["train"][0])
# {'instruction': 'Seni kim yaptı?', 'output': 'Ben {{team_name}} ekibi tarafından yapıldım.'}
```
| |
|
### With pandas

```python
from datasets import load_dataset

# Load the train split and convert it to a pandas DataFrame
df = load_dataset("aliarda/TurkishIdentityMini", split="train").to_pandas()
print(df.head())
```
| |
|
---

## Acknowledgements

80 rows in this dataset were sourced from [`sts07142/llm-name-identity`](https://huggingface.co/datasets/sts07142/llm-name-identity) and translated into Turkish using AI-assisted translation.

---

## Citation

If you use this dataset in your research, please cite it as:

```bibtex
@dataset{aliarda_turkishidentitymini,
  author    = {Ali Arda Fincan},
  title     = {TurkishIdentityMini},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/aliarda/TurkishIdentityMini}
}
```