---
license: mit
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: conversation
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  - name: token_count
    dtype: int64
  - name: turns_count
    dtype: int64
  - name: markdown_conversation
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  - name: source
    dtype: string
  - name: topic
    dtype: string
  - name: safety_flags
    sequence: string
  splits:
  - name: train
    num_bytes: 374660
    num_examples: 201
  download_size: 172981
  dataset_size: 374660
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# BoDmagh dataset 🧠
The BoDmagh dataset is a Supervised Fine-Tuning (SFT) dataset for the Darija language. I created it manually, ensuring high quality. The dataset is in JSON format and includes conversations between a user and an assistant.
I update the dataset daily, so make sure to check the repository regularly.
## Time spent
Creating this dataset has been a labor of love. I’ve dedicated approximately **14 hours and 40 minutes** so far, manually curating each entry to ensure high quality. I continue to add new entries daily, so this number will keep growing.
## Content
### 1. Structure
The dataset is a JSON file containing a list of objects. Each object represents a single conversation entry and includes several metadata fields along with the conversation itself.
Here are the main fields within each conversation object:
* **`id`**: A unique integer identifier for the conversation.
* **`conversation`**: A list of message objects representing the turns in the conversation. Each message object has:
* `role`: Either `user` or `assistant`.
* `content`: The text content of the message turn (plain text).
* **`token_count`**: The total number of tokens in the `conversation` field, calculated using the [Darija tokenizer](https://github.com/ImadSaddik/DarijaTokenizers).
* **`turns_count`**: The total number of turns (messages) in the `conversation`.
* **`markdown_conversation`**: A list of message objects similar to `conversation`, but the `content` may include Markdown formatting (e.g., bolding, lists) where applicable.
* **`source`**: Indicates how the conversation was generated (e.g., "Manually generated").
* **`topic`**: A label indicating the primary topic of the conversation (e.g., "Science", "Travel", "Chit-chat/Games/Humor").
* **`safety_flags`**: A list of strings indicating any safety concerns identified in the conversation (e.g., `["harmful_content_request", "refusal"]`). An empty list `[]` means the conversation is considered safe.
*Note: The `system` message is not included, but you can add it if needed for your specific fine-tuning format.*
Here is an example of a single conversation object:
```json
{
"id": 74,
"conversation": [
{
"role": "user",
"content": "شحال بعيدة علينا الشمس؟"
},
{
"role": "assistant",
"content": "الشمس بعيدة علينا ب 150 مليون كيلومتر تقريبا."
},
{
"role": "user",
"content": "ويلي ويلي شحال بعيداااا."
},
{
"role": "assistant",
"content": "واييه راه هاد الكون كبير بزاف صعيب الواحد يتخيل الكبورية ديالو."
},
{
"role": "user",
"content": "شناهية سنة ضوئية؟"
},
{
"role": "assistant",
"content": "هي المسافة اللي الضو كيضربها فعام."
},
{
"role": "user",
"content": "شحال فيها من كيلومتر؟"
},
{
"role": "assistant",
"content": "سنة ضوئية فيها 9.461 تريليون كيلومتر."
},
{
"role": "user",
"content": "والله اتا بزاف هاد الكون."
},
{
"role": "assistant",
"content": "الى عندك شي سؤال خور غولهالي."
}
],
"token_count": 97,
"turns_count": 10,
"markdown_conversation": [
{
"role": "user",
"content": "شحال بعيدة علينا الشمس؟"
},
{
"role": "assistant",
"content": "الشمس بعيدة علينا ب **150 مليون كيلومتر** تقريبا."
},
{
"role": "user",
"content": "ويلي ويلي شحال بعيداااا."
},
{
"role": "assistant",
"content": "واييه راه هاد الكون كبير بزاف صعيب الواحد يتخيل الكبورية ديالو."
},
{
"role": "user",
"content": "شناهية سنة ضوئية؟"
},
{
"role": "assistant",
"content": "هي المسافة اللي الضو كيضربها فعام."
},
{
"role": "user",
"content": "شحال فيها من كيلومتر؟"
},
{
"role": "assistant",
"content": "سنة ضوئية فيها **9.461 تريليون كيلومتر**."
},
{
"role": "user",
"content": "والله اتا بزاف هاد الكون."
},
{
"role": "assistant",
"content": "الى عندك شي سؤال خور غولهالي."
}
],
"source": "Manually generated",
"topic": "Science",
"safety_flags": []
}
```
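If you want to work with these fields programmatically, here is a minimal loading sketch using the Hugging Face `datasets` library. The repository ID below is an assumption; substitute the actual path of this repository:
```python
from datasets import load_dataset

# NOTE: the repository ID is a placeholder assumption; replace it with this repo's actual path.
dataset = load_dataset("ImadSaddik/BoDmagh", split="train")

# Inspect one conversation object and its metadata fields.
example = dataset[0]
print(example["id"], example["topic"], example["turns_count"], example["token_count"])
for message in example["conversation"]:
    print(f"{message['role']}: {message['content']}")
```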
### 2. Assistant's name
Some conversations mention the assistant's name, which is `بودماغ` (BoDmagh). If you come across it, you can replace it with any other name. Here's an example:
```json
[
{
"role": "user",
"content": "أهلا شنو سميتك؟"
},
{
"role": "assistant",
"content": "انا سميتي بودماغ، ساوبني عماد الصاديق باش نساعدك فاي حاجة اللي نقد نعاون فيها."
},
{
"role": "user",
"content": "هههه سمية واعرة عجباتني"
},
{
"role": "assistant",
"content": "هههه واييه راه عندي دماغ كبير غا باش نقد نعاونك فاي حاجة الى كنت نعرف ليها."
},
{
"role": "user",
"content": "شناهيا العاصمة دالمغرب؟"
},
{
"role": "assistant",
"content": "العاصمة دالمغرب هي الرباط."
}
]
```
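If you want to swap the name programmatically, a minimal sketch is shown below (the helper name and the replacement name are purely illustrative):
```python
conversation = [
    {"role": "user", "content": "أهلا شنو سميتك؟"},
    {"role": "assistant", "content": "انا سميتي بودماغ، ساوبني عماد الصاديق باش نساعدك فاي حاجة اللي نقد نعاون فيها."},
]

def rename_assistant(conversation, new_name, old_name="بودماغ"):
    """Return a copy of the conversation with the assistant's name replaced."""
    return [
        {"role": m["role"], "content": m["content"].replace(old_name, new_name)}
        for m in conversation
    ]

renamed = rename_assistant(conversation, new_name="Atlas")  # "Atlas" is just an example name
```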
### 3. Special tokens
This dataset doesn't use any special tokens. Since every fine-tuning setup uses its own set of special tokens, feel free to add the ones you need.
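For instance, if your setup expects a chat template, you can insert its special tokens (and an optional `system` message, see the note above) when flattening a conversation into a training string. The tokens below are purely illustrative; use whatever your tokenizer defines:
```python
# Illustrative special tokens; substitute the ones your tokenizer/model actually uses.
BOS, EOS, EOT = "<s>", "</s>", "<|end_of_turn|>"

def to_training_text(conversation, system_message=None):
    """Flatten one conversation into a single training string."""
    turns = []
    if system_message is not None:
        turns.append(f"system: {system_message}{EOT}")
    for message in conversation:
        turns.append(f"{message['role']}: {message['content']}{EOT}")
    return BOS + "\n".join(turns) + EOS
```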
## Contributing
As I mentioned, I'm manually creating this dataset to ensure high quality. If you'd like to help, follow these steps:
1. Fork the repository.
2. Create a new branch.
3. Add your new conversations to the `input_dataset.json` file. Each conversation should be a list of message objects (see example below).
4. Run the `enrich_dataset.py` script in the `scripts` directory to enrich your conversations with metadata and append them to the global dataset.
5. Create a pull request.
### Example: Adding to `input_dataset.json`
The `input_dataset.json` file should contain a list of conversations. Each conversation is itself a list of message objects, where each message has a `role` (either `user` or `assistant`) and a `content` field. For example:
```json
[
[
{"role": "user", "content": "أهلا شنو سميتك؟"},
{"role": "assistant", "content": "انا سميتي بودماغ، ساوبني عماد الصاديق باش نساعدك فاي حاجة اللي نقد نعاون فيها."}
],
[
{"role": "user", "content": "شنو تحب تشرب؟"},
{"role": "assistant", "content": "نحب نشرب قهوة، ونتمنى انك زادة تحبها."}
]
]
```
After adding your conversations, run:
```bash
python scripts/enrich_dataset.py
```
This will process your new conversations and add them (with metadata) to the main dataset.
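Before opening the pull request, it can help to sanity-check `input_dataset.json` against the structure described above. A minimal sketch (these checks are illustrative, not part of the project's scripts):
```python
import json

with open("input_dataset.json", encoding="utf-8") as f:
    conversations = json.load(f)

assert isinstance(conversations, list), "Top level must be a list of conversations."
for conversation in conversations:
    assert isinstance(conversation, list), "Each conversation must be a list of messages."
    for message in conversation:
        assert set(message) == {"role", "content"}, "Each message needs exactly 'role' and 'content'."
        assert message["role"] in {"user", "assistant"}, "Role must be 'user' or 'assistant'."
```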
## Wanna talk?
You can contact me through:
* **Discord:** Username: imad_saddik
* **LinkedIn:** [Connect with me](https://www.linkedin.com/in/imadsaddik/).
* **Email:** [simad3647@gmail.com](mailto:simad3647@gmail.com).