---
license: apache-2.0
language:
- en
- he
pretty_name: MCGPT-1 Mega General Dataset
---

# MCGPT-1: Mega General Dataset (Pre-training Phase)

This dataset is the backbone of the **MCGPT-1** model, developed by **TopAI-IL**. It contains a massive collection of Minecraft-related knowledge, technical data, and general language patterns.

## Dataset Statistics

* **Total Tokens:** 188,508,365
* **Total Lines:** 96,852
* **File Size:** 638.25 MB
* **Focus:** Minecraft mechanics, technical data, and general knowledge.
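Statistics like those above can be recomputed with a short script. The sketch below assumes the corpus is a single UTF-8 plain-text file; whitespace splitting is only a stand-in for the model's actual tokenizer, so its token count will not match the 188.5M figure exactly.

```python
import os

def corpus_stats(path):
    """Count lines and whitespace-separated tokens, and report file size in MB."""
    lines = 0
    tokens = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            lines += 1
            tokens += len(line.split())  # rough proxy for a real tokenizer's count
    size_mb = os.path.getsize(path) / (1024 * 1024)
    return {"lines": lines, "tokens": tokens, "size_mb": round(size_mb, 2)}
```

Streaming line by line keeps memory flat even for a multi-hundred-MB file.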

## Purpose

This dataset is designed for the **pre-training** phase of Large Language Models (LLMs). It provides the model with the necessary "world knowledge" before moving into specific instruction tuning.

## Data Quality

The data has been cleaned and formatted to ensure high-quality learning for small-to-medium scale models (specifically optimized for models around 16M-100M parameters).

## Usage

This is part 1 of a multi-dataset series. For best results with the MCGPT-1 architecture, combine it with:

1. **Synthetic CoT Dataset** (identity & reasoning)
2. **Reddit Interaction Dataset** (human-like conversational data)
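The card does not specify mixing ratios for the three corpora, so the sketch below shows only one hypothetical way to interleave them during pre-training; the corpus names and `weights` values are placeholders, not recommendations.

```python
import random

def mix_corpora(corpora, weights, n_samples, seed=0):
    """Draw n_samples lines, picking a source corpus per draw according to weights."""
    rng = random.Random(seed)
    names = list(corpora)
    return [rng.choice(corpora[rng.choices(names, weights=weights)[0]])
            for _ in range(n_samples)]

# Hypothetical usage: weight the general pre-training corpus most heavily.
batch = mix_corpora(
    {"mega_general": ["line a", "line b"],
     "synthetic_cot": ["line c"],
     "reddit": ["line d"]},
    weights=[0.8, 0.1, 0.1],
    n_samples=4,
)
```

A fixed seed keeps the sampling reproducible across runs.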

---

**Maintained by:** רזיאל (Raziel) @ TopAI-IL

**Year:** 2026