---
license: apache-2.0
language:
- en
- he
pretty_name: d
---

# 🌍 MCGPT-1: Mega General Dataset (Pre-training Phase)

This dataset is the backbone of the **MCGPT-1** model, developed by **TopAI-IL**. It contains a large collection of Minecraft-related knowledge, technical data, and general language patterns.

## 📊 Dataset Statistics

* **Total Tokens:** 188,508,365 🪙
* **Total Lines:** 96,852 📝
* **File Size:** 638.25 MB 📂
* **Focus:** Minecraft mechanics, technical data, and general knowledge.

## 🎯 Purpose

This dataset is designed for the **pre-training** phase of Large Language Models (LLMs). It gives the model the necessary "world knowledge" before it moves on to instruction tuning.

## 🛠️ Data Quality

The data has been cleaned and formatted to ensure high-quality learning for small-to-medium-scale models (specifically optimized for models of roughly 16M–100M parameters).

## 🚀 Usage

This is part 1 of a multi-dataset series. For best results with the MCGPT-1 architecture, combine it with:

1. **Synthetic CoT Dataset** (identity & reasoning)
2. **Reddit Interaction Dataset** (human-like conversational data)

---

**Maintained by:** רזיאל (Raziel) @ TopAI-IL
**Year:** 2026
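
A quick back-of-the-envelope check on the statistics above (assuming the reported token and line counts are exact): each line averages close to two thousand tokens, so every line is effectively a full document rather than a short sentence.

```python
# Figures taken from the Dataset Statistics section above.
total_tokens = 188_508_365
total_lines = 96_852

# Average tokens per line; a large value indicates document-level lines.
avg_tokens_per_line = total_tokens / total_lines
print(round(avg_tokens_per_line))  # → 1946
```

This is worth knowing when pre-training small models, since long lines may need chunking to fit a short context window.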