---
tags:
- datasets
- discord
- chatml
- conversation
- dialogue
- multi-turn
- single-turn
- fine-tuning
- reward-model
- llm-training
- chat-dataset
- open-source
- anonymized-data
- slang
- casual-dialogue
license: apache-2.0
language:
- en
pretty_name: Discord-OpenMicae
size_categories:
- 100K<n<1M
---
<p align="center">
<img src="assets/OpenMicae.png" alt="OpenMicae">
</p>
> **Discord-OpenMicae** is a dataset of anonymized Discord conversations collected from late spring through late summer 2025, formatted for ChatML and intended for training and evaluating conversational AI models.
- **250k+ Single-Turn Exchanges (STX)** – standalone user → reply pairs
- **100k+ Multi-Turn Chains** – two-participant reply chains, variable length
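As a rough illustration of what "ChatML-friendly" means here, the sketch below renders a single-turn exchange with standard ChatML role tags. The helper and the sample turns are hypothetical, not the dataset's actual field names or contents:

```python
def to_chatml(turns):
    """Render a list of (role, text) turns as a ChatML string."""
    return "".join(
        f"<|im_start|>{role}\n{text}<|im_end|>\n" for role, text in turns
    )

# Hypothetical single-turn exchange (STX): one user message, one reply.
sample = [("user", "anyone up for ranked?"), ("assistant", "sure, inv me")]
print(to_chatml(sample))
```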
---
<p align="center">
<a href="https://atlas.nomic.ai/data/mookiezi/discord-openmicae/map">
<img src="assets/OpenMicae-Map.png" alt="OpenMicae Map">
</a>
</p>
<p align="center">
<a href="https://atlas.nomic.ai/data/mookiezi/discord-openmicae/map"><strong>Nomic Atlas Map</strong></a>
</p>
---
## Features
- Human-only dialogues (no bots)
- Links, embeds, and commands removed
- Trading posts, code blocks, and LFG (looking-for-group) posts removed
- Two-author chains only
- Consecutive self-replies from the same author merged into a single message
- Cleaned and deduplicated for relevance
- Primarily English, with some other languages present
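The self-reply merging step above can be sketched as follows. This is a minimal illustration under an assumed `(author, text)` message structure, not the actual preprocessing code:

```python
def merge_self_replies(messages):
    """Collapse consecutive messages from the same author into one message."""
    merged = []
    for author, text in messages:
        if merged and merged[-1][0] == author:
            # Same author replied to themselves: join the texts.
            merged[-1] = (author, merged[-1][1] + "\n" + text)
        else:
            merged.append((author, text))
    return merged

chain = [("a", "you free?"), ("a", "for a game"), ("b", "yeah one sec")]
print(merge_self_replies(chain))
# → [('a', 'you free?\nfor a game'), ('b', 'yeah one sec')]
```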
## Use
- Fine-tuning conversational models
- Training relevance/reward models
- Dialogue generation research
## Dataset
| Subset | Samples | Description |
|--------|-----------|---------------------------------------|
| STX | 260,670 | Single-turn prompt/response pairs |
| Chains | 101,480 | Multi-turn conversations (2 authors) |
## Text Statistics
| Metric | Value |
|-----------------------|------------:|
| Samples (count) | 362,150 |
| Min length (tokens) | 24 |
| Max length (tokens) | 106 |
| Mean length (tokens) | 61.96 |
| Median length (tokens)| 59 |
| Std dev (tokens) | 14.62 |
- **Total tokens:** 22,437,828 (using the [Hermes-3-Llama-3.2-3B tokenizer](https://huggingface.co/NousResearch/Hermes-3-Llama-3.2-3B))
- **Total characters:** 106,956,446
- **Total words:** 14,950,203
- **Assistant blocks:** 480,917
### Length Distribution (tokens)
| Bin (tokens) | Count |
|--------------|--------:|
| 31–38 | 19,953 |
| 39–46 | 21,765 |
| 47–54 | 76,180 |
| 55–62 | 99,760 |
| 63–70 | 60,461 |
| 71–78 | 36,277 |
| 79–86 | 21,161 |
| 87–94 | 14,873 |
| 95–102 | 9,614 |
| 103–110 | 2,721 |
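The bins above are fixed 8-token-wide buckets. A minimal sketch of how such a distribution could be recomputed from per-sample token counts (the input lengths here are illustrative, not drawn from the dataset):

```python
from collections import Counter

def bin_lengths(lengths, start=31, width=8):
    """Bucket token counts into fixed-width bins labeled 'lo-hi'."""
    bins = Counter()
    for n in lengths:
        idx = (n - start) // width
        lo = start + idx * width
        bins[f"{lo}-{lo + width - 1}"] += 1
    return dict(bins)

print(bin_lengths([33, 40, 40, 59, 60, 61, 104]))
# → {'31-38': 1, '39-46': 2, '55-62': 3, '103-110': 1}
```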
## License
This project is licensed under the Apache License 2.0.
## How to cite
```bibtex
@misc{discord-openmicae,
    title  = {Discord-OpenMicae},
    author = {mookiezi},
    year   = {2025},
    url    = {https://huggingface.co/datasets/mookiezi/Discord-OpenMicae}
}
```
## Related
- [mookiezi/Discord-Dialogues](https://huggingface.co/datasets/mookiezi/Discord-Dialogues)
- [mookiezi/Discord-Micae-Hermes-3-3B](https://huggingface.co/mookiezi/Discord-Micae-Hermes-3-3B)
- [NousResearch/Hermes-3-Llama-3.2-3B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.2-3B)
## Disclaimer
All data was collected in accordance with Discord's Terms of Service.