# Telegram Dialogues Dataset (Pre-processed for Base LLMs)
This dataset contains parsed and heavily cleaned Telegram chat histories, specifically formatted for Causal Language Modeling (CLM) fine-tuning of Base LLMs. It is designed to be used without Chat Templates, allowing the model to learn the natural flow of human conversation.
## Dataset Statistics
| Metric | Count |
|---|---|
| Total Samples (Chunks) | 2,806,820 |
| Total Tokens | 426,985,690 |
| Total Words (Approx.) | 193,555,002 |
## Data Processing Pipeline
This dataset was generated using an automated pipeline designed to extract high-quality human dialogues while aggressively filtering out spam, bots, and PII.
### 1. Text Cleaning & Normalization
- **Bot & Service Messages:** Automatically removed audio-transcription bot messages (e.g., Voicy, Wit.ai) and standard Telegram bot commands (`/help`, `/start`, etc.).
- **PII & Links:** Stripped all URLs (`http`/`https`, `www`), email addresses, and phone numbers.
- **System Messages:** Removed Telegram system notifications (e.g., "User joined the group").
- **Emojis:** Removed emojis from the text.
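A minimal sketch of this cleaning stage, assuming simple regex filters. The patterns and the `clean_message` helper below are illustrative approximations, not the exact pipeline code:

```python
import re

# Illustrative patterns; the actual pipeline may use stricter or different regexes.
URL_RE = re.compile(r"(?:https?://|www\.)\S+", re.IGNORECASE)
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE_RE = re.compile(r"\+?\d[\d\s()\-]{7,}\d")                      # rough phone-number match
BOT_COMMAND_RE = re.compile(r"^/\w+(?:@\w+)?\b.*$", re.MULTILINE)    # /start, /help@SomeBot ...
EMOJI_RE = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF\U0001F1E6-\U0001F1FF]+")

def clean_message(text: str) -> str:
    """Strip URLs, e-mails, phone numbers, bot commands, and emojis from one message."""
    for pattern in (URL_RE, EMAIL_RE, PHONE_RE, BOT_COMMAND_RE, EMOJI_RE):
        text = pattern.sub("", text)
    return re.sub(r"\s+", " ", text).strip()  # collapse leftover whitespace
```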
### 2. Smart Bot & Spam Detection
To ensure conversational quality, the pipeline identifies and drops automated accounts using a mathematical heuristic:
- **Target Group:** Analyzed the top `10.0%` most active users.
- **Type-Token Ratio (TTR):** Users with a vocabulary richness (TTR) below `0.15` were flagged as potential bots.
- **N-gram Analysis:** Confirmed bots by detecting highly repetitive `5`-word phrases (spam templates).
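A rough sketch of these heuristics. The thresholds (TTR below 0.15, 5-word n-grams, top 10% of users) come from this card; the function names and the repetition cutoff are assumptions:

```python
from collections import Counter

TTR_THRESHOLD = 0.15   # vocabulary-richness cutoff from this card
NGRAM_SIZE = 5         # repetitive phrase length from this card

def type_token_ratio(messages: list[str]) -> float:
    """Unique words / total words across all of a user's messages."""
    tokens = [w for m in messages for w in m.split()]
    return len(set(tokens)) / len(tokens) if tokens else 1.0

def has_repetitive_ngrams(messages: list[str], min_repeats: int = 10) -> bool:
    """Flag users who repeat the same 5-word phrase many times (min_repeats is an assumed cutoff)."""
    ngrams = Counter()
    for m in messages:
        words = m.split()
        for i in range(len(words) - NGRAM_SIZE + 1):
            ngrams[tuple(words[i:i + NGRAM_SIZE])] += 1
    return bool(ngrams) and ngrams.most_common(1)[0][1] >= min_repeats

def looks_like_bot(messages: list[str]) -> bool:
    # In the pipeline this check is applied only to the top 10% most active users.
    return type_token_ratio(messages) < TTR_THRESHOLD and has_repetitive_ngrams(messages)
```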
### 3. Dialogue Splitting & Chunking
- **Session Splitting:** Messages from the same user were grouped into continuous dialogues. If the time gap between two consecutive messages exceeded `60` minutes, the dialogue was split into a new session.
- **Sliding Window Tokenization:** Dialogues were tokenized with the `Qwen/Qwen3-0.6B-Base` tokenizer.
- **Chunking Strategy:** To preserve context without exceeding the context limit, dialogues longer than `512` tokens were split using a sliding window with an overlap of `64` tokens. This ensures the model always has preceding context when generating text at chunk boundaries.
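The sketch below approximates the splitting and chunking logic with the stated parameters (60-minute gap, 512-token chunks, 64-token overlap); it is not the original script, and it assumes messages are already grouped per chat:

```python
from transformers import AutoTokenizer

MAX_GAP_MINUTES = 60
CHUNK_SIZE = 512
CHUNK_OVERLAP = 64

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B-Base")

def split_sessions(messages):
    """messages: list of (timestamp, text) pairs sorted by time, timestamps as datetime objects."""
    sessions, current = [], []
    for ts, text in messages:
        if current and (ts - current[-1][0]).total_seconds() > MAX_GAP_MINUTES * 60:
            sessions.append(current)   # gap too large -> start a new session
            current = []
        current.append((ts, text))
    if current:
        sessions.append(current)
    return sessions

def chunk_session(session, separator=" "):
    """Join one session into a single string and split it into overlapping token chunks."""
    text = separator.join(t for _, t in session)
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    step = CHUNK_SIZE - CHUNK_OVERLAP
    chunks = []
    for start in range(0, len(ids), step):
        chunks.append(tokenizer.decode(ids[start:start + CHUNK_SIZE]))
        if start + CHUNK_SIZE >= len(ids):
            break   # last window reached the end of the dialogue
    return chunks
```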
## Generation Hyperparameters
| Parameter | Value |
|---|---|
| `input_type` | `json` |
| `min_messages` | `50` |
| `max_gap_minutes` | `60` |
| `msg_separator` | `" "` |
| `tokenizer` | `Qwen/Qwen3-0.6B-Base` |
| `chunk_size` | `512` |
| `chunk_overlap` | `64` |
| `whitelist` | `None` |
## Usage with Unsloth
The dataset is saved in JSONL format, where each line is a JSON object with a "text" key.
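For illustration, a record can be read like this; the dialogue text shown is invented, not an actual sample from the dataset:

```python
import json

# Invented example of one JSONL line (real records contain cleaned Telegram dialogue chunks).
line = '{"text": "hey, are we still on for six? yeah, might be ten minutes late. no worries, see you there"}'
record = json.loads(line)
print(record["text"])
```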
**CRITICAL:** The text is already chunked to the full sequence length with a sliding-window overlap, so you MUST disable packing during training. Otherwise the trainer concatenates unrelated chunks into a single sequence, mixing conversations and defeating the carefully engineered context overlap.
```python
from datasets import load_dataset
from trl import SFTTrainer

dataset = load_dataset("qzeaq/telegram-dialogues", split="train")

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = 512,
    packing = False,  # MUST BE FALSE!
    # ... other arguments ...
)
```