---
license: cc-by-4.0
language:
- hu
---
# Reddit Dataset (Semantic Chunks)
A dataset of Hungarian Reddit conversations, preprocessed with semantic chunking.
## Stats
| Stat | Value |
|---|---|
| **Rows** | 1,066,356 |
| **Tokens** | 42,313,152 |
| **Tokenizer** | `magyar-nlp-szine-java/exotic_modernbert_128k_tokenizer_modified` |
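
The token counts above were produced with the tokenizer named in the table. A minimal sketch of how a per-chunk count could be reproduced is shown below; it assumes the tokenizer repository is publicly loadable via `AutoTokenizer` and that counts exclude special tokens, both of which are assumptions rather than facts stated on this card.

```python
# Sketch: recompute a chunk's token count with the card's tokenizer.
# Assumption: counts exclude special tokens.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "magyar-nlp-szine-java/exotic_modernbert_128k_tokenizer_modified"
)

def count_tokens(text: str) -> int:
    # Encode without special tokens so the count reflects only the chunk content.
    return len(tokenizer.encode(text, add_special_tokens=False))

print(count_tokens("Ez egy rövid magyar példa."))
```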
## Columns
- `text` - Chunked text content
- `token_count` - Token count per chunk
- `source_id` - Original source row index
- `chunk_id` - Unique chunk identifier
- `subreddit` - Source subreddit
- `type` - Row type: submission or comment
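
A hedged usage sketch follows, showing how the columns above can be accessed with `datasets`. The repository id and split name are placeholders, not values taken from this card; substitute the actual Hub id of this dataset. The subreddit value used in the filter is purely illustrative.

```python
# Sketch: load the dataset and filter on the columns listed above.
from datasets import load_dataset

# Hypothetical repository id and split; replace with this dataset's real Hub id.
ds = load_dataset("your-org/reddit-semantic-chunks", split="train")

# Keep only comment rows from one subreddit (column names per the list above).
comments = ds.filter(
    lambda row: row["type"] == "comment" and row["subreddit"] == "hungary"
)

print(comments[0]["text"], comments[0]["token_count"])
```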