---
license: cc-by-4.0
language:
- hu
---
# Reddit Dataset (Semantic Chunks)
A dataset of Hungarian Reddit conversations, preprocessed with semantic chunking.
## Stats
| Stat | Value |
|---|---|
| Rows | 1,066,356 |
| Tokens | 42,313,152 |
| Tokenizer | magyar-nlp-szine-java/exotic_modernbert_128k_tokenizer_modified |
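
The token counts above come from the tokenizer listed in the table. A minimal sketch of reproducing a per-chunk count with Hugging Face `transformers`, assuming the tokenizer loads via `AutoTokenizer` and that special tokens were excluded from the reported counts:

```python
from transformers import AutoTokenizer

# Load the tokenizer named in the stats table (assumes it is
# published on the Hugging Face Hub under this identifier).
tokenizer = AutoTokenizer.from_pretrained(
    "magyar-nlp-szine-java/exotic_modernbert_128k_tokenizer_modified"
)

def count_tokens(text: str) -> int:
    # Whether the reported counts include special tokens is an
    # assumption; here they are excluded.
    return len(tokenizer.encode(text, add_special_tokens=False))

print(count_tokens("Szia! Ez egy példa magyar szöveg."))
```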
## Columns
- `text` - Chunked text content
- `token_count` - Token count per chunk
- `source_id` - Original source row index
- `chunk_id` - Unique chunk identifier
- `subreddit` - Source subreddit
- `type` - Submission or comment
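
A minimal loading sketch using the `datasets` library. The repository ID below is a placeholder, since the card does not state it, and the exact values of the `type` column (e.g. lowercase `"submission"`) are an assumption:

```python
from datasets import load_dataset

# Placeholder repo ID - substitute the actual dataset repository.
ds = load_dataset("your-org/reddit-hu-semantic-chunks", split="train")

# Inspect the schema described above.
print(ds.column_names)
# Expected: ['text', 'token_count', 'source_id', 'chunk_id', 'subreddit', 'type']

# Example: keep only submission chunks from one subreddit.
# Both the value "submission" and the subreddit name are hypothetical.
subs = ds.filter(
    lambda row: row["type"] == "submission" and row["subreddit"] == "hungary"
)
print(len(subs), subs[0]["text"][:100])
```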