---
license: cc-by-4.0
language:
- hu
---
# Reddit Dataset (Semantic Chunks)

Hungarian Reddit conversations dataset preprocessed with semantic chunking.
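The exact chunking pipeline is not documented in this card. The sketch below shows one common form of semantic chunking for illustration only, assuming sentence-level embeddings from `sentence-transformers` and a cosine-similarity threshold for starting a new chunk; the model name and threshold are placeholders, not the values used to build this dataset.

```python
from sentence_transformers import SentenceTransformer
import numpy as np

# Illustrative multilingual embedding model; the model actually used for
# chunking this dataset is not documented here.
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

def semantic_chunks(sentences: list[str], threshold: float = 0.5) -> list[str]:
    """Group consecutive sentences into chunks, starting a new chunk when
    cosine similarity to the previous sentence drops below the threshold."""
    embeddings = model.encode(sentences, normalize_embeddings=True)
    chunks, current = [], [sentences[0]]
    for prev, curr, sent in zip(embeddings, embeddings[1:], sentences[1:]):
        if float(np.dot(prev, curr)) < threshold:
            chunks.append(" ".join(current))
            current = []
        current.append(sent)
    chunks.append(" ".join(current))
    return chunks

print(semantic_chunks(["Szia!", "Mi újság?", "A hétvégi futam nagyon izgalmas volt."]))
```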
## Stats

| Statistic | Value |
|---|---|
| **Rows** | 1,066,356 |
| **Tokens** | 42,313,152 |
| **Tokenizer** | `magyar-nlp-szine-java/exotic_modernbert_128k_tokenizer_modified` |
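The `token_count` values refer to the tokenizer listed above. A minimal sketch for reproducing a per-chunk count, assuming the tokenizer loads via `transformers.AutoTokenizer` and that special tokens are excluded (an assumption, not stated in this card):

```python
from transformers import AutoTokenizer

# Tokenizer named in the stats table; assumes it is available on the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained(
    "magyar-nlp-szine-java/exotic_modernbert_128k_tokenizer_modified"
)

def count_tokens(text: str) -> int:
    # Counts tokens without special tokens; whether special tokens were
    # included in `token_count` is an assumption, not documented here.
    return len(tokenizer.encode(text, add_special_tokens=False))

print(count_tokens("Szia, mi újság a magyar Redditen?"))
```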
## Columns

- `text` - Chunked text content
- `token_count` - Token count per chunk
- `source_id` - Original source row index
- `chunk_id` - Unique chunk identifier
- `subreddit` - Source subreddit
- `type` - Submission or comment
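
A minimal loading sketch using the 🤗 `datasets` library and the columns described above; the repository ID is a placeholder for this dataset's actual Hub path:

```python
from datasets import load_dataset

# Placeholder repository ID; replace with this dataset's actual Hub path.
ds = load_dataset("your-org/reddit-semantic-chunks-hu", split="train")

# Example: keep only comment chunks shorter than 512 tokens.
short_comments = ds.filter(
    lambda row: row["type"] == "comment" and row["token_count"] < 512
)
print(short_comments[0]["subreddit"], short_comments[0]["text"][:100])
```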