---
license: mit
task_categories:
- text-retrieval
- sentence-similarity
language:
- en
tags:
- rag
- chunking
source_datasets:
- The Pile
---
# FreeChunk Corpus

This dataset is derived from **The Pile** and is used for evaluating and training chunking models, specifically for the FreeChunk framework.

## Dataset Structure

The dataset consists of documents split into sentences.

### Features

- `uuid`: Unique identifier for the document.
- `sentences`: List of sentences in the document.
- `source`: Source of the document (e.g., a subset of The Pile).
- `original_token_count`: Number of tokens in the original document.
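
Assuming the feature names above, a single record might look like the following illustrative sketch. All values here are invented for demonstration and do not come from the dataset itself:

```python
# Illustrative record matching the schema above; every value is invented.
record = {
    "uuid": "0b3d1f2a-7c4e-4a9b-8d6f-1e2c3a4b5d6e",  # unique document identifier
    "sentences": [
        "The Pile is a large, diverse text corpus.",
        "It aggregates many sources into one dataset.",
    ],  # the document, pre-split into sentences
    "source": "Pile-CC",  # originating subset of The Pile (example value)
    "original_token_count": 18,  # token count of the original document
}

# A chunking model consumes the sentence list; joining it recovers the document text.
document = " ".join(record["sentences"])
print(document)
```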
## Usage

```python
from datasets import load_dataset

dataset = load_dataset("XiaSheng/FreeChunk-corpus")
```