Tasks: Text Generation (language-modeling)
Modalities: Text
Formats: parquet
Size: 10B - 100B
Update README.md
```diff
@@ -62,6 +62,10 @@ configs:
   data_files:
   - path: data/natural/it-en/*/*.parquet
     split: train
+- config_name: natural
+  data_files:
+  - path: data/natural/*/*/*.parquet
+    split: train
 - config_name: code
   data_files:
   - path: data/code/*/*/*parquet
```
```diff
@@ -403,3 +407,97 @@ configs:
   - path: data/natural/*/YouTube/*.parquet
     split: train
 ---
```
# Dataset Card

The Lucie Training Dataset is a curated collection of text data in English, French, German, Spanish, and Italian, drawn from the web, from video subtitles, from collections of books, newspapers, monographs, and magazines processed by Optical Character Recognition (OCR), and from collections of files in a variety of programming languages.

It was used to pretrain [Lucie-7B](https://huggingface.co/OpenLLM-France/Lucie-7B), a foundation LLM with strong capabilities in French and English.
## Dataset Description

This dataset was built to provide an extensive and diverse corpus for training Large Language Models (LLMs), with the following motivations in mind:
* Data mix:
  * French is represented as well as English (the Lucie Training Dataset is one of the largest collections of French text data meeting a minimum quality bar),
  * German, Spanish, and Italian are also represented to some extent,
  * Code is included to boost the reasoning capabilities of LLMs.
* Data filtering and deduplication:
  * The dataset is cleaned of low-quality data,
  * The dataset is deduplicated to some extent, following best practices.
* Ethics:
  * Special care was taken to respect copyright laws and the privacy of individuals. All books, newspapers, monographs, and magazines are in the public domain (which depends on the author's date of death and the country of publication).
  * No web data is included from sites whose robots.txt files forbid crawling.
### Dataset Structure

The corpus contains the following information for each text sample:
* `text`: the text sample itself.
* `source`: an identifier for the source(s) of the text sample (`Wikipedia`, `RedPajama`, `Gutenberg`, …). The list of all sources is described in this document.
* `id`: an identifier that is unique within the source.
* `language`: the language of the text sample, which can be:
  * the ISO 639-1 code of a natural language: `en`, `fr`, `de`, `es`, or `it`;
  * the common name of a programming language prefixed by `code:`: `code:python`, `code:c++`, …; or
  * a comma-separated list of ISO 639-1 codes, if the text sample is multilingual: `fr,en`, `de,fr`, `es,en`, `it,en` (or in the opposite order if the languages appear in the opposite order in the text).
* `url` (optional): the URL of the original text sample on the web, if available.
* `title` (optional): the title of the original text sample, if available.
* `author` (optional): the author of the original text sample, if available. This is usually the author name in plain text, except for `Gutenberg`, where it is a JSON-serialized object of the author metadata.
* `date` (optional): the publication date of the original text sample, if available. The date format depends on the source.
* `quality_signals` (optional): a list of quality signals about the text sample (which could be used for further filtering or sample weighting). It can include indicators computed by `fasttext` and `CCNet`, statistics about occurrences of characters, words, special characters, etc. This field is always a JSON-serialized object.
* `extra` (optional): JSON-serialized extra information about the text sample. This can include metadata about the source subset, the rights, etc.

Examples of metadata (excluding `text`) are shown for each source in [metadata_examples.json](metadata_examples.json).
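Since `quality_signals` and `extra` are JSON-serialized, they can be decoded with Python's standard `json` module. A minimal sketch, using a hypothetical record (the field names follow the schema above; the values are made up, and real examples are in [metadata_examples.json](metadata_examples.json)):

```python
import json

# Hypothetical record with the fields described above;
# the field names follow the schema, the values are illustrative.
sample = {
    "text": "Bonjour le monde.",
    "source": "Wikipedia",
    "id": "example-0001",
    "language": "fr",
    "quality_signals": '{"n_words": 3, "n_chars": 17}',
    "extra": '{"rights": "public domain"}',
}

# Decode the JSON-serialized fields (optional fields may be absent):
quality_signals = json.loads(sample.get("quality_signals") or "{}")
extra = json.loads(sample.get("extra") or "{}")

print(quality_signals["n_words"], extra["rights"])  # prints: 3 public domain
```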
### Example use in Python

Load the dataset using the `datasets` library:
```python
from datasets import load_dataset

kwargs = {"split": "train", "streaming": True}

dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", **kwargs)
```
Several configurations are available to select a language, a source, or both, as illustrated in the following examples.

Load only data in French:
```python
dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "fr", **kwargs)
```
Load data that is aligned in French and English:
```python
dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "fr,en", **kwargs)
```
Load only data corresponding to programming languages:
```python
dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "code", **kwargs)
```
Load only data in Python:
```python
dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "code:python", **kwargs)
```
Load only data from Wikipedia:
```python
dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "Wikipedia", **kwargs)
```
Load only data from Wikipedia in French:
```python
dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "Wikipedia-fr", **kwargs)
```
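With `streaming=True`, the returned dataset is iterated lazily, so a few records can be inspected without downloading the whole dataset. A small sketch, where the `take` helper is illustrative (not part of the `datasets` API):

```python
from itertools import islice

def take(dataset, n):
    """Return the first n records of an iterable dataset as a list."""
    return list(islice(dataset, n))

# Usage, assuming network access to the Hugging Face Hub:
# from datasets import load_dataset
# dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset",
#                        "Wikipedia-fr", split="train", streaming=True)
# for record in take(dataset, 3):
#     print(record["source"], record["language"], record["text"][:80])
```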