|
|
--- |
|
|
language: |
|
|
- fr |
|
|
tags: |
|
|
- france |
|
|
- legislation |
|
|
- law |
|
|
- loi |
|
|
- codes |
|
|
- embeddings |
|
|
- open-data |
|
|
- government |
|
|
- legifrance |
|
|
pretty_name: French Consolidated Legislation Dataset (LEGI) |
|
|
size_categories: |
|
|
- 1M<n<10M |
|
|
license: etalab-2.0 |
|
|
configs: |
|
|
- config_name: latest |
|
|
data_files: "data/legi-latest/*/*.parquet" |
|
|
default: true |
|
|
--- |
|
|
|
|
|
# 🇫🇷 French Consolidated Legislation Dataset (LEGI) |
|
|
|
|
|
This dataset contains a **semantic-ready, embedded version** of the **full consolidated text of French national legislation and regulations**, as published in the official **LEGI** database by Légifrance. |
|
|
The original data is downloaded from [the dedicated **DILA** open data repository](https://echanges.dila.gouv.fr/OPENDATA/LEGI) and is also published on [data.gouv.fr](https://www.data.gouv.fr/datasets/legi-codes-lois-et-reglements-consolides/). |
|
|
|
|
|
The original and full **LEGI** dataset includes: |
|
|
- All **laws, codes, decrees, circulars, deliberations, decree-laws, ordinances, etc.** since 1945. |
|
|
- A selection of consolidated **ministerial orders (arrêtés)**. |
|
|
- All **official codes in force**, and **repealed codes**. |
|
|
- And more. |
|
|
|
|
|
In this version, only articles with the following statuses are included: |
|
|
- **In force** (`VIGUEUR`) |
|
|
- **Modified** (`MODIFIE`): content that has been modified and is therefore no longer in force. |
|
|
- **Deferred abrogation** (`ABROGE_DIFF`): content that remains valid until the indicated end date. |
|
|
|
|
|
Each article is chunked and vectorized using the [`BAAI/bge-m3`](https://huggingface.co/BAAI/bge-m3) embedding model, enabling use in **semantic search**, **retrieval-augmented generation (RAG)**, and **legal research** systems, for example. |
|
|
|
|
|
The dataset is split into subfolders by 'category' and 'code', making it easier to use for specific use cases. |
|
|
|
|
|
--- |
|
|
|
|
|
## 🗂️ Dataset Contents |
|
|
|
|
|
The dataset is provided in **Parquet format** and includes the following columns: |
|
|
|
|
|
| Column Name | Type | Description | |
|
|
|---------------------|----------|-----------------------------------------------------------------------------| |
|
|
| `chunk_id` | `str` | Unique chunk identifier. | |
|
|
| `doc_id` | `str` | LEGI article source identifier. | |
|
|
| `chunk_index`        | `int`    | Index of the chunk within its original document, starting from 1.            | |
|
|
| `chunk_xxh64` | `str` | XXH64 hash of the `chunk_text` value. | |
|
|
| `nature` | `str` | Nature of the text (e.g., `Article`). | |
|
|
| `category` | `str` | Type of legislative document (e.g., `LOI`, `DECRET`, `ARRETE`, etc.). | |
|
|
| `ministry` | `str` | Ministry responsible for the publication, if applicable. | |
|
|
| `status` | `str` | Status of the article: `VIGUEUR` (in force), `MODIFIE` (modified) or `ABROGE_DIFF` (abrogated deferred).| |
|
|
| `title` | `str` | Short title of the legislative document. | |
|
|
| `full_title` | `str` | Full formal title of the text. | |
|
|
| `subtitles` | `str` | Subtitle(s) indicating article grouping or section. | |
|
|
| `number` | `str` | Article or section number. | |
|
|
| `start_date` | `str` | Date the article entered into force (format: YYYY-MM-DD). | |
|
|
| `end_date` | `str` | End of validity date (or `2999-01-01` if still in force). | |
|
|
| `nota` | `str` | Additional notes, if present. | |
|
|
| `links`              | `list[dict]` | List of legal links associated with the text, with their metadata.       | |
|
|
| `text` | `str` | Textual content chunk of the article (or subsection). | |
|
|
| `chunk_text` | `str` | Formatted full chunk including `full_title`, `number`, formatted `subtitles` and `text`.| |
|
|
| `embeddings_bge-m3` | `str`    | Embedding vector of `chunk_text` using `BAAI/bge-m3`, stored as a JSON array string. | |
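
As a quick check on this schema, here is a minimal sketch that prints the column names and types of a single Parquet file with PyArrow (the file path is illustrative; point it at any file you have downloaded):

```python
import pyarrow.parquet as pq

# Illustrative path: any Parquet file from data/legi-latest/ works here
schema = pq.read_schema("data/legi-latest/code/part-0.parquet")
for field in schema:
    print(field.name, field.type)
```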
|
|
|
|
|
--- |
|
|
|
|
|
## 🛠️ Data Processing Methodology |
|
|
|
|
|
### 📥 1. Field Extraction |
|
|
The following fields were extracted and/or transformed from the original source: |
|
|
|
|
|
- **Basic fields**: |
|
|
- `doc_id`, `nature`, `category`, `ministry`, `status`, `title`, `full_title`, `subtitles`, `number`, `start_date`, and `end_date` are extracted directly from the metadata of each article. |
|
|
|
|
|
- **Generated fields**: |
|
|
- `chunk_id`: a unique chunk identifier, generated by combining the `doc_id` and the `chunk_index`. |


- `chunk_index`: the index of the chunk within its article. Each article has a unique `doc_id`. |


- `chunk_xxh64`: the XXH64 hash of the `chunk_text` value, useful for detecting whether a `chunk_text` has changed from one version to another. |
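
As an illustration, here is a minimal sketch of how these fields can be derived with the Python `xxhash` package (the exact `chunk_id` format, including the separator, is an assumption):

```python
import xxhash

doc_id = "LEGIARTI000006419280"  # illustrative LEGI article identifier
chunk_index = 1
chunk_text = "Code civil - Article 1 ..."

chunk_id = f"{doc_id}_{chunk_index}"  # assumed format combining doc_id and chunk_index
chunk_xxh64 = xxhash.xxh64(chunk_text.encode("utf-8")).hexdigest()
print(chunk_id, chunk_xxh64)
```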
|
|
|
|
|
- **Textual fields**: |
|
|
- `nota`: additional notes, also extracted directly from the metadata. |
|
|
- `text`: chunk of the main text content. |
|
|
- `chunk_text`: generated by concatenating `full_title`, article `number`, formatted `subtitles` and `text`. |
|
|
|
|
|
- **List-based fields**: |
|
|
- `links`: list of legal relationships associated with the article, extracted from the LEGI metadata. |
|
|
Each element is a JSON object describing a link to another legal document (source or target), with fields such as: |
|
|
- `title`: official title of the related document. |
|
|
- `doc_id` / `text_doc_id`: LEGI or JORF identifier of the related document. |
|
|
- `number` / `text_number`: article or act number when applicable. |
|
|
- `category`: type of the related document (e.g. `LOI`, `DECRET`, etc.). |
|
|
- `link_type`: nature of the relationship (e.g. `TXT_SOURCE`, `CITATION`). |
|
|
- `link_direction`: direction of the relationship, indicating whether the element carrying the link is the source or the target of the link (e.g. `source` when the current article cites another text, `cible` when it is cited by another text). |
|
|
- `nor`: NOR identifier of the related act, when available. |
|
|
- `text_signature_date`: signature date of the related text (YYYY-MM-DD), when available. |
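
For example, here is a minimal sketch of how the `links` column can be exploited, assuming a dataframe `df` loaded as shown in the examples below:

```python
# Print the texts cited by the first article of the dataframe
row = df.iloc[0]
for link in row["links"]:
    if link.get("link_type") == "CITATION":
        print(link.get("category"), link.get("title"), link.get("doc_id"))
```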
|
|
|
|
|
### ✂️ 2. Generation of `chunk_text` |
|
|
|
|
|
Each article is naturally isolated and has its own `cid` value, but articles that are too long have to be split into several chunks. |
|
|
LangChain's `RecursiveCharacterTextSplitter` was used to produce these chunks, which correspond to the `text` value (a minimal sketch follows the parameter list below). The parameters used are: |
|
|
|
|
|
- `chunk_size` = 5000 |
|
|
- `chunk_overlap` = 250 |
|
|
- `length_function` = len |
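
A minimal sketch of this splitting step, assuming the `langchain-text-splitters` package (older LangChain versions expose the same class under `langchain.text_splitter`):

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=5000,      # maximum number of characters per chunk
    chunk_overlap=250,    # characters shared between consecutive chunks
    length_function=len,  # chunk length is measured in characters
)

article_text = "..."  # full text content of a LEGI article
chunks = splitter.split_text(article_text)  # each element becomes a `text` value
```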
|
|
|
|
|
For each chunk (`text`), a `chunk_text` is constructed as follows: |
|
|
|
|
|
``` text |
|
|
"{full_title} - Article {number} |
|
|
{formatted subtitles} |
|
|
{text}" |
|
|
``` |
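
In Python terms, this amounts to something like the following (the values are examples and the exact subtitle formatting is an assumption):

```python
# Illustrative assembly of a chunk_text from its components
full_title = "Code civil"
number = "1"
formatted_subtitles = "Titre préliminaire : De la publication, des effets et de l'application des lois en général"
text = "Les lois et, lorsqu'ils sont publiés au Journal officiel de la République française, ..."

chunk_text = f"{full_title} - Article {number}\n{formatted_subtitles}\n{text}"
print(chunk_text)
```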
|
|
|
|
|
### 🧠 3. Embeddings Generation |
|
|
|
|
|
Each `chunk_text` was embedded using the [**`BAAI/bge-m3`**](https://huggingface.co/BAAI/bge-m3) model. |
|
|
The resulting embedding vector is stored in the `embeddings_bge-m3` column as a **string**, but can easily be parsed back into a `list[float]` or NumPy array. |
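
For reference, here is a minimal sketch of how such embeddings can be generated with the `FlagEmbedding` package (batching and serialization details of the actual pipeline may differ; see the GitHub repository below for the real code):

```python
import json
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)

chunk_texts = ["Code civil - Article 1\n..."]  # illustrative input
dense_vecs = model.encode(chunk_texts)["dense_vecs"]  # shape: (n_chunks, 1024)

# Serialize each vector as a JSON array string, matching the column format
embeddings = [json.dumps(vec.tolist()) for vec in dense_vecs]
```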
|
|
|
|
|
## 📌 Embedding Use Notice |
|
|
|
|
|
⚠️ The `embeddings_bge-m3` column is stored as a **stringified list** of floats (e.g., `"[-0.03062629,-0.017049594,...]"`). |
|
|
To use it as a vector, you need to parse it into a list of floats or a NumPy array. For example, to load the dataset into a dataframe using the `datasets` library: |
|
|
|
|
|
```python |
|
|
import pandas as pd |
|
|
import json |
|
|
from datasets import load_dataset |
|
|
# PyArrow must be installed in your Python environment for this example: pip install pyarrow |
|
|
|
|
|
dataset = load_dataset("AgentPublic/legi") # Loading the full dataset |
|
|
df = pd.DataFrame(dataset['train']) |
|
|
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads) |
|
|
``` |
|
|
|
|
|
Otherwise, if you have already downloaded some Parquet files from the `data/legi-latest/` folder: |
|
|
```python |
|
|
import pandas as pd |
|
|
import json |
|
|
# PyArrow must be installed in your Python environment for this example: pip install pyarrow |
|
|
|
|
|
df = pd.read_parquet(path="legi-latest/")  # Assuming all Parquet files are located in this folder |
|
|
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads) |
|
|
``` |
|
|
|
|
|
You can then use the dataframe as you wish, for example by inserting its data into the vector database of your choice, or by running a quick in-memory semantic search as sketched below. |
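
For instance, here is a minimal in-memory semantic-search sketch with NumPy, assuming `df` from the examples above with the embeddings already parsed (bge-m3 dense vectors are L2-normalized, so a dot product gives the cosine similarity):

```python
import numpy as np
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)
query_vec = model.encode(["durée légale du travail"])["dense_vecs"][0]

matrix = np.vstack(df["embeddings_bge-m3"].to_numpy())  # shape: (n_chunks, 1024)
scores = matrix @ query_vec  # cosine similarity of each chunk with the query
top_idx = np.argsort(scores)[::-1][:5]
print(df.iloc[top_idx][["title", "number", "status"]])
```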
|
|
|
|
|
## 🐱 GitHub Repository |
|
|
The MediaTech project is open source! You are free to contribute, or to browse the complete code used to build this dataset, in the [GitHub repository](https://github.com/etalab-ia/mediatech). |
|
|
|
|
|
## 📚 Source & License |
|
|
|
|
|
### 🔗 Sources |
|
|
- [**DILA** open data repository](https://echanges.dila.gouv.fr/OPENDATA/LEGI) |
|
|
- [Data.gouv.fr: LEGI: Codes, lois et règlements consolidés](https://www.data.gouv.fr/datasets/legi-codes-lois-et-reglements-consolides/) |
|
|
|
|
|
### 📄 License |
|
|
**Open License (Etalab) 2.0**: this dataset is publicly available and can be reused under the conditions of the Etalab open license. |