---
language:
- fr
tags:
- france
- legislation
- law
- loi
- codes
- embeddings
- open-data
- government
- legifrance
pretty_name: French Consolidated Legislation Dataset (LEGI)
size_categories:
- 100K<n<1M
license: etalab-2.0
---

# 🇫🇷 French Consolidated Legislation Dataset (LEGI)

This dataset contains a **semantic-ready, embedded version** of the French **full consolidated text of national legislation and regulations**, as published in the official **LEGI** database by Légifrance.
The original data is downloaded from [the dedicated **DILA** open data repository](https://echanges.dila.gouv.fr/OPENDATA/LEGI) and is also published on [data.gouv.fr](https://www.data.gouv.fr/datasets/legi-codes-lois-et-reglements-consolides/).

The original, full **LEGI** dataset includes:
- All **laws, codes, decrees, circulars, deliberations, decree-laws, ordinances, etc.** since 1945.
- A selection of consolidated **ministerial orders (arrêtés)**.
- All **73 official codes in force** and **29 repealed codes**.
- And more.

This version includes only the article versions that are currently **in force** (`VIGUEUR`) or **abrogated with deferred effect** (`ABROGE_DIFF`).

Each article is chunked and vectorized using the [`BAAI/bge-m3`](https://huggingface.co/BAAI/bge-m3) embedding model, making the dataset suitable for **semantic search**, **retrieval-augmented generation (RAG)**, and **legal research** systems, for example.

---

## 🗂️ Dataset Contents

The dataset is provided in **Parquet format** and includes the following columns:

| Column Name | Type | Description |
|---------------------|----------|-----------------------------------------------------------------------------|
| `chunk_id` | `str` | Unique chunk identifier. |
| `cid` | `str` | LEGI article source identifier. |
| `chunk_number` | `int` | Index of the chunk within the article. |
| `nature` | `str` | Nature of the text (e.g., `Article`). |
| `category` | `str` | Type of legislative document (e.g., `LOI`, `DECRET`, `ARRETE`). |
| `ministry` | `str` | Ministry responsible for the publication, if applicable. |
| `status` | `str` | Status of the article: `VIGUEUR` (in force) or `ABROGE_DIFF` (abrogated with deferred effect). |
| `title` | `str` | Short title of the legislative document. |
| `full_title` | `str` | Full formal title of the text. |
| `subtitles` | `str` | Subtitle(s) indicating the article's grouping or section. |
| `number` | `str` | Article or section number. |
| `start_date` | `str` | Date the article entered into force (format: `YYYY-MM-DD`). |
| `end_date` | `str` | End-of-validity date (`2999-01-01` if still in force). |
| `nota` | `str` | Additional notes, if present. |
| `text` | `str` | Textual content chunk of the article (or subsection). |
| `chunk_text` | `str` | Formatted full chunk including `full_title`, `number`, formatted `subtitles`, and `text`. |
| `embeddings_bge-m3` | `str` | Embedding vector of `chunk_text` from `BAAI/bge-m3`, stored as a JSON array string. |

---

## 🛠️ Data Processing Methodology

### 📥 1. Field Extraction

The following fields were extracted and/or transformed from the original source:

- **Basic fields**:
  - `cid`, `nature`, `category`, `ministry`, `status`, `title`, `full_title`, `subtitles`, `number`, `start_date`, and `end_date` are extracted directly from the metadata of each article.

- **Generated fields**:
  - `chunk_id`: a generated unique identifier combining the `cid` and `chunk_number`.
  - `chunk_number`: index of the chunk within the original article.

- **Textual fields**:
  - `nota`: additional notes, also extracted directly from the metadata.
  - `text`: chunk of the main text content.
  - `chunk_text`: generated by concatenating `full_title`, the article `number`, formatted `subtitles`, and `text`.

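The `chunk_id` construction described above can be sketched as follows. The exact format is an assumption made for illustration: the card only states that `cid` and `chunk_number` are combined, not which separator is used.

```python
# Hypothetical sketch of chunk_id generation: the dataset documents only
# that `cid` and `chunk_number` are combined into a unique identifier;
# the underscore separator here is an illustrative assumption.
def make_chunk_id(cid: str, chunk_number: int) -> str:
    return f"{cid}_{chunk_number}"

print(make_chunk_id("LEGIARTI000006419280", 0))
```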
### ✂️ 2. Generation of `chunk_text`

Each article is naturally isolated and has its own `cid` value, but articles that are too long must be split into several chunks.
LangChain's `RecursiveCharacterTextSplitter` was used to produce these chunks, which correspond to the `text` value, with the following parameters:

- `chunk_size` = 5000
- `chunk_overlap` = 250
- `length_function` = len

For each chunk (`text`), a `chunk_text` is constructed as follows:

```text
"{full_title} - Article {number}
{formatted subtitles}
{text}"
```
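The split-and-assemble step above can be sketched in plain Python. The field values are invented for illustration, and `naive_split` is a deliberately simplified stand-in for LangChain's `RecursiveCharacterTextSplitter` (which prefers splitting at paragraph and sentence boundaries rather than at fixed character offsets):

```python
# Illustrative sketch of the chunking + chunk_text assembly described above.
# The real pipeline uses LangChain's RecursiveCharacterTextSplitter with
# chunk_size=5000, chunk_overlap=250, length_function=len.

def build_chunk_text(full_title: str, number: str, subtitles: str, text: str) -> str:
    """Concatenate metadata and chunk body following the template above."""
    return f"{full_title} - Article {number}\n{subtitles}\n{text}"

def naive_split(text: str, chunk_size: int = 5000, chunk_overlap: int = 250) -> list[str]:
    """Fixed-window character splitter standing in for the LangChain splitter."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
        start += chunk_size - chunk_overlap
    return chunks

article_body = "Lorem ipsum " * 1000  # 12,000 characters -> 3 chunks
chunks = naive_split(article_body)
chunk_texts = [
    build_chunk_text("Code civil", "1234", "Livre Ier > Titre II", c)
    for c in chunks
]
print(len(chunks))
```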

### 🧠 3. Embeddings Generation

Each `chunk_text` was embedded using the [**`BAAI/bge-m3`**](https://huggingface.co/BAAI/bge-m3) model.
The resulting embedding vector is stored in the `embeddings_bge-m3` column as a **string**, but it can easily be parsed back into a `list[float]` or a NumPy array.

## 📌 Embedding Use Notice

⚠️ The `embeddings_bge-m3` column is stored as a **stringified list** of floats (e.g., `"[-0.03062629,-0.017049594,...]"`).
To use it as a vector, you need to parse it into a list of floats or a NumPy array. For example, to load the dataset into a DataFrame:

```python
import json

import pandas as pd

df = pd.read_parquet("legi-latest.parquet")
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```

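Once parsed, the vectors can feed a simple semantic search. A minimal sketch with NumPy and cosine similarity follows; the 3-dimensional vectors are tiny synthetic stand-ins (real `bge-m3` embeddings are 1024-dimensional, and the query vector would come from the same model):

```python
# Minimal cosine-similarity search over parsed embeddings.
# Vectors here are synthetic toys standing in for bge-m3 outputs.
import numpy as np

def top_k(query_vec: np.ndarray, matrix: np.ndarray, k: int = 2) -> np.ndarray:
    """Return indices of the k rows of `matrix` most cosine-similar to `query_vec`."""
    q = query_vec / np.linalg.norm(query_vec)
    m = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    scores = m @ q                      # cosine similarity per row
    return np.argsort(scores)[::-1][:k]

chunk_vectors = np.array([
    [0.9, 0.1, 0.0],   # chunk 0
    [0.0, 1.0, 0.0],   # chunk 1
    [0.8, 0.2, 0.1],   # chunk 2
])
query = np.array([1.0, 0.0, 0.0])
print(top_k(query, chunk_vectors))
```

In practice, `chunk_vectors` would be `np.vstack(df["embeddings_bge-m3"])` after the parsing step above, or the vectors would be loaded into a dedicated vector store.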

## 📚 Source & License

### 🔗 Source

- [**DILA** open data repository](https://echanges.dila.gouv.fr/OPENDATA/LEGI)
- [data.gouv.fr: LEGI — Codes, lois et règlements consolidés](https://www.data.gouv.fr/datasets/legi-codes-lois-et-reglements-consolides/)

### 📄 License

**Open License (Etalab)** — this dataset is publicly available and may be reused under the conditions of the Etalab open license.