---
language:
- fr
tags:
- france
- legislation
- law
- embeddings
- open-data
- government
- parlement
pretty_name: French Legislative Dossiers Dataset (DOLE)
size_categories:
- 10K<n<100K
license: etalab-2.0
---

# 🇫🇷 French Legislative Dossiers Dataset (DOLE)

This dataset provides a semantic-ready, chunked and embedded version of the **Dossiers Législatifs** ("DOLE") published by the French government. It covers all **laws promulgated since the XIIᵉ legislature (June 2002)**, **ordinances**, and **legislative proposals** under preparation.
The original data is downloaded from [the dedicated **DILA** open data repository](https://echanges.dila.gouv.fr/OPENDATA/DOLE) and is also published on [data.gouv.fr](https://www.data.gouv.fr/datasets/dole-les-dossiers-legislatifs/).

Each article is chunked and vectorized using the [`BAAI/bge-m3`](https://huggingface.co/BAAI/bge-m3) embedding model, enabling use in systems such as **semantic search**, **retrieval-augmented generation (RAG)**, and **legal research**.

---

## 🗂️ Dataset Contents

The dataset is available in **Parquet format** and contains the following columns:

| Column Name | Type | Description |
|---------------------|------------------|-----------------------------------------------------------------------------|
| `chunk_id` | `str` | Unique identifier for each chunk. |
| `cid` | `str` | Legislative dossier identifier. |
| `chunk_number` | `int` | Index of the chunk within its parent document. |
| `category` | `str` | Type of dossier (e.g., `LOI_PUBLIEE`, `PROJET_LOI`, etc.). |
| `content_type` | `str` | Nature of the content: `article`, `dossier_content`, or `explanatory_memorandum`. |
| `title` | `str` | Title summarizing the subject matter. |
| `number` | `str` | Internal document number. |
| `wording` | `str` | Label (libellé): legislature reference (e.g., `XIVème législature`). |
| `creation_date` | `str` | Creation or publication date (`YYYY-MM-DD`). |
| `article_number` | `int` or `null` | Article number, if applicable. |
| `article_title` | `str` or `null` | Optional title of the article. |
| `article_synthesis` | `str` or `null` | Optional synthesis of the article. |
| `text` | `str` or `null` | Text content of the explanatory memorandum, article, or dossier content (contenu du dossier) chunk. |
| `chunk_text` | `str` | Concatenated text (`title` + article text or related content). |
| `embeddings_bge-m3` | `str` | Embedding vector of `chunk_text` using `BAAI/bge-m3`, stored as a JSON string. |

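As a quick illustration of this schema, the snippet below builds a toy DataFrame with fabricated rows (not real data; the `cid` values are made up) and applies a typical filter on `content_type`:

```python
import pandas as pd

# Fabricated rows mirroring the dataset schema (illustration only, not real data)
df = pd.DataFrame([
    {"chunk_id": "a1", "cid": "JORFDOLE000000000001", "content_type": "article",
     "article_number": 1, "chunk_text": "Titre\nArticle 1\n..."},
    {"chunk_id": "b2", "cid": "JORFDOLE000000000001", "content_type": "explanatory_memorandum",
     "article_number": None, "chunk_text": "Exposé des motifs..."},
])

# Keep only article-level chunks
articles = df[df["content_type"] == "article"]
print(len(articles))  # 1
```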
---

## 🛠️ Data Processing Methodology

### 🧩 1. Content Extraction

Each **dossier législatif** was parsed, processed, and standardized from its official XML structure.
Metadata, article blocks, and explanatory sections were normalized into a unified schema.
Specific rules apply per content type:

- `explanatory_memorandum`: Includes only the memorandum's introduction. The article syntheses contained in the memorandum are split by their `article_number` and stored in `article_synthesis`.
Article fields are `null`.
An explanatory memorandum (exposé des motifs) is an official text that accompanies a draft or proposed law.
It explains the reasons why the law is being proposed, the context in which it is set, and the objectives pursued by the legislator.
- `dossier_content`: Includes the dossier's textual content when the split by article was not possible.
The split may fail when no article numbers are mentioned in the dossier content, or when the processing code was not adapted to a specific edge case.
Article metadata fields are `null`.
- `article`: Structured content, where `article_number` and `text` are always present. `article_title` and `article_synthesis` may be missing.

- **Basic fields**: `cid`, `category`, `title`, `number`, `wording`, and `creation_date` were taken directly from the source XML file.
- **Generated fields**:
  - `chunk_id`: A unique hash for each text chunk.
  - `chunk_number`: Indicates the order of a chunk within the same document.
  - `content_type`: Nature of the content.
  - `article_number`: Number of the article. Available only if `content_type` is `article`.
  - `article_title`: Title of the article. Available only if `content_type` is `article`.
  - `article_synthesis`: Synthesis of the article, extracted from the explanatory memorandum. Available only if `content_type` is `article`.
- **Textual fields**:
  - `text`: Chunk of the main text content.
  It can be an article's text extracted from the dossier content, a chunk of the explanatory memorandum's introduction, or a chunk of the dossier content.
  - `chunk_text`: Combines `title` and the main `text` body to maximize embedding relevance.
  If `content_type` is `article`, the article number is also added.

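The exact concatenation format and hash scheme are not specified in the dataset card, but the field construction above can be sketched as follows (assuming a SHA-256 hash over the assembled chunk text; the function name and the "Article N" line format are illustrative):

```python
import hashlib

def build_chunk(title, text, article_number=None):
    """Assemble chunk_text (title + optional article number + body)
    and derive a deterministic chunk_id from it."""
    if article_number is not None:
        chunk_text = f"{title}\nArticle {article_number}\n{text}"
    else:
        chunk_text = f"{title}\n{text}"
    # A content hash makes the id stable across re-runs of the pipeline
    chunk_id = hashlib.sha256(chunk_text.encode("utf-8")).hexdigest()
    return {"chunk_id": chunk_id, "chunk_text": chunk_text}
```

Because the id is derived from the content, re-processing the same dossier yields the same `chunk_id`, which simplifies deduplication.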
### ✂️ 2. Chunk Generation

A `chunk_text` was built by combining the `title`, the `article_number` if applicable, and the corresponding `text` content section.
Chunking ensures semantic granularity for embedding purposes.

No recursive split was necessary for `content_type` = `article`, as legal articles and memoranda are inherently structured and relatively short.
Where needed, LangChain's `RecursiveCharacterTextSplitter` was used to produce the chunks (the `text` value), with the following parameters:

- `chunk_size` = 8000
- `chunk_overlap` = 400
- `length_function` = len

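The effect of these parameters can be illustrated with a minimal sliding-window splitter. This is a deliberate simplification: LangChain's `RecursiveCharacterTextSplitter` additionally tries to cut on paragraph, line, and word boundaries before falling back to raw character offsets.

```python
def split_text(text, chunk_size=8000, chunk_overlap=400):
    """Simplified character splitter: fixed-size windows that overlap by
    `chunk_overlap` characters, so context is preserved across chunk edges."""
    if len(text) <= chunk_size:
        return [text]
    step = chunk_size - chunk_overlap  # 7600 with the parameters above
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

With `chunk_size=8000` and `chunk_overlap=400`, a 20,000-character dossier yields three chunks, the last 400 characters of each chunk repeating at the start of the next.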
---

### 🧠 3. Embeddings Generation

Each `chunk_text` was embedded using the [**`BAAI/bge-m3`**](https://huggingface.co/BAAI/bge-m3) model.
The resulting embedding vector is stored in the `embeddings_bge-m3` column as a **string**, but can easily be parsed back into a `list[float]` or a NumPy array.

## 📌 Embedding Use Notice

⚠️ The `embeddings_bge-m3` column is stored as a **stringified list** of floats (e.g., `"[-0.03062629,-0.017049594,...]"`).
To use it as a vector, parse it into a list of floats or a NumPy array. For example, to load the dataset into a DataFrame:

```python
import pandas as pd
import json

df = pd.read_parquet("dole-latest.parquet")
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```

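Once parsed, the vectors can be used directly for semantic search. Below is a minimal cosine-similarity ranking sketch; the three-dimensional vectors are toy values (real `bge-m3` embeddings have 1024 dimensions), and in practice the query vector would be produced by embedding the query text with the same `BAAI/bge-m3` model.

```python
import json
import numpy as np

def top_k(query_vec, doc_vecs, k=3):
    """Return indices of the k chunks most cosine-similar to the query."""
    q = np.asarray(query_vec, dtype=np.float32)
    d = np.asarray(doc_vecs, dtype=np.float32)
    sims = d @ q / (np.linalg.norm(d, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(-sims)[:k]

# Toy stringified vectors, shaped like the embeddings_bge-m3 column
docs = [json.loads(s) for s in
        ["[1.0, 0.0, 0.0]", "[0.0, 1.0, 0.0]", "[0.9, 0.1, 0.0]"]]
query = [1.0, 0.0, 0.1]
print(top_k(query, docs))  # nearest chunks first
```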
---

## 📚 Source & License

### 🔗 Source:
- [**DILA** open data repository](https://echanges.dila.gouv.fr/OPENDATA/DOLE)
- [Data.gouv.fr: DOLE, les dossiers législatifs](https://www.data.gouv.fr/datasets/dole-les-dossiers-legislatifs/)

### 📄 License:
**Open License (Etalab) 2.0**: this dataset is publicly available and reusable under the Etalab open license.