---
language:
- fr
tags:
- france
- cnil
- loi
- deliberations
- decisions
- embeddings
- open-data
- government
pretty_name: CNIL Deliberations Dataset
size_categories:
- 10K<n<100K
license: etalab-2.0
---

# 🇫🇷 CNIL Deliberations Dataset

This dataset is a processed and embedded version of the official deliberations and decisions published by the **CNIL** (Commission Nationale de l’Informatique et des Libertés), the French data protection authority.
It includes a variety of legal documents such as opinions, recommendations, simplified norms, general authorizations, and formal decisions.

The original data is downloaded from [the dedicated **DILA** open data repository](https://echanges.dila.gouv.fr/OPENDATA/CNIL/), and the dataset is also [available on data.gouv.fr (Les délibérations de la CNIL)](https://www.data.gouv.fr/datasets/les-deliberations-de-la-cnil/).
The dataset provides semantic-ready, structured, chunked data, making it suitable for **semantic search**, **AI legal assistants**, and **RAG pipelines**, among other use cases.
These chunks have been embedded using the [`BAAI/bge-m3`](https://huggingface.co/BAAI/bge-m3) model.

---

## 🗂️ Dataset Contents

The dataset is provided in **Parquet format** and includes the following columns:

| Column Name         | Type  | Description                                                                    |
|---------------------|-------|--------------------------------------------------------------------------------|
| `chunk_id`          | `str` | Unique identifier for each chunk.                                              |
| `cid`               | `str` | Internal identifier of the deliberation.                                       |
| `chunk_number`      | `int` | Index of the chunk within the same deliberation document.                      |
| `nature`            | `str` | Type of act (e.g., deliberation, decision...).                                 |
| `status`            | `str` | Status of the document (e.g., vigueur, vigueur_diff).                          |
| `nature_delib`      | `str` | Specific nature of the deliberation.                                           |
| `title`             | `str` | Title of the deliberation or decision.                                         |
| `full_title`        | `str` | Full title of the deliberation or decision.                                    |
| `number`            | `str` | Official reference number.                                                     |
| `date`              | `str` | Date of publication (format: YYYY-MM-DD).                                      |
| `text`              | `str` | Raw text content of the chunk extracted from the deliberation or decision.     |
| `chunk_text`        | `str` | Formatted text chunk used for embedding (includes title + content).            |
| `embeddings_bge-m3` | `str` | Embedding vector of `chunk_text` using `BAAI/bge-m3`, stored as a JSON string. |

---

## 🛠️ Data Processing Methodology

### 1. 📥 Field Extraction

Data was extracted from [the dedicated **DILA** open data repository](https://echanges.dila.gouv.fr/OPENDATA/CNIL/).
The following transformations were applied:

- **Basic fields**: `cid`, `title`, `full_title`, `number`, `date`, `nature`, `status`, and `nature_delib` were taken directly from the source XML file.
- **Generated fields**:
  - `chunk_id`: A unique hash for each text chunk.
  - `chunk_number`: Indicates the order of a chunk within the same deliberation.
- **Textual fields**:
  - `text`: Chunk of the main text content.
  - `chunk_text`: Combines `title` and the main `text` body to maximize embedding relevance.
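The card states only that `chunk_id` is a unique hash; the exact derivation is not documented. A minimal sketch of one way such a deterministic identifier could be computed (the hash inputs and algorithm below are assumptions, not the authors' method):

```python
import hashlib

def make_chunk_id(cid: str, chunk_number: int, chunk_text: str) -> str:
    # Hypothetical: hash the parent document id, the chunk index, and the
    # chunk text together, so identical inputs always yield the same id.
    payload = f"{cid}|{chunk_number}|{chunk_text}".encode("utf-8")
    return hashlib.sha256(payload).hexdigest()
```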

### 2. ✂️ Text Chunking

Each stored chunk combines the `title` with a chunk of the textual content (`text`).
This strategy is designed to improve semantic search for document search use cases on administrative procedures.

Langchain's `RecursiveCharacterTextSplitter` was used to produce these chunks (the `text` value), with the following parameters:
- `chunk_size` = 1500 (to maximize compatibility with most LLMs' context windows)
- `chunk_overlap` = 200
- `length_function` = len

### 3. 🧠 Embeddings Generation

Each `chunk_text` was embedded using the [**`BAAI/bge-m3`**](https://huggingface.co/BAAI/bge-m3) model. The resulting embedding vector is stored in the `embeddings_bge-m3` column as a **string**, but can easily be parsed back into a `list[float]` or a NumPy array.
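The exact embedding script is not published with the dataset. A hypothetical sketch of how the vectors could be produced and serialized to the stored string format, using the `FlagEmbedding` package (function names and options here are assumptions, not the authors' pipeline):

```python
import json

def to_json_string(vec) -> str:
    # Serialize a dense vector the way the dataset stores it:
    # a JSON list of floats kept in a string column.
    return json.dumps([float(x) for x in vec])

def embed_chunks(chunk_texts):
    # Hypothetical embedding pass; requires `pip install FlagEmbedding`.
    from FlagEmbedding import BGEM3FlagModel

    model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)
    dense = model.encode(chunk_texts)["dense_vecs"]  # one dense vector per text
    return [to_json_string(v) for v in dense]
```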

## 📌 Embedding Use Notice

⚠️ The `embeddings_bge-m3` column is stored as a **stringified list** of floats (e.g., `"[-0.03062629,-0.017049594,...]"`).
To use it as a vector, parse it back into a list of floats or a NumPy array. For example, to load the dataset into a dataframe:

```python
import pandas as pd
import json

# Parse the stringified embedding column back into lists of floats
df = pd.read_parquet("cnil-latest.parquet")
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```
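Once parsed, the vectors can be used directly for similarity search. A minimal sketch (it assumes the query string has already been embedded with the same `bge-m3` model):

```python
import numpy as np

def top_k(query_vec, embeddings, k: int = 5):
    # Cosine similarity between one query vector and an (n, d) matrix of
    # chunk embeddings; returns the indices of the k closest rows.
    q = np.asarray(query_vec, dtype=np.float32)
    m = np.asarray(embeddings, dtype=np.float32)
    sims = (m @ q) / (np.linalg.norm(m, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(-sims)[:k].tolist()
```

For example, `df.iloc[top_k(query_vec, np.stack(df["embeddings_bge-m3"]))]` retrieves the rows of the closest chunks.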

## 📚 Source & License

### 🔗 Source
- [**DILA** open data repository](https://echanges.dila.gouv.fr/OPENDATA/CNIL)
- [Data.gouv.fr: Les délibérations de la CNIL](https://www.data.gouv.fr/datasets/les-deliberations-de-la-cnil/)

### 📄 License
**Open License (Etalab)** — This dataset is publicly available and can be reused under the conditions of the Etalab open license.