---
language:
- fr
tags:
- france
- cnil
- loi
- deliberations
- decisions
- embeddings
- open-data
- government
pretty_name: CNIL Deliberations Dataset
size_categories:
- 10K<n<100K
license: etalab-2.0
configs:
- config_name: latest
  data_files: "data/cnil-latest/*.parquet"
  default: true
---

# 🇫🇷 CNIL Deliberations Dataset

This dataset is a processed and embedded version of the official deliberations and decisions published by the **CNIL** (Commission Nationale de l’Informatique et des Libertés), the French data protection authority.  
It includes a variety of legal documents such as opinions, recommendations, simplified norms, general authorizations, and formal decisions.

The original data is downloaded from [the dedicated **DILA** open data repository](https://echanges.dila.gouv.fr/OPENDATA/CNIL/) and is also [available on data.gouv.fr (Les délibérations de la CNIL)](https://www.data.gouv.fr/datasets/les-deliberations-de-la-cnil/).  
The dataset provides structured, chunked, semantic-ready data, making it suitable for **semantic search**, **AI legal assistants**, and **RAG pipelines**, among other uses.
The chunks have been embedded using the [`BAAI/bge-m3`](https://huggingface.co/BAAI/bge-m3) model.

---

## 🗂️ Dataset Contents

The dataset is provided in **Parquet format** and includes the following columns:

| Column Name        | Type             | Description                                                                 |
|--------------------|------------------|-----------------------------------------------------------------------------|
| `chunk_id`         | `str`            | Unique identifier for each chunk.                                           |
| `doc_id`           | `str`            | Document identifier of the deliberation.                                    |
| `chunk_index`      | `int`            | Index of the chunk within the same deliberation document, starting from 1. |
| `chunk_xxh64`      | `str`            | XXH64 hash of the `chunk_text` value.                                       |
| `nature`           | `str`            | Type of act (e.g., deliberation, decision...).                              |
| `status`           | `str`            | Status of the document (e.g., vigueur, vigueur_diff).                       |
| `nature_delib`     | `str`            | Specific nature of the deliberation.                                        |
| `title`            | `str`            | Title of the deliberation or decision.                                      |
| `full_title`       | `str`            | Full title of the deliberation or decision.                                 |
| `number`           | `str`            | Official reference number.                                                  |
| `date`             | `str`            | Date of publication (format: YYYY-MM-DD).                                   |
| `text`             | `str`            | Raw text content of the chunk extracted from the deliberation or decision. |
| `chunk_text`       | `str`            | Formatted text chunk used for embedding (includes title + content).         |
| `embeddings_bge-m3`| `str`            | Embedding vector of `chunk_text` using `BAAI/bge-m3`, stored as JSON string.|

---

## 🛠️ Data Processing Methodology

### 1. 📥 Field Extraction

Data was extracted from [the dedicated **DILA** open data repository](https://echanges.dila.gouv.fr/OPENDATA/CNIL/).  
The following transformations were applied:

- **Basic fields**: `doc_id` (cid), `title`, `full_title`, `number`, `date`, `nature`, `status`, and `nature_delib` were taken directly from the source XML file.
- **Generated fields**:
  - `chunk_id`: a unique identifier generated by combining the `doc_id` and the `chunk_index`.
  - `chunk_index`: the index of the chunk within a given deliberation document. Each document has a unique `doc_id`.
  - `chunk_xxh64`: the XXH64 hash of the `chunk_text` value, useful for detecting whether a chunk has changed from one version to the next (see the sketch after this list).
- **Textual fields**:
  - `text`: a chunk of the main text content.
  - `chunk_text`: combines the `title` and the main `text` body to maximize embedding relevance.
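
As a rough illustration of how the generated fields fit together, here is a minimal sketch using the `xxhash` package. The exact `chunk_id` format and hashing details of the actual pipeline may differ, and the identifiers below are hypothetical:

```python
import xxhash  # pip install xxhash

doc_id = "CNILTEXT000012345678"  # hypothetical document identifier
chunks = ["First chunk of a deliberation...", "Second chunk..."]

records = []
for chunk_index, chunk_text in enumerate(chunks, start=1):  # indices start at 1
    records.append({
        "chunk_id": f"{doc_id}-{chunk_index}",  # assumed doc_id + index format
        "chunk_index": chunk_index,
        "chunk_xxh64": xxhash.xxh64(chunk_text.encode("utf-8")).hexdigest(),
    })
```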

### 2. ✂️ Text Chunking

Each `chunk_text` value combines the `title` with the textual content chunk (`text`).
This strategy is designed to improve semantic search in document retrieval use cases on administrative procedures.

LangChain's `RecursiveCharacterTextSplitter` was used to produce the chunks (the `text` values), with the following parameters (see the sketch after this list):
- `chunk_size` = 1500 (to remain compatible with most LLMs' context windows)
- `chunk_overlap` = 200
- `length_function` = len
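
A minimal sketch of this chunking step, assuming the `langchain-text-splitters` package (older `langchain` releases expose the same class under `langchain.text_splitter`). The sample title and text are placeholders, and the exact title/text concatenation format is an assumption:

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=1500,     # parameters listed above
    chunk_overlap=200,
    length_function=len,
)

full_text = "Texte intégral d'une délibération..."  # placeholder document body
title = "Délibération n° 2024-001"                  # placeholder title

text_chunks = splitter.split_text(full_text)        # -> the `text` values
# `chunk_text` combines the title with each chunk (assumed concatenation format)
chunk_texts = [f"{title}\n\n{chunk}" for chunk in text_chunks]
```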

### 3. 🧠 Embeddings Generation

Each `chunk_text` was embedded using the [**`BAAI/bge-m3`**](https://huggingface.co/BAAI/bge-m3) model. The resulting embedding vector is stored in the `embeddings_bge-m3` column as a **string**, but can easily be parsed back into a `list[float]` or a NumPy array.
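
As a minimal sketch of this step, assuming the `FlagEmbedding` package (the reference implementation for `bge-m3`); the batch size, precision, and other settings of the actual pipeline are not documented here:

```python
import json

from FlagEmbedding import BGEM3FlagModel  # pip install FlagEmbedding

model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)

chunk_texts = ["Délibération n° 2024-001\n\nPremier extrait..."]  # placeholder
# encode() returns a dict; the dense vectors live under "dense_vecs"
dense_vecs = model.encode(chunk_texts)["dense_vecs"]

# Store each vector as a JSON string, mirroring the `embeddings_bge-m3` column
embeddings_as_str = [json.dumps(vec.tolist()) for vec in dense_vecs]
```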

## 📌 Embedding Use Notice

⚠️ The `embeddings_bge-m3` column is stored as a **stringified list** of floats (e.g., `"[-0.03062629,-0.017049594,...]"`).
To use it as a vector, parse it back into a list of floats or a NumPy array. For example, to load the dataset into a dataframe with the `datasets` library:

```python
import pandas as pd
import json
from datasets import load_dataset
# PyArrow must be installed in your environment for this example: pip install pyarrow

dataset = load_dataset("AgentPublic/cnil")
df = pd.DataFrame(dataset['train'])
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```

Otherwise, if you have already downloaded all the parquet files from the `data/cnil-latest/` folder:
```python
import pandas as pd
import json
# PyArrow must be installed in your environment for this example: pip install pyarrow

df = pd.read_parquet(path="cnil-latest/")  # assuming all parquet files are located in this folder
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```

You can then use the dataframe however you like, for example by inserting its rows into the vector database of your choice.
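
For instance, here is a minimal semantic-search sketch over the parsed embeddings, reusing the `df` from the snippets above. The query must be embedded with the same `BAAI/bge-m3` model, and the French query string is just a placeholder:

```python
import numpy as np
from FlagEmbedding import BGEM3FlagModel

# Embed the query with the same model used to build the dataset
model = BGEM3FlagModel("BAAI/bge-m3")
query_vec = np.asarray(model.encode(["durée de conservation des données"])["dense_vecs"][0])

# Stack the parsed embeddings into a matrix and L2-normalize everything,
# so that a dot product equals cosine similarity
matrix = np.vstack(df["embeddings_bge-m3"].to_numpy())
matrix /= np.linalg.norm(matrix, axis=1, keepdims=True)
query_vec /= np.linalg.norm(query_vec)

# Show the 5 most similar chunks
top_k = np.argsort(matrix @ query_vec)[::-1][:5]
print(df.iloc[top_k][["title", "chunk_text"]])
```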

## 🐱 GitHub repository
The MediaTech project is open source! You are free to contribute or explore the complete code used to build the dataset in the [GitHub repository](https://github.com/etalab-ia/mediatech).

## 📚 Source & License

### 🔗 Source:
- [**DILA** open data repository](https://echanges.dila.gouv.fr/OPENDATA/CNIL)
- [Data.gouv.fr : Les délibérations de la CNIL](https://www.data.gouv.fr/datasets/les-deliberations-de-la-cnil/)

### 📄 License:
**Open License 2.0 (Etalab)**: this dataset is publicly available and may be reused under the conditions of the Etalab open license.