---
language:
- fr
tags:
- france
- constitution
- council
- conseil-constitutionnel
- decisions
- justice
- embeddings
- open-data
- government
pretty_name: French Constitutional Council Decisions Dataset
size_categories:
- 10K<n<100K
license: etalab-2.0
configs:
- config_name: latest
data_files: data/constit-latest/*.parquet
default: true
---

# 🇫🇷 French Constitutional Council Decisions Dataset (Conseil constitutionnel)
This dataset is a processed and embedded version of all decisions issued by the Conseil constitutionnel (French Constitutional Council) since its creation in 1958. It includes full legal texts of decisions, covering constitutional case law, electoral disputes, and other related matters. The original data is downloaded from the dedicated DILA open data repository and is also published on data.gouv.fr.
The dataset provides semantic-ready, structured, and chunked content of constitutional decisions, suitable for semantic search, AI legal assistants, or retrieval-augmented generation (RAG) pipelines, for example.
Each chunk of text has been vectorized using the BAAI/bge-m3 embedding model.
## 🗂️ Dataset Contents
The dataset is provided in Parquet format and includes the following columns:
| Column Name | Type | Description |
|---|---|---|
| `chunk_id` | str | Unique generated identifier for each text chunk. |
| `doc_id` | str | Document identifier from the source site. |
| `chunk_index` | int | Index of the chunk within the same document, starting from 1. |
| `chunk_xxh64` | str | XXH64 hash of the `chunk_text` value. |
| `nature` | str | Nature of the decision (e.g., Non lieu à statuer, Conformité, etc.). |
| `solution` | str | Legal outcome or conclusion of the decision. |
| `title` | str | Title summarizing the subject matter of the decision. |
| `number` | str | Official number of the decision (e.g., 2019-790). |
| `decision_date` | str | Date of the decision (format: YYYY-MM-DD). |
| `text` | str | Raw full-text content of the chunk. |
| `chunk_text` | str | Formatted full chunk including title and text. |
| `embeddings_bge-m3` | str | Embedding vector of `chunk_text` using BAAI/bge-m3, stored as a JSON array string. |
## 🛠️ Data Processing Methodology
### 📥 1. Field Extraction
The following fields were extracted and/or transformed from the original source:
**Basic fields:**

- `doc_id` (cid), `title`, `nature`, `solution`, `number`, and `decision_date` are extracted directly from the metadata of each decision record.
**Generated fields:**

- `chunk_id`: a generated unique identifier combining the `doc_id` and the `chunk_index`.
- `chunk_index`: the index of the chunk within a single document. Each document has a unique `doc_id`.
- `chunk_xxh64`: the XXH64 hash of the `chunk_text` value, useful for detecting whether the `chunk_text` value has changed from one version to another (see the sketch below).
**Textual fields:**

- `text`: a chunk of the main text content.
- `chunk_text`: generated by concatenating `title` and `text`.
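As an illustration, here is a minimal sketch of how the generated fields could be derived. The `chunk_id` format and all example values are assumptions for illustration, not the project's actual code; the XXH64 hash comes from the `xxhash` package.

```python
# Minimal sketch of the generated fields; the chunk_id format and the
# example values are hypothetical, not the project's actual code.
import xxhash  # pip install xxhash

doc_id = "CONSTEXT000017667284"  # hypothetical document identifier
chunk_index = 1                  # chunk indices start from 1
chunk_text = "Décision n° 2019-790 DC\nLe Conseil constitutionnel, ..."

chunk_id = f"{doc_id}_{chunk_index}"  # combines doc_id and chunk_index
chunk_xxh64 = xxhash.xxh64(chunk_text.encode("utf-8")).hexdigest()  # detects changes across versions
```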
### ✂️ 2. Generation of `chunk_text`
LangChain's `RecursiveCharacterTextSplitter` was used to produce the chunks stored in the `text` column. The parameters used are:

- `chunk_size` = 1500 (to maximize compatibility with the context windows of most LLMs)
- `chunk_overlap` = 200
- `length_function` = `len`

The value of `chunk_text` includes both the `title` and the chunked textual content. This strategy is designed to improve document search.
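A minimal sketch of this chunking step, assuming the `langchain-text-splitters` package; the example values and the exact way `title` and `text` are concatenated are assumptions:

```python
# Minimal sketch of the chunking step described above.
from langchain_text_splitters import RecursiveCharacterTextSplitter  # pip install langchain-text-splitters

splitter = RecursiveCharacterTextSplitter(
    chunk_size=1500,      # maximum characters per chunk
    chunk_overlap=200,    # characters shared between consecutive chunks
    length_function=len,  # chunk length measured in characters
)

title = "Décision n° 2019-790 DC"                # hypothetical example values
decision_text = "Le Conseil constitutionnel, ..."

chunks = splitter.split_text(decision_text)      # -> values of the `text` column
chunk_texts = [f"{title}\n{c}" for c in chunks]  # -> values of the `chunk_text` column (assumed format)
```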
### 🧠 3. Embeddings Generation
Each `chunk_text` was embedded using the BAAI/bge-m3 model.
The resulting embedding is stored as a JSON-stringified array of 1024 floating-point numbers in the `embeddings_bge-m3` column.
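For illustration, here is a minimal sketch of this embedding step using the `FlagEmbedding` package; this is a plausible reconstruction, not necessarily the project's exact pipeline:

```python
# Minimal sketch of the embedding step; not necessarily the exact pipeline.
import json

from FlagEmbedding import BGEM3FlagModel  # pip install -U FlagEmbedding

model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)

chunk_texts = ["Décision n° 2019-790 DC\nLe Conseil constitutionnel, ..."]  # hypothetical input
dense_vecs = model.encode(chunk_texts)["dense_vecs"]        # one 1024-dim dense vector per chunk
embeddings = [json.dumps(v.tolist()) for v in dense_vecs]   # stored as JSON array strings
```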
## 📌 Embedding Use Notice
⚠️ The `embeddings_bge-m3` column is stored as a stringified list of floats (e.g., `"[-0.03062629,-0.017049594,...]"`).

To use it as a vector, you need to parse it into a list of floats or a NumPy array. For example, to load the dataset into a dataframe using the `datasets` library:
```python
import json

import pandas as pd
from datasets import load_dataset

# The pyarrow library must be installed in your environment: pip install pyarrow
dataset = load_dataset("AgentPublic/constit")
df = pd.DataFrame(dataset["train"])
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```
Otherwise, if you have already downloaded all the Parquet files from the `data/constit-latest/` folder:
```python
import json

import pandas as pd

# The pyarrow library must be installed in your environment: pip install pyarrow
df = pd.read_parquet(path="constit-latest/")  # assumes all the Parquet files are located in this folder
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```
You can then use the dataframe as you wish, for example by inserting its contents into the vector database of your choice.
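As a further illustration, here is a minimal brute-force semantic search over the parsed embeddings. It assumes the `df` built in the snippets above and uses the `FlagEmbedding` package to embed the query with the same model; the query string is hypothetical:

```python
# Minimal brute-force semantic search sketch; assumes `df` from the snippets above.
import numpy as np
from FlagEmbedding import BGEM3FlagModel  # pip install -U FlagEmbedding

matrix = np.array(df["embeddings_bge-m3"].tolist(), dtype=np.float32)  # shape: (n_chunks, 1024)

model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)
query = "dissolution de l'Assemblée nationale"  # hypothetical query
query_vec = model.encode([query])["dense_vecs"][0]

# BGE-M3 dense vectors are L2-normalized, so a dot product yields cosine similarity.
scores = matrix @ query_vec
top5 = np.argsort(scores)[::-1][:5]
print(df.iloc[top5][["number", "title", "decision_date"]])
```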
## 🌱 GitHub repository

The MediaTech project is open source! You are free to contribute or to explore the complete code used to build this dataset in the GitHub repository.
## 📄 Source & License

### 📖 Source

The original decisions are published by DILA in its dedicated open data repository and are also available on data.gouv.fr.

### 📜 Licence

Open License (Etalab): this dataset is publicly available and can be reused under the conditions of the Etalab open license.