---
language:
- fr
tags:
- france
- travail
- emploi
- embeddings
- open-data
- government
pretty_name: French Ministry of Labor and Employment website dataset (Travail Emploi)
size_categories:
- 1K<n<10K
license: etalab-2.0
configs:
- config_name: latest
  data_files: data/travail-emploi-latest/*.parquet
  default: true
---
# 🇫🇷 Travail Emploi Website Dataset (French Ministry of Labor and Employment)

This dataset is a processed and embedded version of the public practical information sheets published on the official website of the Ministère du Travail et de l'Emploi (Ministry of Labor and Employment): travail-emploi.gouv.fr. The data are downloaded from the government's SocialGouv GitHub repository.
The dataset provides semantic-ready, structured and chunked data of official content related to employment, labor law and administrative procedures.
These chunks have been vectorized using the BAAI/bge-m3 embedding model to enable semantic search and retrieval tasks.
## 🗂️ Dataset Contents
The dataset is provided in Parquet format and includes the following columns:
| Column Name | Type | Description |
|---|---|---|
| `chunk_id` | str | Unique generated and encoded hash of each chunk. |
| `doc_id` | str | Document identifier from the source site. |
| `chunk_index` | int | Index of the chunk within its original document, starting from 1. |
| `chunk_xxh64` | str | XXH64 hash of the `chunk_text` value. |
| `title` | str | Title of the article. |
| `surtitre` | str | Broader theme (always "Travail-Emploi" in this dataset). |
| `source` | str | Dataset source label (always "travail-emploi" in this dataset). |
| `introduction` | str | Introductory paragraph of the article. |
| `date` | str | Publication or last update date (format: DD/MM/YYYY). |
| `url` | str | URL of the original article. |
| `context` | list[str] | Section names related to the chunk. |
| `text` | str | Textual content extracted and chunked from a section of the article. |
| `chunk_text` | str | Formatted text combining the `title`, `context`, `introduction`, and `text` values, used for embedding. |
| `embeddings_bge-m3` | str | Embedding vector of `chunk_text` computed with BAAI/bge-m3, stored as a JSON array string. |
## 🛠️ Data Processing Methodology
### 📥 1. Field Extraction
The following fields were extracted and/or transformed from the original JSON:
- Basic fields:
  - `sid` (i.e. 'pubID'), `title`, `introduction` (i.e. 'intro'), `date`, and `url` are extracted directly from JSON attributes.
- Generated fields:
  - `chunk_id`: a unique generated and encoded hash for each chunk.
  - `chunk_index`: the index of the chunk within a given document. Each document has a unique `doc_id`.
  - `chunk_xxh64`: the XXH64 hash of the `chunk_text` value, useful for determining whether `chunk_text` has changed from one version to another.
  - `source`: always "travail-emploi" here.
  - `surtitre`: always "Travail-Emploi" here.
- Textual fields:
  - `context`: optional contextual hierarchy (e.g., nested sections).
  - `text`: textual content of the article chunk. This value corresponds to a semantically coherent fragment of textual content extracted from the XML document structure for a same `sid`.
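The generated identifier fields can be illustrated with a short sketch. The exact hashing scheme used by the pipeline is not documented here, so the `make_chunk_id` helper below is hypothetical; `chunk_xxh64` would be computed over `chunk_text` with the `xxhash` package, which is only mentioned in a comment to keep this snippet dependency-free.

```python
import hashlib

def make_chunk_id(doc_id: str, chunk_index: int) -> str:
    # Hypothetical scheme: derive a stable, encoded hash for each chunk
    # from its document id and position. The real pipeline may differ.
    payload = f"{doc_id}:{chunk_index}".encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

# chunk_xxh64 would be computed over chunk_text, e.g. with the xxhash
# package: xxhash.xxh64(chunk_text.encode("utf-8")).hexdigest()
```

The point of such a scheme is determinism: re-running the pipeline on unchanged input produces the same `chunk_id`, so chunks can be upserted rather than duplicated.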
Columns `source` and `surtitre` hold fixed values here because this dataset was built at the same time as the Service Public dataset.
Both datasets were intended to be grouped into a single vector collection, so they were given distinct `source` and `surtitre` values.
### ✂️ 2. Generation of `chunk_text`
The `chunk_text` value combines the article's title and introduction, the chunk's `context` values, and the chunk's textual content (`text`).
This strategy is designed to improve semantic search in document retrieval use cases on administrative procedures.
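As a sketch, assuming a simple newline-joined template (the dataset's exact formatting of `chunk_text` is not specified here), the assembly might look like:

```python
def build_chunk_text(title, context, introduction, text):
    # Hypothetical template: concatenate the article title, the chunk's
    # section hierarchy, the introduction, and the chunk's own text,
    # skipping any empty parts.
    parts = [title, " > ".join(context), introduction, text]
    return "\n".join(p for p in parts if p)
```

Prepending the title and section hierarchy gives each chunk enough standalone context for an embedding model to place it correctly, even when the chunk's own text is generic.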
LangChain's `RecursiveCharacterTextSplitter` was used to produce these chunks (the `text` value), with the following parameters:
- `chunk_size` = 1500 (to maximize compatibility with most LLM context windows)
- `chunk_overlap` = 20
- `length_function` = `len`
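To make the overlap behaviour concrete, here is a minimal pure-Python character-window sketch. It is not LangChain itself, which additionally tries separators such as `"\n\n"` and `"\n"` before falling back to character splits:

```python
def split_text(text: str, chunk_size: int = 1500, chunk_overlap: int = 20):
    # Naive sliding-window illustration of chunk_size / chunk_overlap:
    # consecutive chunks share chunk_overlap characters at the boundary.
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
        start += chunk_size - chunk_overlap
    return chunks
```

The small overlap (20 characters against 1500-character chunks) limits duplication while keeping sentences that straddle a boundary partially visible to both chunks.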
### 🧠 3. Embeddings Generation
Each chunk_text was embedded using the BAAI/bge-m3 model. The resulting embedding vector is stored in the embeddings_bge-m3 column as a string, but can easily be parsed back into a list[float] or NumPy array.
## 📌 Embedding Use Notice

⚠️ The `embeddings_bge-m3` column is stored as a stringified list of floats (e.g., `"[-0.03062629,-0.017049594,...]"`).
To use it as a vector, parse it into a list of floats or a NumPy array. For example, to load the dataset into a dataframe with the `datasets` library:

```python
import json

import pandas as pd
from datasets import load_dataset

# pyarrow must be installed in your Python environment: pip install pyarrow
dataset = load_dataset("AgentPublic/travail-emploi")
df = pd.DataFrame(dataset["train"])
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```
Alternatively, if you have already downloaded all Parquet files from the `data/travail-emploi-latest/` folder:

```python
import json

import pandas as pd

# pyarrow must be installed in your Python environment: pip install pyarrow
# Assumes all Parquet files are located in this folder
df = pd.read_parquet(path="travail-emploi-latest/")
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```
You can then use the dataframe as you wish, for example by inserting its rows into the vector database of your choice.
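As a minimal stand-in for a vector database, a brute-force cosine-similarity search works directly on the parsed dataframe. This sketch assumes the dataframe from the snippets above with embeddings already parsed; the `top_k` name and the query vector are illustrative only:

```python
import numpy as np
import pandas as pd

def top_k(df: pd.DataFrame, query_vec, k: int = 5) -> pd.DataFrame:
    # Brute-force cosine similarity over the parsed embedding column,
    # returning the k most similar rows.
    mat = np.vstack(df["embeddings_bge-m3"].to_numpy())
    q = np.asarray(query_vec, dtype=float)
    sims = (mat @ q) / (np.linalg.norm(mat, axis=1) * np.linalg.norm(q))
    return df.iloc[np.argsort(-sims)[:k]]
```

In practice the query vector would be produced by embedding the user's question with the same BAAI/bge-m3 model, so that query and chunks live in the same vector space.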
## 🌱 GitHub repository

The MediaTech project is open source! You are free to contribute or to inspect the complete code used to build the dataset in the GitHub repository.
## 📚 Source & License
### 🌐 Source:
### 📝 Licence:
Open License (Etalab): this dataset is publicly available and can be reused under the conditions of the Etalab open license.