---
language:
- fr
tags:
- france
- travail
- emploi
- embeddings
- open-data
- government
pretty_name: French Ministry of Labor and Employment's website Dataset (Travail Emploi)
size_categories:
- 1K<n<10K
license: etalab-2.0
configs:
- config_name: latest
data_files: "data/travail-emploi-latest/*.parquet"
default: true
---
---------------------------------------------------------------------------------------------------
### 📢 2026 Survey: Use of MediaTech's public datasets
Do you use this dataset or other datasets from our [MediaTech](https://huggingface.co/collections/AgentPublic/mediatech) collection? Your feedback matters!
Help us improve our public datasets by answering this quick survey (5 min): 👉 https://grist.numerique.gouv.fr/o/albert/forms/gF4hLaq9VvUog6c5aVDuMw/11
Thank you for your contribution! 🙌
---------------------------------------------------------------------------------------------------
# 🇫🇷 Travail Emploi website Dataset (French Ministry of Labor and Employment)
This dataset is a processed and embedded version of the public practical information sheets published on the official website of the **Ministère du Travail et de l'Emploi** (Ministry of Labor and Employment): [travail-emploi.gouv.fr](https://travail-emploi.gouv.fr/).
The data is downloaded from the official government [Social Gouv GitHub repository](https://github.com/SocialGouv/fiches-travail-data).
The dataset provides semantic-ready, structured and chunked data of official content related to employment, labor law and administrative procedures.
These chunks have been vectorized using the [`BAAI/bge-m3`](https://huggingface.co/BAAI/bge-m3) embedding model to enable semantic search and retrieval tasks.
---
## 🗂️ Dataset Contents
The dataset is provided in **Parquet format** and includes the following columns:
| Column Name | Type | Description |
|----------------|------------------|-----------------------------------------------------------------------------|
| `chunk_id` | `str` | Unique generated and encoded hash of each chunk. |
| `doc_id` | `str` | Document identifier from the source site. |
| `chunk_index`  | `int`            | Index of the chunk within its original document, starting from 1.            |
| `chunk_xxh64` | `str` | XXH64 hash of the `chunk_text` value. |
| `title` | `str` | Title of the article. |
| `surtitre`     | `str`            | Broader theme (always "Travail-Emploi" in this dataset).                      |
| `source`       | `str`            | Dataset source label (always "travail-emploi" in this dataset).               |
| `introduction` | `str` | Introductory paragraph of the article. |
| `date` | `str` | Publication or last update date (format: DD/MM/YYYY). |
| `url` | `str` | URL of the original article. |
| `context` | `list[str]` | Section names related to the chunk. |
| `text` | `str` | Textual content extracted and chunked from a section of the article. |
| `chunk_text`   | `str`            | Formatted text combining the `title`, `context`, `introduction` and `text` values, used for embedding. |
| `embeddings_bge-m3` | `str`       | Embedding vector of `chunk_text` using `BAAI/bge-m3`, stored as a JSON array string. |
---
## 🛠️ Data Processing Methodology
### 📥 1. Field Extraction
The following fields were extracted and/or transformed from the original JSON:
- **Basic fields**: `sid` (i.e. 'pubID'), `title`, `introduction` (i.e. 'intro'), `date`, and `url` are extracted directly from the JSON attributes.
- **Generated fields**:
  - `chunk_id`: a unique generated and encoded hash for each chunk.
  - `chunk_index`: the index of the chunk within its document. Each document has a unique `doc_id`.
  - `chunk_xxh64`: the XXH64 hash of the `chunk_text` value. It is useful to determine whether the `chunk_text` value has changed from one version to another (see the short sketch at the end of this section).
  - `source`: always "travail-emploi" here.
  - `surtitre`: always "Travail-Emploi" here.
- **Textual fields**:
  - `context`: Optional contextual hierarchy (e.g., nested section names).
  - `text`: Textual content of the article chunk. This value corresponds to a semantically coherent fragment of text extracted from the XML document structure for a given `sid`.
Columns `source` and `surtitre` are fixed values here because this dataset was built at the same time as the [Service Public dataset](https://huggingface.co/datasets/AgentPublic/service-public).
Both datasets were intended to be grouped together in a single vector collection, so they have different `source` and `surtitre` values.
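For illustration, here is a minimal sketch of how the `chunk_xxh64` value could be computed with the `xxhash` Python library; the exact encoding and digest format used by the pipeline are assumptions.
```python
# Minimal sketch (assumptions: UTF-8 encoding and a hexadecimal digest,
# which may differ from the actual pipeline).
import xxhash

chunk_text = "..."  # the formatted chunk_text value of one chunk
chunk_xxh64 = xxhash.xxh64(chunk_text.encode("utf-8")).hexdigest()
print(chunk_xxh64)
```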
### ✂️ 2. Generation of 'chunk_text'
The value combines the `title` and `introduction` of the article, the `context` values of the chunk, and the textual content of the chunk (`text`).
This strategy is designed to improve semantic search for document retrieval use cases on administrative procedures.
LangChain's `RecursiveCharacterTextSplitter` was used to produce these chunks (the `text` value), as shown in the sketch after this list. The parameters used are:
- `chunk_size` = 1500 (to keep chunks compatible with the context windows of most LLMs)
- `chunk_overlap` = 20
- `length_function` = len
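As a rough sketch (assuming the `langchain-text-splitters` package; this is not the exact pipeline code), the chunking step looks like this:
```python
# Minimal sketch of the chunking step with LangChain's RecursiveCharacterTextSplitter,
# using the parameters listed above (the surrounding pipeline code is an assumption).
from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=1500,
    chunk_overlap=20,
    length_function=len,
)

article_text = "..."  # full textual content of one article section
chunks = splitter.split_text(article_text)  # each element becomes a `text` value
```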
### 🧠 3. Embeddings Generation
Each `chunk_text` was embedded using the [**`BAAI/bge-m3`**](https://huggingface.co/BAAI/bge-m3) model. The resulting embedding vector is stored in the `embeddings_bge-m3` column as a **string**, but can easily be parsed back into a `list[float]` or a NumPy array.
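As an illustration only (the batch size, `max_length` and fp16 settings below are assumptions, not the pipeline's documented values), a batch of `chunk_text` values can be embedded with the `FlagEmbedding` library as follows:
```python
# Minimal sketch: embedding chunk_text values with BAAI/bge-m3 via FlagEmbedding.
# Batching and length options are assumptions, not the pipeline's exact settings.
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)
chunk_texts = ["..."]  # list of chunk_text values
output = model.encode(chunk_texts, batch_size=12, max_length=8192)
dense_vectors = output["dense_vecs"]  # one 1024-dimensional vector per chunk
```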
## 📌 Embedding Use Notice
⚠️ The `embeddings_bge-m3` column is stored as a **stringified list** of floats (e.g., `"[-0.03062629,-0.017049594,...]"`).
To use it as a vector, you need to parse it into a list of floats or a NumPy array. For example, if you want to load the dataset into a DataFrame using the `datasets` library:
```python
import pandas as pd
import json
from datasets import load_dataset
# The pyarrow library must be installed in your Python environment for this example: pip install pyarrow
dataset = load_dataset("AgentPublic/travail-emploi")
df = pd.DataFrame(dataset['train'])
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```
Otherwise, if you have already downloaded all the parquet files from the `data/travail-emploi-latest/` folder:
```python
import pandas as pd
import json
# The pyarrow library must be installed in your Python environment for this example: pip install pyarrow
df = pd.read_parquet(path="travail-emploi-latest/")  # Assuming that all parquet files are located in this folder
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```
You can then use the DataFrame as you wish, for example by inserting its data into the vector database of your choice.
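As a simple example, here is a minimal semantic-search sketch over the parsed embeddings using NumPy; the query string, top-k value and the reuse of `FlagEmbedding` for the query vector are illustrative assumptions.
```python
# Minimal sketch: cosine-similarity search over the parsed embeddings.
# Assumes `df` from the snippets above, with `embeddings_bge-m3` already parsed to lists.
import numpy as np
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)
query_vec = np.array(model.encode(["durée légale du travail"])["dense_vecs"][0])

matrix = np.vstack(df["embeddings_bge-m3"].to_numpy())  # shape: (n_chunks, 1024)
scores = (matrix @ query_vec) / (
    np.linalg.norm(matrix, axis=1) * np.linalg.norm(query_vec)
)
top5 = df.assign(score=scores).nlargest(5, "score")[["title", "url", "score"]]
print(top5)
```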
## 🐱 GitHub repository
The MediaTech project is open source! You are free to contribute or to explore the complete code used to build the dataset in the [GitHub repository](https://github.com/etalab-ia/mediatech).
## 📚 Source & License
### 🔗 Source:
- [Travail Emploi official website](https://travail-emploi.gouv.fr/)
- [Official government Social Gouv GitHub repository](https://github.com/SocialGouv/fiches-travail-data)
### 📄 License:
**Open License (Etalab)** — This dataset is publicly available and can be reused under the conditions of the Etalab open license.