---
language:
- fr
tags:
- france
- data-catalog
- data-gouv
- metadata
- open-data
- government
- etalab
- embeddings
pretty_name: Data.gouv.fr Datasets Catalog
size_categories:
- 10K<n<100K
license: etalab-2.0
configs:
- config_name: latest
  data_files: "data/data-gouv-datasets-catalog-latest/*.parquet"
  default: true
---
---------------------------------------------------------------------------------------------------
### 📢 2026 Survey: Usage of MediaTech's Public Datasets
Do you use this dataset or other datasets from our [MediaTech](https://huggingface.co/collections/AgentPublic/mediatech) collection? Your feedback matters!
Help us improve our public datasets by answering this quick survey (5 min): 👉 https://grist.numerique.gouv.fr/o/albert/forms/gF4hLaq9VvUog6c5aVDuMw/11
Thank you for your contribution! 🙌

---------------------------------------------------------------------------------------------------
# 🇫🇷 Data.gouv.fr Datasets Catalog

This dataset contains a processed and embedded version of the **catalog of datasets published on [data.gouv.fr](https://www.data.gouv.fr)**, the French open data platform.
The original catalog is published by data.gouv.fr on its [dedicated dataset page](https://www.data.gouv.fr/datasets/catalogue-des-donnees-de-data-gouv-fr/).

It includes rich metadata about each public dataset: title, URL, publisher organization, description, tags, licensing, update frequency, usage metrics, and more.
The data is structured and ready for semantic indexing and retrieval.

Each chunk has been embedded using the [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) model, making this catalog ready for search, classification, or retrieval-augmented generation (RAG) pipelines.


## 🗂️ Dataset Contents

The dataset is provided in **Parquet format** and contains the following columns:

| Column Name                   | Type               | Description                                                               |
|------------------------------|-------------------|-----------------------------------------------------------------------------|
| `chunk_id`                   | `str`             | Unique source-based identifier of the chunk.                                |
| `doc_id`                     | `str`             | Document identifier from the source site (slug).                            |
| `chunk_xxh64`                | `str`             | XXH64 hash of the `chunk_text` value.                                       |
| `title`                      | `str`             | Title of the dataset.                                                       |
| `acronym`                    | `str`             | Acronym of the dataset (if available).                                      |
| `url`                        | `str`             | URL of the dataset on data.gouv.fr.                                         |
| `organization`               | `str`             | Name of the associated organization.                                        |
| `organization_id`            | `str`             | Unique ID of the organization.                                              |
| `owner`                      | `str`             | Name of the dataset owner.                                                  |
| `owner_id`                   | `str`             | Unique ID of the dataset owner.                                             |
| `description`                | `str`             | Full description of the dataset.                                            |
| `frequency`                  | `str`             | Update frequency of the dataset.                                            |
| `license`                    | `str`             | License type (e.g. Etalab-2.0).                                             |
| `temporal_coverage_start`    | `str`             | Start of the temporal coverage (if applicable).                             |
| `temporal_coverage_end`      | `str`             | End of the temporal coverage (if applicable).                               |
| `spatial_granularity`        | `str`             | Spatial granularity level (e.g. country, region, etc.).                     |
| `spatial_zones`              | `str`             | Names of the spatial zones covered.                                         |
| `featured`                   | `bool`            | Whether the dataset is featured.                                            |
| `created_at`                 | `str`             | Dataset creation date (standard ISO 8601).                                  |
| `last_modified`              | `str`             | Last modification date (standard ISO 8601).                                 |
| `tags`                       | `str`             | Comma-separated list of tags associated with the dataset.                   |
| `archived`                   | `str`             | Whether the dataset is archived.                                            |
| `resources_count`            | `int`             | Total number of attached resources.                                         |
| `main_resources_count`       | `int`             | Number of primary resources.                                                |
| `resources_formats`          | `str`             | Formats used by the dataset's resources (e.g., CSV, JSON, PDF).             |
| `harvest_backend`            | `str`             | Name of the harvest backend.                                                |
| `harvest_domain`             | `str`             | Domain source of the harvested dataset.                                     |
| `harvest_created_at`         | `str`             | Harvest creation date (standard ISO 8601).                                  |
| `harvest_modified_at`        | `str`             | Harvest last update date (standard ISO 8601).                               |
| `harvest_remote_url`         | `str`             | Remote source URL of the harvested dataset.                                 |
| `quality_score`              | `float`           | Quality score assigned by data.gouv.fr.                                     |
| `metric_discussions`         | `int`             | Number of discussions related to the dataset.                               |
| `metric_reuses`              | `int`             | Number of declared reuses.                                                  |
| `metric_reuses_by_months`    | `str`             | Monthly reuse statistics (as JSON string).                                  |
| `metric_followers`           | `int`             | Number of users following the dataset.                                      |
| `metric_followers_by_months` | `str`             | Monthly follower statistics (as JSON string or number).                     |
| `metric_views`               | `int`             | Number of views.                                                            |
| `metric_resources_downloads` | `float`           | Number of resource downloads.                                               |
| `chunk_text`                 | `str`             | Text used for semantic embedding (title + organization + cropped description).|
| `embeddings_bge-m3`          | `str` (stringified list) | Embedding of `chunk_text` using `BAAI/bge-m3`. Stored as JSON array string. |

---

## 🛠️ Data Processing Methodology

### 📥 1. Field Extraction

The original dataset was retrieved directly from the official [dedicated dataset page](https://www.data.gouv.fr/datasets/catalogue-des-donnees-de-data-gouv-fr/).
This dataset only includes data.gouv.fr datasets whose description is at least 100 characters long, in order to filter out as much noise as possible from incomplete entries.
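
A hedged illustration of this filter (the upstream implementation may differ; `filter_incomplete` and `raw_df` are hypothetical names):

```python
import pandas as pd

def filter_incomplete(raw_df: pd.DataFrame) -> pd.DataFrame:
    # Keep only datasets whose description is at least 100 characters long;
    # rows with a missing description are dropped as well.
    return raw_df[raw_df["description"].str.len() >= 100]
```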

### ✂️ 2. Text Chunking

The `chunk_text` field was created by combining the `title`, the `organization` name, and the `description`.

The `description` was cropped to a maximum length of roughly 1,000 characters using LangChain's `RecursiveCharacterTextSplitter`, with the following parameters:

- `chunk_size` = 1000 (to limit noise as much as possible)
- `length_function` = len

Only the first split was kept, which yields a cropped description of at most roughly 1,000 characters, as illustrated in the sketch below.
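
A minimal sketch of this step (the exact concatenation format of `chunk_text` is an assumption, and `build_chunk_text` is a hypothetical helper):

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Splitter configured as described above; chunk_overlap keeps its default value.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, length_function=len)

def build_chunk_text(title: str, organization: str, description: str) -> str:
    # Keep only the first split, i.e. roughly the first 1,000 characters.
    cropped = splitter.split_text(description)[0] if description else ""
    return f"{title}\n{organization}\n{cropped}"
```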

### 🧠 3. Embedding Generation

Each `chunk_text` was embedded using the [**`BAAI/bge-m3`**](https://huggingface.co/BAAI/bge-m3) model. The resulting embedding vector is stored in the `embeddings_bge-m3` column as a **string**, but can easily be parsed back into a `list[float]` or NumPy array.
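
A minimal sketch of this step, assuming the official `FlagEmbedding` package is used for inference (the batching and hardware setup of the original pipeline are not documented here):

```python
from FlagEmbedding import BGEM3FlagModel  # pip install FlagEmbedding

# Load BAAI/bge-m3; use_fp16 speeds up inference at a small precision cost.
model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)

# Dense embeddings are returned under the "dense_vecs" key (1024 dimensions).
output = model.encode(["Example chunk_text value"])
vector = output["dense_vecs"][0]
```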

## 📌 Embeddings Notice
⚠️ The `embeddings_bge-m3` column is stored as a stringified list (e.g., `"[-0.03062629,-0.017049594,...]"`). 
To use it as a vector, you need to parse it into a list of floats or a NumPy array. For example, to load the dataset into a DataFrame using the `datasets` library:

```python
import json

import pandas as pd
from datasets import load_dataset

# pyarrow must be installed in your environment (pip install pyarrow).
dataset = load_dataset("AgentPublic/data-gouv-datasets-catalog")
df = pd.DataFrame(dataset["train"])
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```

Alternatively, if you have already downloaded all Parquet files from the `data/data-gouv-datasets-catalog-latest/` folder:
```python
import json

import pandas as pd

# pyarrow must be installed in your environment (pip install pyarrow).
# Assumes all Parquet files are located in this folder.
df = pd.read_parquet(path="data-gouv-datasets-catalog-latest/")
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```

You can then use the DataFrame as you wish, for example by inserting its rows into the vector database of your choice.
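
As a hedged sketch (assuming `df` was built as above and `model` is the BGE-M3 encoder from the earlier snippet; the query string is arbitrary), a minimal in-memory cosine-similarity search looks like this:

```python
import numpy as np

# Stack all parsed embeddings into a matrix (one row per chunk) and normalize.
matrix = np.vstack(df["embeddings_bge-m3"].to_numpy())
matrix /= np.linalg.norm(matrix, axis=1, keepdims=True)

# Embed a query with the same model and normalize it as well.
query = model.encode(["qualité de l'air en Île-de-France"])["dense_vecs"][0]
query = query / np.linalg.norm(query)

# On normalized vectors, cosine similarity reduces to a dot product.
scores = matrix @ query
top5 = np.argsort(scores)[::-1][:5]
print(df.iloc[top5][["title", "url"]])
```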

## 🐱 GitHub repository
The MediaTech project is open source! You are welcome to contribute, or to browse the complete code used to build this dataset, in the [GitHub repository](https://github.com/etalab-ia/mediatech).

## 📚 Source & License

### 🔗 Source

- [Data.gouv.fr : Catalogue des données de data.gouv.fr](https://www.data.gouv.fr/datasets/catalogue-des-donnees-de-data-gouv-fr/) 

### 📄 License
**Open License (Etalab)** — This dataset is publicly available and can be reused under the conditions of the Etalab open license.