Update README.md

README.md CHANGED
@@ -19,7 +19,7 @@ license: etalab-2.0
 # 🇫🇷 Data.gouv.fr Datasets Catalog
 
 This dataset contains a processed and embedded version of the **catalog of datasets published on [data.gouv.fr](https://www.data.gouv.fr)**, the French open data platform.
-The dataset was published by data.gouv.fr on
+The dataset was published by data.gouv.fr on its [dedicated dataset page](https://www.data.gouv.fr/datasets/catalogue-des-donnees-de-data-gouv-fr/).
 
 It includes rich metadata about each public dataset: title, URL, publisher organization, description, tags, licensing, update frequency, usage metrics, and more.
 The dataset is semantic-ready and structured for semantic indexing and retrieval.
@@ -70,7 +70,7 @@ The dataset is provided in **Parquet format** and contains the following columns
 | `metric_followers_by_months` | `str` | Monthly follower statistics (as JSON string or number). |
 | `metric_views` | `int` | Number of views. |
 | `metric_resources_downloads` | `float` | Number of resource downloads. |
-| `chunk_text` | `str` | Text used for semantic embedding (title + organization +
+| `chunk_text` | `str` | Text used for semantic embedding (title + organization + cropped description). |
 | `embeddings_bge-m3` | `str` (stringified list) | Embedding of `chunk_text` using `BAAI/bge-m3`. Stored as JSON array string. |
 
 ---
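Since the `embeddings_bge-m3` column stores each vector as a JSON array string rather than a native list, it has to be decoded before any similarity computation. A minimal sketch of that decoding step (the helper name and file path are illustrative, not part of the dataset):

```python
import json

import numpy as np
import pandas as pd


def decode_embeddings(column: pd.Series) -> np.ndarray:
    """Parse a column of JSON array strings into a 2-D float array."""
    return np.array([json.loads(cell) for cell in column], dtype=np.float32)


# Usage (the file name is an assumption; point this at the actual Parquet file):
# df = pd.read_parquet("catalog.parquet")
# vectors = decode_embeddings(df["embeddings_bge-m3"])  # shape: (n_rows, dim)
```

Decoding once up front and keeping the result as a dense `numpy` matrix makes downstream cosine-similarity search a single matrix multiplication.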
@@ -79,14 +79,14 @@ The dataset is provided in **Parquet format** and contains the following columns
 
 ### 📥 1. Field Extraction
 
-The original dataset was retrieved directly from
-This dataset only includes data.gouv.fr datasets that have at least a
+The original dataset was retrieved directly from its official [dataset page](https://www.data.gouv.fr/datasets/catalogue-des-donnees-de-data-gouv-fr/).
+This dataset only includes data.gouv.fr datasets whose description is at least 100 characters long, to remove as much noise as possible from incomplete datasets.
 
 ### ✂️ 2. Text Chunking
 
 The `chunk_text` field was created by combining the `title`, the `organization` name, and the `description`.
 
-The `description` was
+The `description` was cropped to a maximum length of roughly 1,000 characters.
 LangChain's `RecursiveCharacterTextSplitter` was used to crop the description.
 
 The parameters used are:
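The minimum-length filter described above is a one-liner over the catalog. A sketch, assuming the column is named `description` as in the schema:

```python
import pandas as pd


def filter_catalog(df: pd.DataFrame, min_chars: int = 100) -> pd.DataFrame:
    """Keep only rows whose description has at least `min_chars` characters,
    mirroring the noise filter described above. Missing descriptions are
    treated as empty strings and therefore dropped."""
    mask = df["description"].fillna("").str.len() >= min_chars
    return df[mask]
```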
@@ -94,7 +94,7 @@ The parameters used are:
 - `chunk_overlap` = 20
 - `length_function` = len
 
-Then, only the first splitted text was keeped. Which leads to have a cropped description of a maximum of +- 1000
+Then, only the first split chunk was kept, which caps the cropped description at roughly 1,000 characters.
 
 ### 🧠 3. Embedding Generation
 
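The chunking step above can be sketched without LangChain. The real pipeline uses `RecursiveCharacterTextSplitter` and keeps only the first chunk; for that first chunk, the effect is roughly a crop of the description to ~1,000 characters at a word boundary. The function name and the exact field-joining format below are assumptions for illustration:

```python
def build_chunk_text(title: str, organization: str, description: str,
                     chunk_size: int = 1000) -> str:
    """Approximate the README's chunking step.

    The actual pipeline uses LangChain's RecursiveCharacterTextSplitter
    (chunk_overlap=20, length_function=len) and keeps only the first chunk;
    since overlap only affects later chunks, this is close to cropping the
    description to `chunk_size` characters at a whitespace boundary.
    """
    if len(description) > chunk_size:
        # Crop, then back off to the last space so no word is cut in half.
        description = description[:chunk_size].rsplit(" ", 1)[0]
    return f"{title} {organization} {description}"
```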