Update README.md

README.md (CHANGED)
@@ -14,6 +14,10 @@ pretty_name: Data.gouv.fr Datasets Catalog
 size_categories:
 - 10K<n<100K
 license: etalab-2.0
+configs:
+- config_name: latest
+  data_files: "data/data-gouv-datasets-catalog-latest/*.parquet"
+  default: true
 ---
 
 # 🇫🇷 Data.gouv.fr Datasets Catalog
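For reference, a minimal sketch of what the new `latest` config enables (assumptions: the repository id `AgentPublic/data-gouv-datasets-catalog` used further down in this diff, and the default `train` split that `datasets` creates for plain parquet `data_files`):

```python
from datasets import load_dataset

# Select the "latest" config declared in the YAML front matter above.
# Since it is marked `default: true`, omitting the config name loads the same files.
ds = load_dataset("AgentPublic/data-gouv-datasets-catalog", "latest", split="train")
print(ds.column_names)
```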
@@ -101,9 +105,21 @@ Then, only the first splitted text was keeped. Which leads to have a cropped des
 Each `chunk_text` was embedded using the [**`BAAI/bge-m3`**](https://huggingface.co/BAAI/bge-m3) model. The resulting embedding vector is stored in the `embeddings_bge-m3` column as a **string**, but can easily be parsed back into a `list[float]` or NumPy array.
 
 ## 📌 Embeddings Notice
-⚠️ The `embeddings_bge-m3` column is stored as a stringified list (e.g., `"[-0.03062629,-0.017049594,...]"`).
-For example, if you want to load the dataset into a dataframe :
+⚠️ The `embeddings_bge-m3` column is stored as a stringified list (e.g., `"[-0.03062629,-0.017049594,...]"`).
+To use it as a vector, you need to parse it back into a `list[float]` or a NumPy array. For example, to load the dataset into a dataframe with the `datasets` library:
 
+```python
+import pandas as pd
+import json
+from datasets import load_dataset
+# pyarrow must be installed in your Python environment for this example: pip install pyarrow
+
+dataset = load_dataset("AgentPublic/data-gouv-datasets-catalog")
+df = pd.DataFrame(dataset['train'])
+df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
+```
+
+Otherwise, if you have already downloaded the parquet files from the `data/data-gouv-datasets-catalog-latest/` folder:
 ```python
 import pandas as pd
 import json
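As a complement to the parsing snippet above, a small sketch of stacking the parsed embeddings into a single NumPy matrix, which is the shape most similarity-search tools expect (the repo id and column name come from the diff; everything else is illustrative):

```python
import json

import numpy as np
import pandas as pd
from datasets import load_dataset

# Rebuild the dataframe as in the snippet above, then stack the parsed
# embeddings into one (num_rows, embedding_dim) float32 matrix.
dataset = load_dataset("AgentPublic/data-gouv-datasets-catalog")
df = pd.DataFrame(dataset["train"])
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
vectors = np.array(df["embeddings_bge-m3"].tolist(), dtype=np.float32)
print(vectors.shape)
```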
@@ -113,6 +129,8 @@ df = pd.read_parquet(path="data-gouv-datasets-catalog-latest/") # Assuming that
 df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
 ```
 
+You can then use the dataframe as you wish, for example by inserting its contents into the vector database of your choice.
+
 ## 📚 Source & License
 
 ## 🔗 Source :
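The added sentence leaves the choice of vector database open. Purely as an illustration (FAISS is an assumption here, not something the README prescribes), an in-memory index over the stacked embeddings could look like this:

```python
import faiss  # pip install faiss-cpu
import numpy as np

# `vectors` stands for the (num_rows, dim) float32 matrix built in the previous
# sketch; a tiny random stand-in keeps this example runnable on its own.
vectors = np.random.rand(10, 1024).astype("float32")

faiss.normalize_L2(vectors)                  # inner product == cosine similarity after this
index = faiss.IndexFlatIP(vectors.shape[1])  # flat inner-product index
index.add(vectors)

# A query embedded with the same BAAI/bge-m3 model would be searched the same way.
query_vec = vectors[:1]                      # hypothetical query, shape (1, dim)
scores, ids = index.search(query_vec, 5)
print(scores[0], ids[0])
```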