FaheemBEG committed · Commit 57395bb · verified · Parent: dc07041

Update README.md

Files changed (1)
  1. README.md +5 -3
README.md CHANGED
@@ -17,7 +17,7 @@ size_categories:
 license: etalab-2.0
 configs:
 - config_name: latest
-  data_files: data/legi-latest/*.parquet
+  data_files: "data/legi-latest/*/*.parquet"
   default: true
 ---
 
@@ -36,6 +36,8 @@ In this version, only versions of articles that are currently **in force** (`VIG
 
 Each article is chunked and vectorized using the [`BAAI/bge-m3`](https://huggingface.co/BAAI/bge-m3) embedding model, enabling use in **semantic search**, **retrieval-augmented generation (RAG)**, and **legal research** systems for example.
 
+The dataset is split into subfolders by 'category' and 'CODE', making it easier to use for specific use cases.
+
 ---
 
 ## 🗂️ Dataset Contents
@@ -129,12 +131,12 @@ import json
 from datasets import load_dataset
 # The Pyarrow library must be installed in your Python environment for this example. By doing => pip install pyarrow
 
-dataset = load_dataset("AgentPublic/legi")
+dataset = load_dataset("AgentPublic/legi")  # Loading the full dataset
 df = pd.DataFrame(dataset['train'])
 df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
 ```
 
-Otherwise, if you have already downloaded all parquet files from the `data/legi-latest/` folder :
+Otherwise, if you have already downloaded some parquet files from the `data/legi-latest/` folder:
 ```python
 import pandas as pd
 import json
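For context on the `apply(json.loads)` step in the README snippet above: the parquet files store each `embeddings_bge-m3` value as a JSON string, which must be decoded into a Python list before use. A minimal, self-contained sketch of that decoding step (the two synthetic rows below stand in for real dataset rows):

```python
import json
import pandas as pd

# Synthetic stand-in for rows loaded from the dataset's parquet files:
# embeddings are stored as JSON-encoded strings, not lists.
df = pd.DataFrame({
    "text": ["article 1", "article 2"],
    "embeddings_bge-m3": ["[0.1, 0.2, 0.3]", "[0.4, 0.5, 0.6]"],
})

# Decode each JSON string into a Python list of floats.
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)

print(df["embeddings_bge-m3"].iloc[0])  # -> [0.1, 0.2, 0.3]
```

After this step each cell holds a plain list, ready to be passed to a vector index or converted to a NumPy array for similarity search.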