FaheemBEG committed (verified)
Commit 2cb555c · Parent(s): 1bd98da

Update README.md

Files changed (1): README.md (+12 −4)
````diff
@@ -92,16 +92,24 @@ The value of `chunk_text` includes the `title` and the textual content chunk `te
 Each `chunk_text` was embedded using the [**`BAAI/bge-m3`**](https://huggingface.co/BAAI/bge-m3) model.
 The resulting embedding is stored as a JSON stringified array of 1024 floating point numbers in the `embeddings_bge-m3` column.
 
-## 🔄 The chunking doesn't fit your use case?
+## 🎓 Tutorials
+
+### 1. 🔄 The chunking doesn't fit your use case?
 
 If you need to reconstitute the original, un-chunked dataset, you can follow [this tutorial notebook available on our GitHub repository](https://github.com/etalab-ia/mediatech/blob/main/docs/reconstruct_vector_database.ipynb).
 
 ⚠️ The tutorial is only relevant for datasets that were chunked **without overlap**.
 
-## 📌 Embedding Use Notice
+### 2. 🤖 How to load MediaTech's datasets from Hugging Face and use them in a RAG pipeline?
+
+To learn how to load MediaTech's datasets from Hugging Face and integrate them into a Retrieval-Augmented Generation (RAG) pipeline, check out our [step-by-step RAG tutorial available on our GitHub repository!](https://github.com/etalab-ia/mediatech/blob/main/docs/hugging_face_rag_tutorial.ipynb)
+
+### 3. 📌 Embedding Use Notice
 
 ⚠️ The `embeddings_bge-m3` column is stored as a **stringified list** of floats (e.g., `"[-0.03062629,-0.017049594,...]"`).
-To use it as a vector, you need to parse it into a list of floats or NumPy array. For example, if you want to load the dataset into a dataframe by using the `datasets` library:
+To use it as a vector, you need to parse it into a list of floats or a NumPy array.
+
+#### Using the `datasets` library:
 
 ```python
 import pandas as pd
@@ -113,8 +121,8 @@ dataset = load_dataset("AgentPublic/constit")
 df = pd.DataFrame(dataset['train'])
 df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
 ```
+#### Using downloaded local Parquet files:
 
-Otherwise, if you have already downloaded all parquet files from the `data/constit-latest/` folder :
 ```python
 import pandas as pd
 import json
````
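The `json.loads` parsing step described in the README can be exercised end to end on toy data; below is a minimal sketch, assuming only `pandas` and `numpy` are installed. The two-row frame and its 3-dimensional vectors are invented for illustration — the real `embeddings_bge-m3` column holds 1024-dimensional vectors.

```python
import json

import numpy as np
import pandas as pd

# Toy frame standing in for the dataset; the real column stores each
# embedding as a JSON-stringified list of 1024 floats.
df = pd.DataFrame({
    "chunk_text": ["Article 1 ...", "Article 2 ..."],
    "embeddings_bge-m3": [
        "[-0.03062629,-0.017049594,0.5]",
        "[0.1,0.2,0.3]",
    ],
})

# Parse each stringified list into a NumPy float array, as the README's
# df["embeddings_bge-m3"].apply(json.loads) step does, plus a dtype cast.
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(
    lambda s: np.asarray(json.loads(s), dtype=np.float32)
)

print(df["embeddings_bge-m3"].iloc[0].shape)  # (3,) for this toy data
```

After this step each cell is a numeric array, so the column can be fed directly to a vector index or a similarity computation instead of being treated as text.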