FaheemBEG committed
Commit cea715a · verified · 1 Parent(s): 4a901dc

Update README.md

Files changed (1):
  1. README.md +12 -4

README.md CHANGED
@@ -110,16 +110,24 @@ If needed, the Langchain's `RecursiveCharacterTextSplitter` function was used to
  Each `chunk_text` was embedded using the [**`BAAI/bge-m3`**](https://huggingface.co/BAAI/bge-m3) model.
  The resulting embedding vector is stored in the `embeddings_bge-m3` column as a **string**, but can easily be parsed back into a `list[float]` or NumPy array.
 
- ## 🔄 The chunking doesn't fit your use case?
+ ## 🎓 Tutorials
+
+ ### 1. 🔄 The chunking doesn't fit your use case?
 
  If you need to reconstitute the original, un-chunked dataset, you can follow [this tutorial notebook available on our GitHub repository](https://github.com/etalab-ia/mediatech/blob/main/docs/reconstruct_vector_database.ipynb).
 
  ⚠️ The tutorial is only relevant for datasets that were chunked **without overlap**.
 
- ## 📌 Embedding Use Notice
+ ### 2. 🤖 How to load MediaTech's datasets from Hugging Face and use them in a RAG pipeline?
+
+ To learn how to load MediaTech's datasets from Hugging Face and integrate them into a Retrieval-Augmented Generation (RAG) pipeline, check out our [step-by-step RAG tutorial available on our GitHub repository!](https://github.com/etalab-ia/mediatech/blob/main/docs/hugging_face_rag_tutorial.ipynb)
+
+ ### 3. 📌 Embedding Use Notice
 
  ⚠️ The `embeddings_bge-m3` column is stored as a **stringified list** of floats (e.g., `"[-0.03062629,-0.017049594,...]"`).
- To use it as a vector, you need to parse it into a list of floats or NumPy array. For example, if you want to load the dataset into a dataframe by using the `datasets` library:
+ To use it as a vector, you need to parse it into a list of floats or a NumPy array.
+
+ #### Using the `datasets` library:
 
  ```python
  import pandas as pd
@@ -131,8 +139,8 @@ dataset = load_dataset("AgentPublic/dole")
  df = pd.DataFrame(dataset['train'])
  df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
  ```
- 
- Otherwise, if you have already downloaded all parquet files from the `data/dole-latest/` folder:
+ #### Using downloaded local Parquet files:
+ 
  ```python
  import pandas as pd
  import json
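The parsing step that both README snippets rely on (`json.loads` on the `embeddings_bge-m3` column) can be sketched end-to-end without downloading anything. A minimal, self-contained example; the two-row DataFrame and its short embedding strings are illustrative stand-ins, not real `bge-m3` vectors:

```python
import json

import numpy as np
import pandas as pd

# Stand-in for a loaded dataset: the `embeddings_bge-m3` column stores each
# vector as a stringified JSON list, as described in the README. The sample
# values are illustrative; real bge-m3 embeddings are much longer.
df = pd.DataFrame({
    "chunk_text": ["first chunk", "second chunk"],
    "embeddings_bge-m3": ["[-0.03062629,-0.017049594]", "[0.0121,0.0456]"],
})

# Parse each string back into a list[float] ...
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)

# ... then stack the lists into a 2-D NumPy matrix, the shape most
# similarity-search and RAG retrieval code expects.
matrix = np.vstack(df["embeddings_bge-m3"].to_numpy())

print(matrix.shape)  # prints (2, 2): one row per chunk, one column per dimension
```

The same `apply(json.loads)` call works unchanged whether the DataFrame came from `load_dataset` or from locally downloaded Parquet files, since the column is a plain string either way.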