Commit 8e2ee27 (verified) by FaheemBEG · Parent: 1cd99c0

Update README.md

Files changed (1): README.md (+12 −4)
README.md CHANGED

@@ -100,16 +100,24 @@ The Langchain's `RecursiveCharacterTextSplitter` function was used to make these
 
 Each `chunk_text` was embedded using the [**`BAAI/bge-m3`**](https://huggingface.co/BAAI/bge-m3) model. The resulting embedding vector is stored in the `embeddings_bge-m3` column as a **string**, but can easily be parsed back into a `list[float]` or NumPy array.
 
-## 🔄 The chunking doesn't fit your use case?
+## 🎓 Tutorials
+
+### 1. 🔄 The chunking doesn't fit your use case?
 
 If you need to reconstitute the original, un-chunked dataset, you can follow [this tutorial notebook available on our GitHub repository](https://github.com/etalab-ia/mediatech/blob/main/docs/reconstruct_vector_database.ipynb).
 
 ⚠️ The tutorial is only relevant for datasets that were chunked **without overlap**.
 
-## 📌 Embedding Use Notice
+### 2. 🤖 How to load MediaTech's datasets from Hugging Face and use them in a RAG pipeline?
+
+To learn how to load MediaTech's datasets from Hugging Face and integrate them into a Retrieval-Augmented Generation (RAG) pipeline, check out our [step-by-step RAG tutorial available on our GitHub repository!](https://github.com/etalab-ia/mediatech/blob/main/docs/hugging_face_rag_tutorial.ipynb)
+
+### 3. 📌 Embedding Use Notice
 
 ⚠️ The `embeddings_bge-m3` column is stored as a **stringified list** of floats (e.g., `"[-0.03062629,-0.017049594,...]"`).
-To use it as a vector, you need to parse it into a list of floats or NumPy array. For example, if you want to load the dataset into a dataframe by using the `datasets` library:
+To use it as a vector, you need to parse it into a list of floats or a NumPy array.
+
+#### Using the `datasets` library:
 
 ```python
 import pandas as pd
@@ -121,8 +129,8 @@ dataset = load_dataset("AgentPublic/service-public")
 df = pd.DataFrame(dataset['train'])
 df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
 ```
-
-Otherwise, if you have already downloaded all parquet files from the `data/service-public-latest/` folder :
+#### Using downloaded local Parquet files:
+
 ```python
 import pandas as pd
 import json
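For readers landing on this commit, the parsing step the change documents can be sketched end-to-end on toy data — the two rows below are made up, and real `embeddings_bge-m3` vectors have 1024 dimensions rather than 3:

```python
import json

import numpy as np
import pandas as pd

# Toy stand-in for the real dataset: embeddings arrive as stringified lists,
# exactly as in the `embeddings_bge-m3` column (sample rows are invented).
df = pd.DataFrame({
    "chunk_text": ["Comment renouveler un passeport ?", "Déclarer ses impôts en ligne"],
    "embeddings_bge-m3": ["[-0.03062629,-0.017049594,0.0123]", "[0.25,-0.11,0.04]"],
})

# Parse each stringified list into a list[float], then stack into a float32 matrix
# suitable for vectorized similarity search.
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
matrix = np.array(df["embeddings_bge-m3"].tolist(), dtype=np.float32)

print(matrix.shape)  # → (2, 3)
```

The same `json.loads` pattern works whether the dataframe came from `load_dataset` or from local Parquet files.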
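The RAG tutorial linked in the change culminates in a retrieval step; its core — ranking chunks by cosine similarity against a query embedding — can be sketched with NumPy alone. The vectors and chunk texts below are placeholders; a real pipeline would embed the user question with the same `BAAI/bge-m3` model used for the corpus:

```python
import numpy as np

# Placeholder corpus of parsed chunk embeddings (one row per chunk) -- in
# practice these come from the `embeddings_bge-m3` column after json.loads.
corpus = np.array([
    [0.1, 0.9, 0.0],
    [0.8, 0.1, 0.1],
    [0.0, 0.2, 0.9],
], dtype=np.float32)
chunks = ["chunk about passports", "chunk about taxes", "chunk about housing"]

# Placeholder query vector; a real pipeline would embed the question with bge-m3.
query = np.array([0.05, 0.95, 0.05], dtype=np.float32)

def normalize(x):
    """L2-normalize along the last axis, so dot products become cosine similarity."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

scores = normalize(corpus) @ normalize(query)
top = int(np.argmax(scores))
print(chunks[top])  # → chunk about passports
```

The top-ranked chunk is what would be passed to the LLM as context in the tutorial's generation step.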
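The second snippet in the diff is cut off by the hunk boundary. A self-contained sketch of the local-Parquet route — writing two toy shards to a temp folder in place of a real download from `data/service-public-latest/`, and assuming `pyarrow` is installed for pandas Parquet I/O — might look like:

```python
import glob
import json
import os
import tempfile

import pandas as pd

# For a self-contained demo, create two tiny Parquet shards; in practice these
# would be the files downloaded from the data/service-public-latest/ folder.
tmp = tempfile.mkdtemp()
for i in range(2):
    pd.DataFrame({
        "chunk_text": [f"chunk {i}"],
        "embeddings_bge-m3": ["[0.1,0.2,0.3]"],
    }).to_parquet(os.path.join(tmp, f"part-{i}.parquet"))

# The actual loading pattern: glob every shard, read, concatenate, then parse
# the stringified embeddings exactly as with the `datasets` route.
files = sorted(glob.glob(os.path.join(tmp, "*.parquet")))
df = pd.concat((pd.read_parquet(f) for f in files), ignore_index=True)
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```

Reading shard-by-shard with a generator keeps peak memory close to one shard plus the concatenated result.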