FaheemBEG committed
Commit 2ec86df · verified · 1 Parent(s): 83db25b

Update README.md

Files changed (1)
  1. README.md +12 -4
README.md CHANGED
@@ -90,16 +90,24 @@ The Langchain's `RecursiveCharacterTextSplitter` function was used to make these
 
 Each `chunk_text` was embedded using the [**`BAAI/bge-m3`**](https://huggingface.co/BAAI/bge-m3) model. The resulting embedding vector is stored in the `embeddings_bge-m3` column as a **string**, but can easily be parsed back into a `list[float]` or NumPy array.
 
-## 🔄 The chunking doesn't fit your use case?
+## 🎓 Tutorials
+
+### 1. 🔄 The chunking doesn't fit your use case?
 
 If you need to reconstitute the original, un-chunked dataset, you can follow [this tutorial notebook available on our GitHub repository](https://github.com/etalab-ia/mediatech/blob/main/docs/reconstruct_vector_database.ipynb).
 
 ⚠️ The tutorial is only relevant for datasets that were chunked **without overlap**.
 
-## 📌 Embedding Use Notice
+### 2. 🤖 How to load MediaTech's datasets from Hugging Face and use them in a RAG pipeline?
+
+To learn how to load MediaTech's datasets from Hugging Face and integrate them into a Retrieval-Augmented Generation (RAG) pipeline, check out our [step-by-step RAG tutorial available on our GitHub repository!](https://github.com/etalab-ia/mediatech/blob/main/docs/hugging_face_rag_tutorial.ipynb)
+
+### 3. 📌 Embedding Use Notice
 
 ⚠️ The `embeddings_bge-m3` column is stored as a **stringified list** of floats (e.g., `"[-0.03062629,-0.017049594,...]"`).
-To use it as a vector, you need to parse it into a list of floats or NumPy array. For example, if you want to load the dataset into a dataframe by using the `datasets` library:
+To use it as a vector, you need to parse it into a list of floats or a NumPy array.
+
+#### Using the `datasets` library:
 
 ```python
 import pandas as pd
@@ -111,8 +119,8 @@ dataset = load_dataset("AgentPublic/cnil")
 df = pd.DataFrame(dataset['train'])
 df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
 ```
-Otherwise, if you have already downloaded all parquet files from the `data/cnil-latest/` folder :
+#### Using downloaded local Parquet files:
 
 ```python
 import pandas as pd
 import json
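The snippets in this diff parse the `embeddings_bge-m3` column with `json.loads`; once parsed, the retrieval step of a RAG pipeline reduces to a nearest-neighbour search over the resulting vectors. A minimal sketch of that step, assuming a toy in-memory dataframe with 3-dimensional vectors in place of the real 1024-dimensional bge-m3 embeddings, and a raw query vector instead of one produced by the model:

```python
import json

import numpy as np
import pandas as pd

# Toy stand-in for the real dataset: in practice each row of the
# "embeddings_bge-m3" column is a stringified list of floats
# produced by BAAI/bge-m3.
df = pd.DataFrame({
    "chunk_text": ["chunk A", "chunk B", "chunk C"],
    "embeddings_bge-m3": [
        "[1.0, 0.0, 0.0]",
        "[0.0, 1.0, 0.0]",
        "[0.6, 0.8, 0.0]",
    ],
})

# Parse the stringified lists into a float matrix (one row per chunk).
matrix = np.array([json.loads(s) for s in df["embeddings_bge-m3"]])

# Cosine-similarity retrieval. In a real pipeline the query would be
# embedded with the same bge-m3 model; a hand-written vector is used
# here purely for illustration.
query = np.array([1.0, 0.0, 0.0])
scores = matrix @ query / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(query))

# Take the two chunks most similar to the query.
top = df["chunk_text"].iloc[np.argsort(scores)[::-1][:2]].tolist()
print(top)  # → ['chunk A', 'chunk C']
```

For large datasets this brute-force scan is usually replaced by an approximate nearest-neighbour index, but the parsing step shown in the diff stays the same.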