Update README.md

README.md (changed):

```diff
@@ -100,7 +100,7 @@ No recursive split was necessary as legal articles and memos are inherently stru
 If needed, Langchain's `RecursiveCharacterTextSplitter` function was used to make these chunks (`text` value). The parameters used are:
 
 - `chunk_size` = 8000
-- `chunk_overlap` =
+- `chunk_overlap` = 0
 - `length_function` = len
 
 ---
@@ -112,7 +112,7 @@ The resulting embedding vector is stored in the `embeddings_bge-m3` column as a
 
 ## 🔄 The chunking doesn't fit your use case?
 
-
+If you need to reconstitute the original, un-chunked dataset, you can follow [this tutorial notebook available on our GitHub repository](https://github.com/etalab-ia/mediatech/blob/main/docs/reconstruct_vector_database.ipynb).
 
 ⚠️ The tutorial is only relevant for datasets that were chunked **without overlap**.
```
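For orientation, the effect of the splitter parameters (`chunk_size` = 8000, `chunk_overlap` = 0, `length_function` = len) can be sketched with a simplified, hypothetical splitter. Note this is not LangChain's actual algorithm — `RecursiveCharacterTextSplitter` first tries to split on separators such as paragraphs and sentences before falling back to fixed sizes — it only illustrates what the size and overlap parameters mean:

```python
def naive_split(text: str, chunk_size: int = 8000, chunk_overlap: int = 0,
                length_function=len) -> list[str]:
    """Hypothetical, simplified stand-in for a text splitter:
    tile the text into windows of at most chunk_size characters,
    stepping by chunk_size - chunk_overlap."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, length_function(text), step)]

chunks = naive_split("x" * 20_000)
# Every chunk respects the size limit...
assert all(len(c) <= 8000 for c in chunks)
# ...and with chunk_overlap = 0 the chunks tile the text exactly, with
# no characters shared between consecutive chunks.
assert "".join(chunks) == "x" * 20_000
```

With `chunk_overlap` = 0, concatenating a document's chunks in order reproduces the document — the property the warning below relies on.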
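Because the chunks were made without overlap, reconstituting the original documents amounts to concatenating each document's chunks in order. A minimal sketch of that idea, assuming hypothetical `(document_id, chunk_index, text)` rows — the actual column names and procedure are those of the tutorial notebook, which remains authoritative:

```python
from collections import defaultdict

# Hypothetical chunk rows: (document_id, chunk_index, text).
# These identifiers are assumptions for illustration, not the dataset's schema.
rows = [
    ("doc_a", 1, "second part."),
    ("doc_a", 0, "First part, "),
    ("doc_b", 0, "Whole memo."),
]

# Group chunks by document.
by_doc = defaultdict(list)
for doc_id, index, text in rows:
    by_doc[doc_id].append((index, text))

# With zero overlap, sorting by chunk index and concatenating
# restores each original document.
originals = {doc_id: "".join(t for _, t in sorted(parts))
             for doc_id, parts in by_doc.items()}
assert originals["doc_a"] == "First part, second part."
```

If the chunks had been produced with a non-zero overlap, this concatenation would duplicate the overlapping characters — which is why the tutorial only applies to datasets chunked without overlap.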