LangChain's `RecursiveCharacterTextSplitter` function was used to create these chunks, which correspond to the `text` value. The parameters used are:

- `chunk_size` = 1500
- `chunk_overlap` = 0
- `length_function` = len

The value of `chunk_text` includes the `title` and the textual content chunk `text`. This strategy is designed to improve document search.
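As a rough sketch, the zero-overlap chunking described above can be approximated in plain Python. This is not LangChain's actual implementation (which recursively tries a hierarchy of separators), and the `make_chunk_text` helper is hypothetical — it only illustrates how `title` and `text` are combined into `chunk_text`:

```python
def split_without_overlap(text, chunk_size=1500, length_function=len):
    """Greedy zero-overlap splitter: packs paragraphs into chunks of at
    most `chunk_size` characters, hard-cutting any paragraph that alone
    exceeds the limit. Approximates a recursive character splitter
    configured with chunk_overlap=0."""
    chunks, current = [], ""
    for part in text.split("\n\n"):
        # Hard-cut any paragraph that alone exceeds the chunk size.
        while length_function(part) > chunk_size:
            if current:
                chunks.append(current)
                current = ""
            chunks.append(part[:chunk_size])
            part = part[chunk_size:]
        candidate = part if not current else current + "\n\n" + part
        if length_function(candidate) <= chunk_size:
            current = candidate
        else:
            chunks.append(current)
            current = part
    if current:
        chunks.append(current)
    return chunks


def make_chunk_text(title, text):
    # Hypothetical helper: `chunk_text` prepends the document title to
    # each chunk to improve retrieval, as described above.
    return f"{title}\n\n{text}"


chunks = split_without_overlap("para one\n\n" + "x" * 2000, chunk_size=1500)
# Every chunk is at most 1500 characters, and no text is duplicated.
```

Because the overlap is zero, concatenating the chunks of one document recovers its original text exactly.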
The resulting embedding is stored as a JSON stringified array of 1024 floating point numbers.
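Since each embedding is stored as a JSON-stringified array, it can be parsed back into a list of floats with the standard `json` module. A minimal sketch (the function name and the round-trip example are illustrative, not part of the dataset's tooling):

```python
import json


def parse_embedding(raw: str) -> list[float]:
    """Parse a JSON-stringified embedding back into a list of floats."""
    vector = json.loads(raw)
    if len(vector) != 1024:
        raise ValueError(f"expected 1024 dimensions, got {len(vector)}")
    return [float(x) for x in vector]


# Round-trip a dummy 1024-dimensional vector.
raw = json.dumps([0.0] * 1024)
vector = parse_embedding(raw)
```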
## 🔄 The chunking doesn't fit your use case?

If you need to reconstitute the original, un-chunked dataset, you can follow [this tutorial notebook available on our GitHub repository](https://github.com/etalab-ia/mediatech/blob/main/docs/reconstruct_vector_database.ipynb).

⚠️ The tutorial is only relevant for datasets that were chunked **without overlap**.
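Because the chunks were produced with `chunk_overlap` = 0, reconstruction amounts to concatenating each document's chunks in order. A minimal sketch of the idea — the field names `doc_id`, `chunk_index`, and `text` are assumptions here; refer to the notebook for the dataset's actual schema:

```python
from collections import defaultdict


def reconstruct_documents(rows):
    """Group chunk rows by document and concatenate their text in chunk
    order. Only correct when chunks were produced with zero overlap."""
    by_doc = defaultdict(list)
    for row in rows:
        # Field names are illustrative, not the dataset's actual schema.
        by_doc[row["doc_id"]].append((row["chunk_index"], row["text"]))
    return {
        doc_id: "".join(text for _, text in sorted(parts))
        for doc_id, parts in by_doc.items()
    }


rows = [
    {"doc_id": "a", "chunk_index": 1, "text": "world"},
    {"doc_id": "a", "chunk_index": 0, "text": "hello "},
]
docs = reconstruct_documents(rows)
# docs maps each document id to its reconstituted full text.
```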