FaheemBEG committed
Commit 6381eae (verified), parent: 074e78b

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED

@@ -100,7 +100,7 @@ No recursive split was necessary as legal articles and memos are inherently stru
 If needed, the Langchain's `RecursiveCharacterTextSplitter` function was used to make these chunks (`text` value). The parameters used are :

 - `chunk_size` = 8000
-- `chunk_overlap` = 400
+- `chunk_overlap` = 0
 - `length_function` = len

 ---
@@ -112,7 +112,7 @@ The resulting embedding vector is stored in the `embeddings_bge-m3` column as a

 ## 🔄 The chunking doesn't fit your use case?

-[**SOON AVAILABLE FOR THIS DATASET**] ~~If you need to reconstitute the original, un-chunked dataset, you can follow [this tutorial notebook available on our GitHub repository](https://github.com/etalab-ia/mediatech/blob/main/docs/reconstruct_vector_database.ipynb).~~
+If you need to reconstitute the original, un-chunked dataset, you can follow [this tutorial notebook available on our GitHub repository](https://github.com/etalab-ia/mediatech/blob/main/docs/reconstruct_vector_database.ipynb).

 ⚠️ The tutorial is only relevant for datasets that were chunked **without overlap**.
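For context, the splitting the diff refers to (`chunk_size` = 8000, `chunk_overlap` now 0, `length_function` = len) behaves roughly like the simplified sketch below. This is a hand-rolled approximation for illustration, not LangChain's actual `RecursiveCharacterTextSplitter` code; the function name and the default separator list are assumptions.

```python
def split_text(text, chunk_size=8000, separators=("\n\n", "\n", " ", "")):
    """Split `text` into chunks of at most `chunk_size` characters,
    preferring the earliest separator that yields small-enough pieces.
    No overlap between chunks (chunk_overlap = 0), and plain `len` is
    the length function, matching the parameters in the README.

    NOTE: simplified stand-in for LangChain's RecursiveCharacterTextSplitter.
    """
    if len(text) <= chunk_size:
        return [text]
    sep = separators[0]
    rest = separators[1:] if len(separators) > 1 else separators
    # An empty separator means "split into individual characters".
    parts = text.split(sep) if sep else list(text)
    chunks, current = [], ""
    for part in parts:
        candidate = current + sep + part if current else part
        if len(candidate) <= chunk_size:
            # Greedily pack parts into the current chunk.
            current = candidate
        else:
            if current:
                chunks.append(current)
            if len(part) > chunk_size:
                # A single part is still too big: recurse with the
                # next, finer-grained separator.
                chunks.extend(split_text(part, chunk_size, rest))
                current = ""
            else:
                current = part
    if current:
        chunks.append(current)
    return chunks
```

With overlap set to 0, every character of the source text lands in exactly one chunk, which is the precondition the reconstruction tutorial warns about.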