Update README.md
## Introduction
BertChunker is a trained chunker for splitting text into chunks for RAG. It was trained on top of [MiniLM-L6-H384-uncased](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) with a classifier head that predicts the start token of each chunk. Training took 10 minutes on an Nvidia P40 GPU with a 50 MB synthesized dataset.
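The chunk-start prediction idea can be illustrated with a small standalone sketch: given per-token "is chunk start" labels (here hard-coded; in BertChunker they would come from the classifier head), the sequence is cut at every flagged token. The function name and inputs are hypothetical, not BertChunker's actual internals.

```python
def split_by_starts(tokens, is_start):
    # Cut the token sequence wherever a token is flagged
    # as the start of a new chunk.
    chunks, current = [], []
    for tok, start in zip(tokens, is_start):
        if start and current:
            chunks.append(current)
            current = []
        current.append(tok)
    if current:
        chunks.append(current)
    return chunks

tokens = ["RAG", "needs", "chunks", ".", "Each", "chunk", "is", "retrieved", "."]
starts = [1, 0, 0, 0, 1, 0, 0, 0, 0]
print(split_by_starts(tokens, starts))
# → [['RAG', 'needs', 'chunks', '.'], ['Each', 'chunk', 'is', 'retrieved', '.']]
```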
This repo includes the model checkpoint, the BertChunker class definition file, and all other files needed.
```python
for i, c in enumerate(chunks):
    print(f'-----chunk: {i}------------')
    print(c)

# chunk the text faster: a fixed context window is used, and batchsize
# is the number of windows run per batch
print('----->Here is the result of fast chunk method<------:')
chunks = model.chunk_text_fast(text, tokenizer, batchsize=20, threshold=0)
```
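The fixed-context-window batching behind the fast method can be sketched in isolation as follows. This is a pure-Python illustration of the windowing scheme described in the comment above; the helper names, the window size, and the use of plain integers as stand-ins for token ids are all assumptions, not BertChunker's actual implementation.

```python
def make_windows(tokens, window_size):
    # Split a token sequence into fixed-size context windows;
    # the last window may be shorter than window_size.
    return [tokens[i:i + window_size] for i in range(0, len(tokens), window_size)]

def make_batches(windows, batchsize):
    # Group the windows so that `batchsize` windows are run
    # through the model per forward pass.
    return [windows[i:i + batchsize] for i in range(0, len(windows), batchsize)]

tokens = list(range(10))            # stand-in for token ids
windows = make_windows(tokens, 4)   # [[0,1,2,3], [4,5,6,7], [8,9]]
batches = make_batches(windows, 2)  # 2 batches: [w0, w1] and [w2]
print(windows)
print(len(batches))
```

With `batchsize=20` as in the call above, every forward pass would score up to 20 such windows at once, which is where the speedup over per-window inference comes from.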