Update README.md

README.md CHANGED

@@ -11,7 +11,7 @@ pipeline_tag: token-classification
 
 ## Introduction
 
-BertChunker is a text chunker based on BERT with a classifier head to predict the start token of chunks (for use in RAG, etc). It was finetuned on [
+BertChunker is a text chunker based on BERT with a classifier head that predicts the start token of each chunk (for use in RAG, etc.). It was finetuned on [nreimers/MiniLM-L6-H384-uncased](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). The whole training took 10 minutes on an Nvidia P40 GPU with a 50 MB synthesized dataset.
 
 This repo includes the model checkpoint, the BertChunker class definition file, and all the other files needed.
 
@@ -25,7 +25,7 @@ from modeling_bertchunker import BertChunker
 
 # load bert tokenizer
 tokenizer = AutoTokenizer.from_pretrained(
-    "
+    "nreimers/MiniLM-L6-H384-uncased",
     padding_side="right",
     model_max_length=255,
     trust_remote_code=True,
 
@@ -33,7 +33,7 @@ tokenizer = AutoTokenizer.from_pretrained(
 
 # load MiniLM-L6-H384-uncased bert config
 config = AutoConfig.from_pretrained(
-    "
+    "nreimers/MiniLM-L6-H384-uncased",
    trust_remote_code=True,
 )
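For convenience, the two changed snippets assemble into one runnable loading step. This is a sketch based only on what the diff shows: the repo id and keyword arguments come from the changed lines, and the `transformers` imports are assumed from the hunk headers (`AutoTokenizer`, `AutoConfig`); the subsequent `BertChunker` construction from `modeling_bertchunker` is not reproduced here.

```python
from transformers import AutoConfig, AutoTokenizer

# load bert tokenizer (arguments as in the updated README)
tokenizer = AutoTokenizer.from_pretrained(
    "nreimers/MiniLM-L6-H384-uncased",
    padding_side="right",
    model_max_length=255,
    trust_remote_code=True,
)

# load MiniLM-L6-H384-uncased bert config
config = AutoConfig.from_pretrained(
    "nreimers/MiniLM-L6-H384-uncased",
    trust_remote_code=True,
)
```

The tokenizer and config would then be passed to the `BertChunker` class defined in `modeling_bertchunker.py`, per the import shown in the diff's hunk header.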