tim1900 committed · Commit 5111de6 (verified) · 1 Parent(s): b2d6e55

Update README.md

Files changed (1): README.md (+4, -4)
README.md CHANGED
@@ -5,13 +5,13 @@ language:
 - zh
 pipeline_tag: token-classification
 ---
-# BertChunker
+# bert-chunker
 
 [Paper](https://github.com/jackfsuia/BertChunker/blob/main/main.pdf) | [Github](https://github.com/jackfsuia/BertChunker)
 
 ## Introduction
 
-BertChunker is a text chunker based on BERT with a classifier head that predicts the start token of chunks (for use in RAG, etc.); using a sliding window, it cuts documents of any size into chunks. It was finetuned on [nreimers/MiniLM-L6-H384-uncased](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). The whole training lasted 10 minutes on an Nvidia P40 GPU on a 50 MB synthesized dataset.
+bert-chunker is a text chunker based on BERT with a classifier head that predicts the start token of chunks (for use in RAG, etc.); using a sliding window, it cuts documents of any size into chunks. It was finetuned on [nreimers/MiniLM-L6-H384-uncased](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). The whole training lasted 10 minutes on an Nvidia P40 GPU on a 50 MB synthesized dataset.
 
 This repo includes the model checkpoint, the BertChunker class definition file, and all other files needed.
 
@@ -25,7 +25,7 @@ from modeling_bertchunker import BertChunker
 
 # load bert tokenizer
 tokenizer = AutoTokenizer.from_pretrained(
-    "tim1900/BertChunker",
+    "tim1900/bert-chunker",
     padding_side="right",
     model_max_length=255,
     trust_remote_code=True,
@@ -33,7 +33,7 @@ tokenizer = AutoTokenizer.from_pretrained(
 
 # load MiniLM-L6-H384-uncased bert config
 config = AutoConfig.from_pretrained(
-    "tim1900/BertChunker",
+    "tim1900/bert-chunker",
     trust_remote_code=True,
 )
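The README being edited describes the model's core mechanism: a classifier head scores each token as a potential chunk start, and a sliding window lets the model cover documents longer than its 255-token context. A minimal pure-Python sketch of the start-token splitting step (hypothetical function and parameter names, assuming per-token start probabilities have already been produced by the classifier — this is an illustration, not the repo's actual implementation):

```python
def chunk_by_start_scores(tokens, start_probs, threshold=0.5):
    """Split a token list into chunks wherever the classifier's
    start-of-chunk probability exceeds `threshold`.

    In the real model, `start_probs` would come from running each
    sliding window of up to 255 tokens through the BERT classifier
    head; here it is just a list of floats aligned with `tokens`.
    """
    chunks, current = [], []
    for tok, p in zip(tokens, start_probs):
        # A high start score on a token begins a new chunk, unless
        # we are at the very first token (nothing to close yet).
        if p > threshold and current:
            chunks.append(current)
            current = []
        current.append(tok)
    if current:
        chunks.append(current)
    return chunks

# Example: tokens 0 and 2 are predicted chunk starts.
print(chunk_by_start_scores(["a", "b", "c", "d"], [0.9, 0.1, 0.8, 0.2]))
# → [['a', 'b'], ['c', 'd']]
```

The sliding window only affects how `start_probs` is computed (overlapping windows are scored independently and their predictions merged); the splitting logic above stays the same regardless of document length.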