tim1900 committed · Commit 6610bc8 · verified · 1 Parent(s): 8af7130

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -11,7 +11,7 @@ pipeline_tag: token-classification
 
 bert-chunker-3 is a text chunker based on BertForTokenClassification that predicts the start token of each chunk (for use in RAG, etc.); using a sliding window, it cuts documents of any size into chunks. We see it as an alternative to [semantic chunker](https://github.com/FullStackRetrieval-com/RetrievalTutorials/blob/main/tutorials/LevelsOfTextSplitting/5_Levels_Of_Text_Splitting.ipynb), but notably it works not only on structured texts but also on **unstructured and messy texts**.
 
-Different from [bc-2](https://huggingface.co/tim1900/bert-chunker-2) and [bc](https://huggingface.co/tim1900/bert-chunker), to overcome data distribution shift, our training data were labeled by an LLM and the training pipeline was improved, so it is **more stable**.
+Different from [bc-2](https://huggingface.co/tim1900/bert-chunker-2) and [bc](https://huggingface.co/tim1900/bert-chunker), to overcome data distribution shift, our training data were labeled by an LLM and the training pipeline was improved, so it is **more stable** and has **competitive** [**performance**](#evaluation).
 
 Updates:
 - 2025.5.12: an experimental script that **supports specifying the maximum tokens per chunk** is now available [below](#experimental); evaluation is at [**evaluation**](#evaluation).
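The sliding-window chunking idea the README describes can be sketched as follows. This is a minimal illustration only, not the model's actual inference code: a toy rule-based predictor stands in for the BERT token classifier, and the function and parameter names (`chunk_tokens`, `predict_starts`, `window`, `stride`) are hypothetical.

```python
def chunk_tokens(tokens, predict_starts, window=8, stride=4):
    """Cut `tokens` into chunks at predicted chunk-start positions.

    `predict_starts(window_tokens)` plays the role of the token
    classifier: it returns one 0/1 label per token in the window
    (1 = this token starts a new chunk). The sliding window lets a
    document of any length be scanned with a fixed window size.
    """
    is_start = [0] * len(tokens)
    # Slide the window across the document; overlapping windows
    # (stride < window) give every token at least one prediction.
    for begin in range(0, max(1, len(tokens) - window + 1), stride):
        labels = predict_starts(tokens[begin:begin + window])
        for offset, label in enumerate(labels):
            if label:
                is_start[begin + offset] = 1
    # Cut the token sequence at every predicted start (except index 0).
    cuts = [i for i, s in enumerate(is_start) if s and i > 0]
    bounds = [0] + cuts + [len(tokens)]
    return [tokens[a:b] for a, b in zip(bounds, bounds[1:])]

# Toy stand-in for the classifier: treat "##" tokens as chunk starts.
toy_predict = lambda ws: [1 if t == "##" else 0 for t in ws]

doc = ["intro", "text", "##", "topic", "one", "##", "topic", "two"]
print(chunk_tokens(doc, toy_predict, window=4, stride=2))
# → [['intro', 'text'], ['##', 'topic', 'one'], ['##', 'topic', 'two']]
```

In the real model the predictor would be a BertForTokenClassification forward pass over the window's token IDs; the overlap between adjacent windows is what allows consistent start predictions near window edges.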