Update README.md
README.md
CHANGED

@@ -19,7 +19,6 @@ This model is based on [ModernBERT Japanese 130M](https://huggingface.co/sbintui
 - **Superior Tokenization Efficiency**: Requires 24.0% fewer tokens per document compared to BERT Base
 - **Faster Training**: Completes training 39% faster than BERT Base
 - **Improved Processing Speed**: 1.65× faster during training and 1.66× faster during inference
-- **Extended Sequence Length**: Supports sequences up to 8,192 tokens (vs. 512 in BERT)
 - **Comparable Classification Performance**: Achieves similar or slightly better performance across most conditions

 ## How to Use