Tags: Text Classification · Transformers · Safetensors · English · Korean · modernbert · fill-mask · Eval Results · text-embeddings-inference
Instructions for using skt/A.X-Encoder-base with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use skt/A.X-Encoder-base with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="skt/A.X-Encoder-base")

# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("skt/A.X-Encoder-base")
model = AutoModelForMaskedLM.from_pretrained("skt/A.X-Encoder-base")
```
- Notebooks
- Google Colab
- Kaggle
Set tokenizer "model_max_length" property to 16384
This information differs from the [README](https://huggingface.co/skt/A.X-Encoder-base#ax-encoder-highlights).
- tokenizer_config.json +1 -1

```diff
@@ -312,7 +312,7 @@
   "eos_token": "<\\s>",
   "extra_special_tokens": {},
   "mask_token": "<mask>",
-  "model_max_length":
+  "model_max_length": 16384,
   "pad_token": "<pad>",
   "sep_token": "<sep>",
   "strip_accents": null,
```
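The commit above sets `model_max_length` in `tokenizer_config.json` to 16384; Transformers reads this field when loading the tokenizer and uses it as the default maximum input length. A minimal sketch of applying the same change to a local copy of the file with the standard `json` module — the file path is a hypothetical local location, and only the keys visible in the hunk above are included (the real file has many more):

```python
import json

# Hypothetical path to a local checkout's tokenizer_config.json.
path = "tokenizer_config.json"

# Fragment with the keys shown in the diff hunk; the real file is larger.
config = {
    "eos_token": "<\\s>",
    "extra_special_tokens": {},
    "mask_token": "<mask>",
    "pad_token": "<pad>",
    "sep_token": "<sep>",
    "strip_accents": None,
}

# The same change as this commit: default tokenization limit of 16384 tokens.
config["model_max_length"] = 16384

with open(path, "w", encoding="utf-8") as f:
    json.dump(config, f, indent=2, ensure_ascii=False)
```

After this edit, `AutoTokenizer.from_pretrained` on the local directory would pick up 16384 as `tokenizer.model_max_length`, so calls with `truncation=True` cut inputs at that length by default.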