Instructions to use BM-K/KoMiniLM-68M with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Transformers
How to use BM-K/KoMiniLM-68M with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="BM-K/KoMiniLM-68M")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("BM-K/KoMiniLM-68M")
model = AutoModelForSequenceClassification.from_pretrained("BM-K/KoMiniLM-68M")
```

- Notebooks
  - Google Colab
  - Kaggle
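When the model is loaded directly rather than through a pipeline, its classification head returns raw logits that still need to be turned into probabilities and a label. A minimal sketch of that post-processing step, using dummy logits in place of the model's real output and the default `LABEL_0`/`LABEL_1` names (the actual labels depend on the checkpoint's `id2label` config):

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Dummy logits standing in for model(**tokenizer(text, return_tensors="pt")).logits
logits = [1.2, -0.8]
probs = softmax(logits)

# Default label names used when a checkpoint defines no id2label mapping.
labels = ["LABEL_0", "LABEL_1"]
prediction = labels[probs.index(max(probs))]
```

This mirrors what `pipeline("text-classification", ...)` does internally before returning its `{"label": ..., "score": ...}` dictionaries.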
Commit History
- add tokenizer (158c272)
- add model (606f60f)
- Update tokenizer_config.json (baf8280)
- update toknenizer (4575ed8, BM-K)
- update tokenizer config (00729c3, BM-K)
- upload tokenizer.json (2412c4e, BM-K)