# Korean XLM-RoBERTa Classifier (HF Compatible)

This model was converted from a custom classifier to be compatible with the Hugging Face Inference API.
## Model Info

- Base Model: FacebookAI/xlm-roberta-base
- Task: text-classification
- Language: Korean
- Labels: 66
## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load directly from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("Halftotter/korean-xlm-roberta-classifier")
model = AutoModelForSequenceClassification.from_pretrained("Halftotter/korean-xlm-roberta-classifier")

# Predict ("원본 투입물명" ≈ "original input material name")
inputs = tokenizer("원본 투입물명", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
print(predictions)
```
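The snippet above prints the full probability distribution over the 66 classes. A minimal sketch of turning that distribution into a single prediction is shown below; it uses random logits in place of `outputs.logits` so it runs standalone, and the `id2label` lookup in the comment assumes the converted config carries a label mapping.

```python
import torch

# Stand-in for outputs.logits: one logit per class (66 for this checkpoint).
logits = torch.randn(1, 66)

# Softmax normalizes the logits into class probabilities.
probs = torch.nn.functional.softmax(logits, dim=-1)

# The predicted class is the index with the highest probability.
pred_id = int(probs.argmax(dim=-1))
confidence = float(probs[0, pred_id])

# If the config defines a label mapping (assumption), resolve the name with:
# label = model.config.id2label[pred_id]
print(pred_id, round(confidence, 3))
```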