Korean XLM-RoBERTa Classifier

이 λͺ¨λΈμ€ **xlm-roberta-base**λ₯Ό 기반으둜 νŒŒμΈνŠœλ‹λœ ν•œκ΅­μ–΄/μ˜μ–΄ 이쀑언어 ν…μŠ€νŠΈ λΆ„λ₯˜ λͺ¨λΈμž…λ‹ˆλ‹€.
총 66개 라벨 λΆ„λ₯˜κ°€ κ°€λŠ₯ν•˜λ©°, 라벨 μ •λ³΄λŠ” label_mapping.json νŒŒμΌμ—μ„œ 확인할 수 μžˆμŠ΅λ‹ˆλ‹€.


📂 Files in Repository

  • config.json: model configuration
  • tokenizer.json / tokenizer_config.json: tokenizer files
  • special_tokens_map.json: special-token mapping
  • pytorch_model.bin or model.safetensors (only one is used; safetensors is recommended)
  • label_mapping.json: index ↔ label mapping
  • classifier.pkl, label_embeddings.pkl: auxiliary classifier and label embeddings
  • label_independence_analysis.py: analysis script (supplementary material)
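Prediction indices can be turned into human-readable labels with label_mapping.json. The sketch below is an assumption about the file layout: it treats the file as a JSON object mapping string indices to label names (e.g. `{"0": "..."}`). Check the actual file, since its key/value orientation is not documented here.

```python
import json

def load_id2label(path="label_mapping.json"):
    # Hypothetical layout: {"0": "label_a", "1": "label_b", ...}
    with open(path, encoding="utf-8") as f:
        mapping = json.load(f)
    # JSON keys are always strings; convert them to int indices
    return {int(k): v for k, v in mapping.items()}
```

With this helper, a predicted `label_id` from the usage snippet below can be resolved as `load_id2label()[label_id]`.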

🚀 Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "Halfotter/home"   # Hugging Face repo path

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Example input ("테스트 문장" means "test sentence")
inputs = tokenizer("테스트 문장", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert logits to probabilities with softmax
probs = torch.nn.functional.softmax(outputs.logits, dim=-1)
label_id = torch.argmax(probs, dim=-1).item()

print("Predicted Label ID:", label_id)
```
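For a 66-way classifier, the top-k most probable labels are often more informative than a single argmax. A minimal helper that works on the `(1, num_labels)` logits tensor returned by the model above (here demonstrated with a synthetic tensor, so no model download is required):

```python
import torch

def top_k_predictions(logits: torch.Tensor, k: int = 3):
    # logits: shape (1, num_labels), as in outputs.logits above
    probs = torch.nn.functional.softmax(logits, dim=-1).squeeze(0)
    values, indices = torch.topk(probs, k)
    # Return (label_id, probability) pairs, highest probability first
    return [(int(i), float(v)) for i, v in zip(indices, values)]

# Synthetic example with 3 labels
logits = torch.tensor([[0.0, 2.0, 1.0]])
print(top_k_predictions(logits, k=2))
```

Each returned label ID can then be mapped to its name via label_mapping.json.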