DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
Paper: arXiv:1910.01108
This model performs sentiment analysis on movie reviews. It is based on DistilBERT from the HuggingFace Transformers library and fine-tuned on the IMDb dataset.
```python
from transformers import pipeline

# Create the pipeline
classifier = pipeline("sentiment-analysis", model="aiegoo/pytorch-book")

# Run sentiment analysis
result = classifier("This movie is amazing!")
print(result)
# [{'label': 'POSITIVE', 'score': 0.9998}]
```
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("aiegoo/pytorch-book")
model = AutoModelForSequenceClassification.from_pretrained("aiegoo/pytorch-book")

# Tokenize and predict
inputs = tokenizer("I love this movie!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
predicted_id = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```
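The model's forward pass returns raw logits rather than probabilities. A minimal sketch of converting logits into a label and confidence score, using a plain softmax (the `[NEGATIVE, POSITIVE]` label order and the `id2label` mapping below are illustrative assumptions; in practice read them from `model.config.id2label`):

```python
import math

# Example raw logits from the classification head; the [NEGATIVE, POSITIVE]
# ordering here is an assumption -- check model.config.id2label for the real mapping.
logits = [-2.1, 3.4]

# Numerically stable softmax: subtract the max before exponentiating
m = max(logits)
exps = [math.exp(x - m) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

id2label = {0: "NEGATIVE", 1: "POSITIVE"}  # hypothetical mapping for illustration
label_id = probs.index(max(probs))
print(id2label[label_id], round(probs[label_id], 4))
```

This mirrors what the `pipeline` helper does internally when it reports a `score` for each prediction.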
Base model: distilbert-base-uncased-finetuned-sst-2-english

This model was created as part of the PyTorch Book learning curriculum.
MIT License - free to use.
Created with ❤️ for learning PyTorch and Transformers