How to use alosof/camembert-sentiment-allocine with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="alosof/camembert-sentiment-allocine")

# Or load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("alosof/camembert-sentiment-allocine")
model = AutoModelForSequenceClassification.from_pretrained("alosof/camembert-sentiment-allocine")
```
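For example, the pipeline can be applied directly to a French review; the exact label names in the output depend on the model's id2label configuration, so the printed result below is only indicative:

```python
# Hypothetical example review; label names depend on the model's config
result = pipe("Un film magnifique, je le recommande vivement !")
print(result)  # e.g. [{'label': 'LABEL_1', 'score': 0.99}]
```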
This model is a fine-tuned version of camembert-base on the allocine dataset. It has been trained for a single epoch, for testing purposes.
It was created by fine-tuning the TensorFlow version of camembert-base after freezing the encoder:

```python
model.roberta.trainable = False
```
Therefore, only the classifier head parameters have been updated during training.
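A minimal sketch of that setup, reproducing the freeze and checking that only the classification head stays trainable (num_labels=2 assumes the binary neg/pos labels of allocine):

```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

# Load the TF model and freeze the encoder, as described above
tf_model = TFAutoModelForSequenceClassification.from_pretrained("camembert-base", num_labels=2)
tf_model.roberta.trainable = False

# Only the classification head should remain trainable
n_trainable = sum(int(tf.size(w)) for w in tf_model.trainable_weights)
n_total = sum(int(tf.size(w)) for w in tf_model.weights)
print(f"trainable parameters: {n_trainable:,} of {n_total:,}")
```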
The following hyperparameters were used during training (a Keras sketch of the optimizer follows the list):
- optimizer: {
'name': 'Adam',
'learning_rate': {
'class_name': 'PolynomialDecay',
'config': {'initial_learning_rate': 5e-05, 'decay_steps': 15000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}
},
'decay': 0.0,
'beta_1': 0.9,
'beta_2': 0.999,
'epsilon': 1e-07,
'amsgrad': False
}
- training_precision: float32
- epochs: 1
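A sketch of how this optimizer configuration maps onto Keras objects; the schedule and optimizer arguments are taken from the values above, while the loss and data pipeline are omitted because the card does not document them:

```python
import tensorflow as tf

# PolynomialDecay schedule with the logged configuration
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=5e-05,
    decay_steps=15000,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# Adam with the logged betas and epsilon, amsgrad disabled
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```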
The model achieves the following results on the test set:
| Accuracy |
|---|
| 0.918 |
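As a rough sanity check of that figure, the pipeline can be scored on a slice of the tblard/allocine test split. The label-matching line assumes generic "LABEL_<class id>" label names, which may not match this model's actual id2label mapping:

```python
from datasets import load_dataset
from transformers import pipeline

pipe = pipeline("text-classification", model="alosof/camembert-sentiment-allocine")
sample = load_dataset("tblard/allocine", split="test").select(range(200))

preds = pipe(sample["review"], truncation=True)
# Assumes label names of the form "LABEL_<class id>"; adjust to the
# model's id2label mapping if it differs
correct = sum(p["label"].endswith(str(y)) for p, y in zip(preds, sample["label"]))
print(f"accuracy on {len(sample)} test reviews: {correct / len(sample):.3f}")
```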