Dataset: nyu-mll/glue (STS-B subset)
How to use PavanNeerudu/gpt2-finetuned-stsb with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="PavanNeerudu/gpt2-finetuned-stsb")

# Or load the model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("PavanNeerudu/gpt2-finetuned-stsb")
model = AutoModelForSequenceClassification.from_pretrained("PavanNeerudu/gpt2-finetuned-stsb")
```

This model is GPT-2 fine-tuned on the GLUE STS-B dataset. It achieves the results shown in the table below on the validation set.
GPT-2 is a Transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on raw texts only, with no human labelling of any kind (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. Even so, it achieves very good results on text classification tasks.
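As a minimal usage sketch, a sentence pair can be scored directly with the model. The example sentences are illustrative, and this assumes the fine-tuned head is the standard single-output regression head used for STS-B (scores roughly in the 0–5 range):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("PavanNeerudu/gpt2-finetuned-stsb")
model = AutoModelForSequenceClassification.from_pretrained("PavanNeerudu/gpt2-finetuned-stsb")

# Illustrative sentence pair; STS-B measures how similar the two sentences are.
sentence1 = "A man is playing a guitar."
sentence2 = "Someone is strumming a guitar."

# Encode the pair as a single sequence and read the regression output.
inputs = tokenizer(sentence1, sentence2, return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"Predicted similarity score: {score:.2f}")
```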
Per-epoch training and validation results (PCC denotes the Pearson correlation coefficient):
| Epoch | Training Loss | Training PCC | Validation Loss | Validation PCC |
|---|---|---|---|---|
| 1 | 3.14066 | 0.09220 | 2.45140 | 0.11778 |
| 2 | 1.96428 | 0.30958 | 1.54366 | 0.58155 |
| 3 | 1.53877 | 0.53427 | 1.14102 | 0.71384 |
| 4 | 1.29935 | 0.62852 | 1.00576 | 0.74999 |
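For reference, the Pearson correlation coefficient reported above can be computed from predicted and gold similarity scores, for example with scipy (the values below are purely illustrative, not taken from this model's evaluation):

```python
# Sketch: computing the Pearson correlation coefficient (PCC) between
# predicted and gold STS-B similarity scores.
from scipy.stats import pearsonr

predictions = [3.8, 1.2, 4.6, 0.5, 2.9]  # illustrative model outputs
references = [4.0, 1.0, 5.0, 0.3, 3.1]   # illustrative gold labels

pcc, _ = pearsonr(predictions, references)
print(f"Pearson correlation: {pcc:.4f}")
```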