Dataset: rajpurkar/squad
How to use Tural/How_to_fine-tune_a_model_for_common_downstream_tasks_V2 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="Tural/How_to_fine-tune_a_model_for_common_downstream_tasks_V2")
```

```python
# Load the model and tokenizer directly
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("Tural/How_to_fine-tune_a_model_for_common_downstream_tasks_V2")
model = AutoModelForQuestionAnswering.from_pretrained("Tural/How_to_fine-tune_a_model_for_common_downstream_tasks_V2")
```

This model is a fine-tuned version of Tural/language-modeling-from-scratch on the squad dataset. It achieves the following results on the evaluation set:
- Loss: 3.4298 (final-epoch validation loss)
Model description, intended uses & limitations, and training hyperparameters: more information needed.
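The question-answering head loaded above returns per-token start and end logits; the predicted answer is the span from the best start token to the best end token at or after it. A minimal sketch of that decoding step (the tokens and logit values below are dummy stand-ins, not real model output):

```python
def extract_span(start_logits, end_logits):
    """Pick the most likely (start, end) token indices, with start <= end."""
    start = max(range(len(start_logits)), key=lambda i: start_logits[i])
    end = max(range(start, len(end_logits)), key=lambda i: end_logits[i])
    return start, end

# Dummy tokenized input and logits standing in for a real forward pass.
tokens = ["[CLS]", "who", "wrote", "it", "[SEP]", "jane", "wrote", "the", "book", "[SEP]"]
start_logits = [0.1, 0.0, 0.0, 0.0, 0.0, 5.0, 0.2, 0.0, 0.1, 0.0]
end_logits   = [0.1, 0.0, 0.0, 0.0, 0.0, 0.3, 0.0, 0.0, 4.0, 0.0]

start, end = extract_span(start_logits, end_logits)
answer = " ".join(tokens[start:end + 1])
print(answer)  # -> jane wrote the book
```

With a real model, `start_logits` and `end_logits` come from `model(**tokenizer(question, context, return_tensors="pt"))`; the `pipeline` helper performs this decoding (plus detokenization) internally.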
The following training results were logged during fine-tuning:
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 3.647 | 1.0 | 3650 | 3.6697 |
| 3.4239 | 2.0 | 7300 | 3.4835 |
| 3.2087 | 3.0 | 10950 | 3.4298 |
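One detail recoverable from the table: the step counts grow by a constant 3650 per epoch, so the dataloader length was fixed across epochs. The check below is plain arithmetic on the table's numbers; SQuAD v1.1's 87,599 training examples are a known figure, but the implied effective batch size of roughly 24 is an inference, not a documented hyperparameter:

```python
# Step counts at the end of each epoch, from the training-results table.
steps = [3650, 7300, 10950]

# Difference between consecutive checkpoints = optimizer steps per epoch.
per_epoch = [b - a for a, b in zip([0] + steps, steps)]
print(per_epoch)  # -> [3650, 3650, 3650]

# SQuAD v1.1 has 87,599 training examples; 87,599 / 3,650 is about 24,
# suggesting (but not confirming) an effective batch size of ~24.
print(round(87_599 / per_epoch[0]))  # -> 24
```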
Base model: Tural/language-modeling-from-scratch