Syn Youtube Data-Aug
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("layai/syn-dataaug-youtube-context")
model = AutoModelForCausalLM.from_pretrained("layai/syn-dataaug-youtube-context")
```

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on an unknown dataset. Validation loss and accuracy at each checkpoint are reported in the training results table below.
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training results

The following results were logged during training:
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| 0.7716 | 0.6479 | 500 | 3.7005 | 0.4548 |
| 0.3675 | 1.2958 | 1000 | 3.8487 | 0.4565 |
| 0.2102 | 1.9436 | 1500 | 3.9705 | 0.4568 |
| 0.1164 | 2.5915 | 2000 | 4.1546 | 0.4570 |
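As a quick sanity check on the table, a cross-entropy validation loss maps to perplexity via exp(loss), and the step/epoch columns imply a fixed number of optimizer steps per epoch. A minimal sketch using values taken directly from the table above:

```python
import math

# Validation losses at the four logged checkpoints (from the table above).
val_losses = [3.7005, 3.8487, 3.9705, 4.1546]

# Cross-entropy loss corresponds to a perplexity of exp(loss).
perplexities = [round(math.exp(loss), 1) for loss in val_losses]
print(perplexities)  # → [40.5, 46.9, 53.0, 63.7]

# The (step, epoch) pairs imply roughly 772 optimizer steps per epoch,
# consistent across rows (500 / 0.6479 ≈ 1000 / 1.2958).
steps_per_epoch = round(500 / 0.6479)
print(steps_per_epoch)  # → 772
```

Note that training loss keeps falling while validation loss rises and accuracy stays essentially flat after the first epoch, a typical overfitting signature on this evaluation set.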
Base model: [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="layai/syn-dataaug-youtube-context")
```