metadata
language:
- en
metrics:
- loss: 0.6
base_model:
- meta-llama/Llama-3.1-8B-Instruct
data:
- >-
    stored in this directory (training_set and validation_set for training and
    validation, respectively)
notes:
- Howdy! This is Ian Prazak's fine-tuned Llama-3.1-8B-Instruct model.
- >-
    The model was trained for just one epoch (work in progress) on a dataset of
    approximately 30,000 formatted examples in JSONL.
- The base Llama tokenizer was used.
- The current model file is llama-fact-full-test.nemo.
- >-
    If the model runs only in its base state, load it with llama-fact.ckpt to
    incorporate the fine-tuned weights.
- The metric listed above is the current validation loss.
- Thank you for checking it out! :)