TinyLlama for Abalone RAG

This is a toy model fine-tuned specifically for the Abalone RAG App. A dataset was developed to fine-tune it:

  • Questions were suggested by GPT-5.
  • Passages were retrieved from a small index created for the app.
  • Responses were generated by prompting GPT-5 with each question, the retrieved context, and instructions to produce concise answers referencing the context (a rough sketch of this pipeline follows the list).
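
A minimal sketch of how such a dataset could be assembled is shown below. It assumes the OpenAI Python SDK, a hypothetical `retrieve()` helper over the app's small abalone index, and illustrative file names and prompt wording; the `"gpt-5"` model identifier is an assumption, since the card only states that GPT-5 was used.

```python
import json
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()

def retrieve(question: str, k: int = 3) -> list[str]:
    """Hypothetical retriever over the app's abalone index (not described in this card)."""
    raise NotImplementedError

SYSTEM = (
    "Answer the question concisely using only the provided context, "
    "and reference the context in your answer."
)

def build_example(question: str) -> dict:
    # Retrieve passages for the question, then ask GPT-5 for a grounded answer.
    passages = retrieve(question)
    context = "\n\n".join(passages)
    resp = client.chat.completions.create(
        model="gpt-5",  # assumed model identifier
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return {
        "question": question,
        "context": context,
        "answer": resp.choices[0].message.content,
    }

# questions.json: list of GPT-5-suggested questions (assumed file name)
with open("questions.json") as f:
    examples = [build_example(q) for q in json.load(f)]

# Write one JSON record per line for later supervised fine-tuning.
with open("abalone_rag_sft.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```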

TinyLlama was fine-tuned on this dataset, producing a dedicated TinyLlama model for Abalone RAG. The model is designed to provide concise, context-aware answers about abalone for use in the RAG application.
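
At inference time, the app passes the retrieved context and the user's question to the model in a single prompt. The sketch below uses the Transformers library; the exact prompt layout is an assumption and should match whatever format was used during fine-tuning.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LoneWolfgang/tinyllama-for-abalone-RAG"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# In the RAG app, `context` would come from the app's retrieval index.
context = "Abalone are marine gastropod molluscs of the genus Haliotis."  # placeholder passage
question = "What kind of animal is an abalone?"

prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```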

These are the hyperparameters that were used (they were not optimized); a minimal training sketch follows the list:

  • Epochs: 3
  • Batch Size: 2
  • Gradient Accumulation Steps: 8
  • Learning Rate: 2e-4
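
Below is a minimal sketch of a fine-tuning run with these settings, assuming the Hugging Face Trainer API, a TinyLlama-1.1B base checkpoint, and the JSONL field names from the dataset sketch above; the actual training script may have differed.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# abalone_rag_sft.jsonl: question/context/answer records (assumed file and field names)
raw = load_dataset("json", data_files="abalone_rag_sft.jsonl", split="train")

def to_text(ex):
    # Fold each record into a single training string (assumed prompt layout).
    return {"text": f"Context:\n{ex['context']}\n\nQuestion: {ex['question']}\nAnswer: {ex['answer']}"}

def tokenize(ex):
    return tokenizer(ex["text"], truncation=True, max_length=1024)

tokenized = raw.map(to_text).map(tokenize, remove_columns=raw.column_names + ["text"])

args = TrainingArguments(
    output_dir="tinyllama-abalone-rag",
    num_train_epochs=3,               # Epochs: 3
    per_device_train_batch_size=2,    # Batch Size: 2
    gradient_accumulation_steps=8,    # Gradient Accumulation Steps: 8
    learning_rate=2e-4,               # Learning Rate: 2e-4
    bf16=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```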