Fine-tune LLaMA 2 (7B) with LoRA on meta-math/MetaMathQA
The adapter was fine-tuned for one epoch.
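For context, here is a minimal sketch of how such a LoRA fine-tune can be set up with PEFT. The base checkpoint, rank, alpha, and dtype below are assumptions not stated on this card; the target modules follow the q/k/v naming in the adapter's repository name.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Base LLaMA 2 7B checkpoint (assumed: meta-llama/Llama-2-7b-hf)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.bfloat16
)

# LoRA on the attention q/k/v projections, per the adapter name;
# r and lora_alpha are illustrative values, not taken from this card
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# MetaMathQA provides (query, response) pairs for supervised fine-tuning;
# train for one epoch with any standard causal-LM trainer from here
dataset = load_dataset("meta-math/MetaMathQA", split="train")
```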
Result:
After fine-tuning: 7 invalid outputs on the 1,319-problem GSM8K test set, accuracy 0.580.
Comparison
The official MetaMath report achieves an accuracy of 0.665 by fine-tuning the whole LLaMA 2 7B model for 3 epochs.
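For clarity, below is a minimal sketch of how the figures above can be computed, assuming the model emits MetaMath-style answers ending in "The answer is: <value>". The extraction pattern is an assumption, not taken from this card; any response it cannot parse is counted as an invalid output.

```python
import re

def extract_answer(text: str) -> str | None:
    # Assumed MetaMath-style responses end with "The answer is: <value>";
    # responses that do not match are treated as invalid outputs
    match = re.search(r"The answer is:\s*(-?[\d,.]+)", text)
    return match.group(1).replace(",", "") if match else None

def score(predictions: list[str], references: list[str]) -> float:
    invalid = sum(extract_answer(p) is None for p in predictions)
    correct = sum(extract_answer(p) == r for p, r in zip(predictions, references))
    accuracy = correct / len(references)
    print(f"Invalid outputs: {invalid}, Testing length: {len(references)}, "
          f"Accuracy: {accuracy:.3f}")
    return accuracy
```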
Note: the LoRA adapter is provided for future research purposes.
Adapter Usage
```python
from transformers import AutoModelForCausalLM

# Load the base LLaMA 2 7B model, then the pre-trained LoRA adapter
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model.load_adapter("shuyuej/metamath_lora_qkv_llama2_7b")
model.enable_adapters()
```
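Once the adapter is enabled, inference works like any other causal LM. The Alpaca-style prompt template below is the one MetaMath models are commonly queried with; it is an assumption, not confirmed by this card, and the question is a made-up example.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Hypothetical GSM8K-style question in an assumed Alpaca-style template
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nJanet has 3 apples and buys 2 more. "
    "How many apples does she have in total?\n\n### Response: "
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```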
Evaluation results
- Accuracy (zero-shot), arithmetic reasoning on GSM8K (fine-tuned on meta-math/MetaMathQA): 0.580