
LoRA adapter based on GradientAI's 1M-context Llama-3 8B Instruct finetune. I found that rank 1024 is not sufficient to capture the delta weights in the q_proj and o_proj modules, so I've created separate adapters for those modules and for the k/v projection modules.
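Extracting a LoRA adapter from a full finetune amounts to taking a truncated SVD of each delta weight matrix (W_finetuned − W_base), which by Eckart–Young is the best rank-r approximation. A minimal sketch of why rank matters, using a small random stand-in matrix (the real Llama-3 8B q_proj delta is 4096×4096, not 256×256):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for a delta weight matrix (W_finetuned - W_base).
delta = rng.standard_normal((256, 256))

def lora_approx(delta, rank):
    # Truncated SVD: keep the top-`rank` singular triplets and split
    # them into the usual LoRA B (out x r) and A (r x in) factors.
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    B = u[:, :rank] * s[:rank]
    A = vt[:rank, :]
    return B @ A

for r in (8, 64, 256):
    err = np.linalg.norm(delta - lora_approx(delta, r)) / np.linalg.norm(delta)
    print(f"rank {r:>3}: relative reconstruction error {err:.3f}")
```

If a module's delta has a slowly decaying singular-value spectrum, the relative error stays high even at large ranks, which is consistent with q_proj/o_proj needing a higher rank than k/v here.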
