---
base_model: meta-llama/Llama-2-7b-hf
tags:
- LoRA
- bittensor
- gradients
license: apache-2.0
---

# Submission for task `submission_test_instruct_001`

🧠 Fine-tuned using LoRA on a dynamic dataset generated from LLaMA.

- Task ID: `sim-task-instruct-test-001`
- Repo: `submission_test_instruct_001`
- Loss: `4.868910153706868`
- Timestamp: 2025-07-07T15:33:38.120714
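
Since this card only lists metadata, here is a minimal usage sketch for loading the adapter, assuming it was saved in the standard PEFT format. The `load_adapted_model` helper is hypothetical, and the adapter repo id is taken verbatim from this card (a full Hugging Face repo id usually includes a namespace, e.g. `user/repo`, so adjust as needed).

```python
# Constants taken from this card; ADAPTER_REPO may need a namespace prefix.
BASE_MODEL = "meta-llama/Llama-2-7b-hf"
ADAPTER_REPO = "submission_test_instruct_001"

def load_adapted_model():
    """Hypothetical helper: load the base model and apply this LoRA adapter.

    Calling this downloads the (gated) Llama-2 weights, so it requires
    Hugging Face access to the base model.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    base = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
    model = PeftModel.from_pretrained(base, ADAPTER_REPO)
    return tokenizer, model
```

Running inference then only requires calling `load_adapted_model()` and using the returned tokenizer/model pair as usual with `model.generate`.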