---
base_model: meta-llama/Llama-2-7b-hf
tags:
- LoRA
- bittensor
- gradients
license: apache-2.0
---

# Submission for task `submission_instruct_002`

🧠 Fine-tuned with LoRA on a dynamically generated dataset derived from LLaMA.

- Task ID: `sim-instruct-002`
- Repo: `submission_instruct_002`
- Loss: `2.4597714705900713`
- Timestamp: 2025-07-09T08:40:39.166222
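This repo stores LoRA adapter weights rather than a full copy of the base model. As a rough sketch of how a LoRA update merges into a frozen base weight matrix (the dimensions below are illustrative, not the actual Llama-2-7b projection shapes):

```python
import numpy as np

# Hypothetical dimensions for illustration only; the real adapter
# targets much larger Llama-2-7b projection matrices.
d_out, d_in, r, alpha = 8, 8, 2, 16

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))     # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01  # LoRA down-projection (trained)
B = np.zeros((d_out, r))               # LoRA up-projection (zero-initialized)

# Merged weight: W' = W + (alpha / r) * B @ A
W_merged = W + (alpha / r) * (B @ A)

# With B still at its zero init, the merge is a no-op:
assert np.allclose(W_merged, W)
```

In practice the adapter would typically be loaded onto the base model with a library such as PEFT (e.g. `PeftModel.from_pretrained`) rather than merged by hand.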