---
base_model: meta-llama/Llama-2-7b-hf
tags:
- LoRA
- bittensor
- gradients
license: apache-2.0
---

# Submission for task `submission_test_parallel_math`

🧠 Fine-tuned using LoRA on a dynamic dataset generated with RAG.

- Task ID: `task-parallel-001`
- Repo: `submission_test_parallel_math`
- Loss: `4.867742379506429`
- Timestamp: 2025-06-30T00:32:08.414966
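
For reference, the low-rank update that a LoRA adapter applies on top of a frozen base weight can be sketched as below. The shapes, rank, and scaling here are illustrative placeholders, not the hyperparameters used for this submission:

```python
import numpy as np

# LoRA models the weight update as a low-rank product:
#   W_eff = W + (alpha / r) * B @ A
# where A is (r, d_in) and B is (d_out, r), with r << min(d_in, d_out).
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 8, 4, 2, 16  # illustrative values only

W = rng.normal(size=(d_out, d_in))      # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-init

x = rng.normal(size=(d_in,))

# With B initialized to zero, the adapted layer exactly matches the base layer.
y_base = W @ x
y_lora = (W + (alpha / r) * B @ A) @ x
assert np.allclose(y_base, y_lora)

# After training updates B, the adapter adds a rank-r correction to the output.
B = rng.normal(size=(d_out, r)) * 0.01
delta = (alpha / r) * (B @ (A @ x))
y_lora = W @ x + delta
```

Only `A` and `B` are trained, which is why the adapter repository is small relative to the 7B base model.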