raniero committed 0afa3be (verified, parent: bf86ac4): Upload README.md with huggingface_hub

Files changed (1): README.md (new file, +47 lines)
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- LoRA
- bittensor
- gradients
license: apache-2.0
---
# Submission for task `submission_test_speed_check`

- Task ID: `sim-task-readme-speed`
- Repo: `submission_test_speed_check`
- Loss: `4.868357499440511`
- Timestamp: 2025-06-30T23:56:20.301535

---
### LoRA Parameters
- Rank: 8
- Alpha: 16
- Dropout: 0.1
- Epochs: 1
- Learning rate: 1e-4
- Batch size: 1
### Example
**Instruction:**
How do you compute the derivative of \( x^2 \)?

**Response:**
The derivative of \( x^2 \) with respect to \( x \) is \( 2x \).
### Training Info
This LoRA adapter was trained on a dataset generated dynamically via retrieval-augmented generation (RAG) over a FAISS index, based on the task theme.
The dataset consisted of 35 prompt-response examples related to mathematics and programming.

The adapter was trained on CPU in roughly 7 minutes and uploaded automatically via the Hugging Face Hub API.
---

### Training Curve
![Loss Curve](loss_curve.png)
### License and Usage
This adapter inherits the license and intended-usage terms of the base model, Meta LLaMA 2 (Meta AI License), and is provided here for research purposes only as part of the Bittensor Subnet 56 participation.