euhidaman committed
Commit 58df483 · verified · 1 Parent(s): a25f247

Update model - STAGE3 Epoch 1 | Loss: 0.0000 | Acc: 9.71%

Files changed (3):
  1. README.md +14 -8
  2. pytorch_model.bin +1 -1
  3. training_info.json +10 -4
README.md CHANGED
@@ -10,7 +10,7 @@ tags:
  - tiny-vlm
  - dinov2_small
  - smollm_135m
- - stage2
+ - stage3
  base_model:
  - smollm_135m
  library_name: transformers
@@ -21,17 +21,23 @@ pipeline_tag: image-text-to-text
 
  **🔥 Efficient Vision-Language Model for Edge Deployment & Robotic Applications**
 
- This model is currently in training - **STAGE2 (Epoch 1)**.
+ This model is currently in training - **STAGE3 (Epoch 1)**.
 
  ## 📊 Current Training Status
 
- - **Stage**: Multimodal Instruction Tuning - Following complex instructions
+ - **Stage**: Robot Fleet Selection - Choosing optimal robots for tasks
  - **Epoch**: 1
- - **Last Updated**: 2026-02-20 09:13:46 UTC
+ - **Last Updated**: 2026-02-20 09:19:56 UTC
 
  ### Latest Metrics
- - **instruction_loss**: 0.0000
- - **loss**: 6.4789
+ - **Drone_f1**: 0.0000
+ - **Humanoid_f1**: 0.0000
+ - **Robot_with_Legs_f1**: 0.0000
+ - **Robot_with_Wheels_f1**: 0.1770
+ - **Underwater_Robot_f1**: 0.0000
+ - **accuracy**: 0.0971
+ - **loss**: 0.0000
+ - **macro_f1**: 0.0354
 
  ## 🏗️ Model Architecture
 
@@ -50,7 +56,7 @@ EmberVLM follows a 4-stage training curriculum:
  3. ✅ **Stage 3: Robot Fleet Selection** - Task-robot matching
  4. ⏳ **Stage 4: Chain-of-Thought Reasoning** - Reasoning generation
 
- **Current Stage**: STAGE2
+ **Current Stage**: STAGE3
 
  ## 💻 Usage
 
@@ -125,5 +131,5 @@ Apache 2.0
 
  ---
 
- **Note**: This is a checkpoint from stage2 training (epoch 1).
+ **Note**: This is a checkpoint from stage3 training (epoch 1).
  The model will be updated after each epoch with improved performance.
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:30a7b733da85c9aa85f96ed70577a4a51ac3a4ee8d5c242a9502aaa2722ccc91
+ oid sha256:871caca93b5cdba82011a19634d37b3ce537e11814a0d0331b1541a3fe0e4379
  size 388491879
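The `pytorch_model.bin` entry is a Git LFS pointer file, so only the `oid sha256:` line changes between checkpoints while the byte size stays constant. A minimal sketch (function names are illustrative) of verifying a downloaded blob against its pointer:

```python
import hashlib
import re


def parse_lfs_pointer(pointer_text: str) -> dict:
    """Extract the sha256 oid and byte size from a Git LFS pointer file."""
    oid = re.search(r"oid sha256:([0-9a-f]{64})", pointer_text).group(1)
    size = int(re.search(r"size (\d+)", pointer_text).group(1))
    return {"oid": oid, "size": size}


def verify_blob(blob: bytes, pointer_text: str) -> bool:
    """Check a downloaded blob against its LFS pointer (hash and size must match)."""
    meta = parse_lfs_pointer(pointer_text)
    return (
        hashlib.sha256(blob).hexdigest() == meta["oid"]
        and len(blob) == meta["size"]
    )
```

Running this over the real 388 MB checkpoint would simply hash the file and compare against the oid shown in the diff above.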
training_info.json CHANGED
@@ -1,12 +1,18 @@
  {
- "stage": "stage2",
+ "stage": "stage3",
  "epoch": 1,
  "metrics": {
- "loss": 6.4789011137826105,
- "instruction_loss": 0.0
+ "loss": 0.0,
+ "accuracy": 0.0970873786407767,
+ "Drone_f1": 0.0,
+ "Underwater_Robot_f1": 0.0,
+ "Humanoid_f1": 0.0,
+ "Robot_with_Wheels_f1": 0.17699115044247787,
+ "Robot_with_Legs_f1": 0.0,
+ "macro_f1": 0.035398230088495575
  },
  "carbon_emissions_kg": 0.0,
- "timestamp": "2026-02-20T09:13:46.607442",
+ "timestamp": "2026-02-20T09:19:56.476259",
  "vision_backbone": "dinov2_small",
  "language_backbone": "smollm_135m",
  "total_parameters": 164316833,
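The committed metrics are internally consistent: `macro_f1` is the unweighted mean of the five per-class F1 scores (only `Robot_with_Wheels_f1` is non-zero, so 0.17699… / 5 ≈ 0.03540). A small sketch checking that relationship on the values from `training_info.json`:

```python
import math

# Metrics exactly as committed in training_info.json (stage3, epoch 1)
metrics = {
    "loss": 0.0,
    "accuracy": 0.0970873786407767,
    "Drone_f1": 0.0,
    "Underwater_Robot_f1": 0.0,
    "Humanoid_f1": 0.0,
    "Robot_with_Wheels_f1": 0.17699115044247787,
    "Robot_with_Legs_f1": 0.0,
    "macro_f1": 0.035398230088495575,
}

# Macro F1 is defined as the unweighted mean of the per-class F1 scores.
per_class = [v for k, v in metrics.items() if k.endswith("_f1") and k != "macro_f1"]
macro = sum(per_class) / len(per_class)
assert math.isclose(macro, metrics["macro_f1"])
```

This kind of sanity check is useful when auto-generated model cards round metrics (the README shows 0.0354) while the JSON keeps full precision.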