Commit c7a0fe2 (verified) by euhidaman · Parent: 0ac1cf0

Update model - STAGE1 Epoch 1 | Loss: 5.6777

Files changed (3):
  1. README.md +11 -10
  2. pytorch_model.bin +2 -2
  3. training_info.json +7 -6
README.md CHANGED
@@ -10,7 +10,7 @@ tags:
 - tiny-vlm
 - repvit
 - tinyllm
-- stage2
+- stage1
 base_model:
 - tinyllm
 library_name: transformers
@@ -21,23 +21,24 @@ pipeline_tag: image-text-to-text
 
 **🔥 Efficient Vision-Language Model for Edge Deployment & Robotic Applications**
 
-This model is currently in training - **STAGE2 (Epoch 1)**.
+This model is currently in training - **STAGE1 (Epoch 1)**.
 
 ## 📊 Current Training Status
 
-- **Stage**: Multimodal Instruction Tuning - Following complex instructions
+- **Stage**: Visual-Language Alignment - Learning to ground vision and language
 - **Epoch**: 1
-- **Last Updated**: 2026-01-28 15:04:17 UTC
+- **Last Updated**: 2026-02-01 16:00:11 UTC
 
 ### Latest Metrics
-- **instruction_loss**: 0.0000
-- **loss**: 5.1544
+- **captioning_loss**: 8.5561
+- **contrastive_loss**: 2.7994
+- **loss**: 5.6777
 
 ## 🏗️ Model Architecture
 
 - **Size**: Tiny (~35M parameters)
-- **Total Parameters**: 37,237,665
-- **Trainable Parameters**: 23,254,337 (62.4%)
+- **Total Parameters**: 40,196,257
+- **Trainable Parameters**: 26,212,929 (65.2%)
 - **Vision Encoder**: RepViT-M0.9 (~5M params)
 - **Language Model**: TinyLLM-30M (30M params)
 
@@ -50,7 +51,7 @@ EmberVLM follows a 4-stage training curriculum:
 3. ✅ **Stage 3: Robot Fleet Selection** - Task-robot matching
 4. ⏳ **Stage 4: Chain-of-Thought Reasoning** - Reasoning generation
 
-**Current Stage**: STAGE2
+**Current Stage**: STAGE1
 
 ## 💻 Usage
 
@@ -125,5 +126,5 @@ Apache 2.0
 
 ---
 
-**Note**: This is a checkpoint from stage2 training (epoch 1).
+**Note**: This is a checkpoint from stage1 training (epoch 1).
 The model will be updated after each epoch with improved performance.
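As a quick sanity check on the updated README, the trainable-parameter percentage follows directly from the two counts in the diff:

```python
# Parameter counts taken from the updated README (stage1 checkpoint).
total_parameters = 40_196_257
trainable_parameters = 26_212_929

# 26,212,929 / 40,196,257 ~ 0.652, matching the README's "(65.2%)".
trainable_pct = 100 * trainable_parameters / total_parameters
print(f"{trainable_pct:.1f}%")  # 65.2%
```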
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ed94af8def51ab039a8be350aa1df789d1b6f2a3b10d54e42fd0d01f22d1ec6b
-size 88817547
+oid sha256:c6be11d39bd7c475a6e51883249a0a9ba175c11618e424678674cb2ef649fe66
+size 100663623
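The `pytorch_model.bin` blob committed here is a Git LFS pointer file, not the weights themselves. A minimal sketch of reading its key-value lines (the helper name `parse_lfs_pointer` is hypothetical; the three-field layout follows the LFS v1 pointer spec referenced in the file):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of a Git LFS pointer into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# New pointer contents from this commit.
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:c6be11d39bd7c475a6e51883249a0a9ba175c11618e424678674cb2ef649fe66\n"
    "size 100663623\n"
)

info = parse_lfs_pointer(pointer)
print(int(info["size"]))  # 100663623 bytes (~96 MiB of weights)
```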
training_info.json CHANGED
@@ -1,14 +1,15 @@
 {
-  "stage": "stage2",
+  "stage": "stage1",
   "epoch": 1,
   "metrics": {
-    "loss": 5.154378942541174,
-    "instruction_loss": 0.0
+    "loss": 5.6777140368586005,
+    "contrastive_loss": 2.7993588654891304,
+    "captioning_loss": 8.556068959443465
   },
   "carbon_emissions_kg": 0.0,
-  "timestamp": "2026-01-28T15:04:17.887928",
+  "timestamp": "2026-02-01T16:00:11.852746",
   "vision_backbone": "repvit",
   "language_backbone": "tinyllm",
-  "total_parameters": 37237665,
-  "trainable_parameters": 23254337
+  "total_parameters": 40196257,
+  "trainable_parameters": 26212929
 }
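The stage-1 overall loss appears consistent with an unweighted mean of the two component losses. This is an inference from the numbers alone, not a weighting the repo documents:

```python
import json

# Metrics copied from the new training_info.json.
record = json.loads("""
{
  "metrics": {
    "loss": 5.6777140368586005,
    "contrastive_loss": 2.7993588654891304,
    "captioning_loss": 8.556068959443465
  }
}
""")

m = record["metrics"]
# (2.7994 + 8.5561) / 2 ~ 5.6777, agreeing with the logged loss
# to about six decimal places.
mean_loss = (m["contrastive_loss"] + m["captioning_loss"]) / 2
print(round(mean_loss, 5), round(m["loss"], 5))  # 5.67771 5.67771
```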