AbstractPhil committed
Commit e06f060 · verified · 1 Parent(s): a5022f0

Update README - Run 20251012_065325

Files changed (1): README.md +18 -21
README.md CHANGED
@@ -12,7 +12,7 @@ datasets:
 metrics:
 - accuracy
 model-index:
-- name: David-partial_shared-hierarchical_tree
+- name: David-partial_shared-deep_efficiency
   results:
   - task:
       type: image-classification
@@ -21,7 +21,7 @@ model-index:
       type: imagenet-1k
     metrics:
     - type: accuracy
-      value: 75.41
+      value: 81.16
 ---
 
 # David: Multi-Scale Crystal Classifier
@@ -32,17 +32,17 @@ as class prototypes with role-weighted similarity computation (Rose Loss).
 ## Model Details
 
 ### Architecture
-- **Preset**: balanced
+- **Preset**: clip_vit_l14_ultra_deep
 - **Sharing Mode**: partial_shared
-- **Fusion Mode**: hierarchical_tree
-- **Scales**: [256, 512, 768, 1024]
-- **Feature Dim**: 512
+- **Fusion Mode**: deep_efficiency
+- **Scales**: [256, 512, 768, 1024, 1280, 1536, 1792, 2048, 2304, 2560]
+- **Feature Dim**: 768
 - **Parameters**: ~8.8M
 
 ### Training Configuration
 - **Dataset**: AbstractPhil/imagenet-clip-features-orderly
-- **Model Variant**: clip_vit_laion_b32
-- **Epochs**: 20
+- **Model Variant**: clip_vit_l14
+- **Epochs**: 10
 - **Batch Size**: 1024
 - **Learning Rate**: 0.001
 - **Rose Loss Weight**: 0.1 → 0.5
@@ -51,15 +51,12 @@ as class prototypes with role-weighted similarity computation (Rose Loss).
 ## Performance
 
 ### Best Results
-- **Validation Accuracy**: 75.41%
-- **Best Epoch**: 9
-- **Final Train Accuracy**: 87.91%
+- **Validation Accuracy**: 81.16%
+- **Best Epoch**: 0
+- **Final Train Accuracy**: 78.10%
 
 ### Per-Scale Performance
-- **Scale 256**: 74.79%
-- **Scale 512**: 75.39%
-- **Scale 768**: 75.40%
-- **Scale 1024**: 73.42%
+- **Scale 256**: 81.16%
 
 
 ## Usage
@@ -69,7 +66,7 @@ as class prototypes with role-weighted similarity computation (Rose Loss).
 ```
 AbstractPhil/gated-david/
 ├── weights/
-│   └── david_balanced/
+│   └── david_clip_vit_l14_ultra_deep/
 │       └── 20251012_065325/
 │           ├── best_model.safetensors
 │           ├── best_model_metadata.json
@@ -78,7 +75,7 @@ AbstractPhil/gated-david/
 │           ├── david_config.json
 │           └── train_config.json
 ├── runs/
-│   └── david_balanced/
+│   └── david_clip_vit_l14_ultra_deep/
 │       └── 20251012_065325/
 │           └── events.out.tfevents.*
 ├── README.md
@@ -92,7 +89,7 @@ from geovocab2.train.model.core.david import David, DavidArchitectureConfig
 from huggingface_hub import hf_hub_download
 
 # Specify model variant and run
-model_name = "david_balanced"
+model_name = "david_clip_vit_l14_ultra_deep"
 run_id = "20251012_065325"
 
 # Download config
@@ -136,7 +133,7 @@ with torch.no_grad():
 ## Architecture Overview
 
 ### Multi-Scale Processing
-David processes inputs at multiple scales (256, 512, 768, 1024),
+David processes inputs at multiple scales (256, 512, 768, 1024, 1280, 1536, 1792, 2048, 2304, 2560),
 allowing it to capture both coarse and fine-grained features.
 
 ### Crystal Geometry
@@ -154,7 +151,7 @@ score = w_anchor * sim(z, anchor) + w_need * sim(z, need) + ...
 ```
 
 ### Fusion Strategy
-**hierarchical_tree**: Intelligently combines predictions from multiple scales.
+**deep_efficiency**: Intelligently combines predictions from multiple scales.
 
 ## Training Details
@@ -193,4 +190,4 @@ Special thanks to Claude (Anthropic) for debugging assistance.
 
 ---
 
-*Generated on 2025-10-12 07:35:40*
+*Generated on 2025-10-12 07:38:25*
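The role-weighted similarity from the Crystal Geometry hunk (`score = w_anchor * sim(z, anchor) + w_need * sim(z, need) + ...`) can be sketched as follows. This is a minimal illustration, not the model's implementation: the role names (`anchor`, `need`) come from the formula, but the weights and the use of cosine similarity as `sim` are assumptions.

```python
import torch
import torch.nn.functional as F

def rose_score(z: torch.Tensor, roles: dict, weights: dict) -> torch.Tensor:
    """Role-weighted similarity: sum over roles of w_r * cos(z, prototype_r).

    `roles` maps role name -> prototype vector; `weights` maps role name
    -> scalar weight. Both mappings here are illustrative placeholders.
    """
    return sum(
        weights[name] * F.cosine_similarity(z, proto, dim=-1)
        for name, proto in roles.items()
    )

# Toy example: a batch of 4 embeddings scored against two role prototypes.
z = torch.randn(4, 512)
roles = {"anchor": torch.randn(512), "need": torch.randn(512)}
weights = {"anchor": 0.7, "need": 0.3}
scores = rose_score(z, roles, weights)  # one score per batch element
```

Since each cosine similarity lies in [-1, 1], the combined score is bounded by the sum of the absolute weights (here 1.0), which keeps per-role contributions directly comparable.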