AbstractPhil committed
Commit 030c272 · verified · 1 Parent(s): 55ca068

Update README - Run 20251012_132646

Files changed (1): README.md (+24 −31)
README.md CHANGED
@@ -12,7 +12,7 @@ datasets:
  metrics:
  - accuracy
  model-index:
- - name: David-partial_shared-deep_efficiency
+ - name: David-fully_shared-weighted_sum
  results:
  - task:
  type: image-classification
@@ -21,7 +21,7 @@ model-index:
  type: imagenet-1k
  metrics:
  - type: accuracy
- value: 83.10
+ value: 68.12
  ---

  # David: Multi-Scale Crystal Classifier
@@ -32,17 +32,17 @@ as class prototypes with role-weighted similarity computation (Rose Loss).
  ## Model Details

  ### Architecture
- - **Preset**: clip_vit_l14_ultra_deep
- - **Sharing Mode**: partial_shared
- - **Fusion Mode**: deep_efficiency
- - **Scales**: [256, 512, 768, 1024, 1280, 1536, 1792, 2048, 2304, 2560]
- - **Feature Dim**: 768
+ - **Preset**: small_fast
+ - **Sharing Mode**: fully_shared
+ - **Fusion Mode**: weighted_sum
+ - **Scales**: [256, 512]
+ - **Feature Dim**: 512
  - **Parameters**: ~8.8M

  ### Training Configuration
  - **Dataset**: AbstractPhil/imagenet-clip-features-orderly
- - **Model Variant**: clip_vit_l14
- - **Epochs**: 10
+ - **Model Variant**: clip_vit_laion_b32
+ - **Epochs**: 20
  - **Batch Size**: 1024
  - **Learning Rate**: 0.001
  - **Rose Loss Weight**: 0.1 → 0.5
@@ -51,20 +51,13 @@ as class prototypes with role-weighted similarity computation (Rose Loss).
  ## Performance

  ### Best Results
- - **Validation Accuracy**: 83.10%
- - **Best Epoch**: 8
- - **Final Train Accuracy**: 94.52%
+ - **Validation Accuracy**: 68.12%
+ - **Best Epoch**: 0
+ - **Final Train Accuracy**: 63.38%

  ### Per-Scale Performance
- - **Scale 256**: 83.10%
- - **Scale 512**: 83.24%
- - **Scale 768**: 83.06%
- - **Scale 1024**: 83.03%
- - **Scale 1280**: 82.92%
- - **Scale 1536**: 82.95%
- - **Scale 1792**: 82.98%
- - **Scale 2048**: 82.97%
- - **Scale 2304**: 82.89%
+ - **Scale 256**: 67.00%
+ - **Scale 512**: 67.96%


  ## Usage
@@ -74,8 +67,8 @@ as class prototypes with role-weighted similarity computation (Rose Loss).
  ```
  AbstractPhil/gated-david/
  ├── weights/
- │   └── david_clip_vit_l14_ultra_deep/
- │       └── 20251012_065325/
+ │   └── david_small_fast/
+ │       └── 20251012_132646/
  │           ├── best_model.safetensors
  │           ├── best_model_metadata.json
  │           ├── final_model.safetensors
@@ -83,8 +76,8 @@ AbstractPhil/gated-david/
  │           ├── david_config.json
  │           └── train_config.json
  ├── runs/
- │   └── david_clip_vit_l14_ultra_deep/
- │       └── 20251012_065325/
+ │   └── david_small_fast/
+ │       └── 20251012_132646/
  │           └── events.out.tfevents.*
  ├── README.md
  └── best_model.json
@@ -97,8 +90,8 @@ from geovocab2.train.model.core.david import David, DavidArchitectureConfig
  from huggingface_hub import hf_hub_download

  # Specify model variant and run
- model_name = "david_clip_vit_l14_ultra_deep"
- run_id = "20251012_065325"
+ model_name = "david_small_fast"
+ run_id = "20251012_132646"

  # Download config
  config_path = hf_hub_download(
@@ -141,7 +134,7 @@ with torch.no_grad():
  ## Architecture Overview

  ### Multi-Scale Processing
- David processes inputs at multiple scales (256, 512, 768, 1024, 1280, 1536, 1792, 2048, 2304, 2560),
+ David processes inputs at multiple scales (256, 512),
  allowing it to capture both coarse and fine-grained features.

  ### Crystal Geometry
@@ -159,7 +152,7 @@ score = w_anchor * sim(z, anchor) + w_need * sim(z, need) + ...
  ```

  ### Fusion Strategy
- **deep_efficiency**: Intelligently combines predictions from multiple scales.
+ **weighted_sum**: Intelligently combines predictions from multiple scales.

  ## Training Details
@@ -183,7 +176,7 @@ score = w_anchor * sim(z, anchor) + w_need * sim(z, need) + ...
  author = {AbstractPhil},
  year = {2025},
  url = {https://huggingface.co/AbstractPhil/gated-david},
- note = {Run ID: 20251012_065325}
+ note = {Run ID: 20251012_132646}
  }
  ```
@@ -198,4 +191,4 @@ Special thanks to Claude (Anthropic) for debugging assistance.
  ---

- *Generated on 2025-10-12 08:02:12*
+ *Generated on 2025-10-12 13:30:05*
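
This commit switches the fusion mode to `weighted_sum`, which the README describes only as combining predictions from multiple scales. A minimal sketch of what such a fusion head could look like, assuming per-scale logits are stacked on a leading dimension and blended with learned, softmax-normalized weights; `WeightedSumFusion` is a hypothetical name, not the actual David implementation.

```python
import torch
import torch.nn as nn


class WeightedSumFusion(nn.Module):
    """Hypothetical weighted_sum fusion: one learnable weight per scale,
    normalized with softmax so the fused logits stay a convex combination."""

    def __init__(self, num_scales: int):
        super().__init__()
        # Zero init -> softmax gives uniform weights, i.e. a plain average.
        self.scale_weights = nn.Parameter(torch.zeros(num_scales))

    def forward(self, per_scale_logits: torch.Tensor) -> torch.Tensor:
        # per_scale_logits: [num_scales, batch, num_classes]
        w = torch.softmax(self.scale_weights, dim=0)  # [num_scales]
        return torch.einsum("s,sbc->bc", w, per_scale_logits)


fusion = WeightedSumFusion(num_scales=2)   # this run uses scales [256, 512]
logits = torch.randn(2, 4, 1000)           # 2 scales, batch 4, ImageNet-1k classes
fused = fusion(logits)                     # [4, 1000]
```

With the zero initialization above, the module starts as a simple average of the per-scale logits and learns to favor the stronger scales during training.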
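
The README's Rose Loss section sketches the score as `score = w_anchor * sim(z, anchor) + w_need * sim(z, need) + ...`. A small illustration of that formula, assuming `sim` is cosine similarity; the role vectors and weights below are hypothetical placeholders, whereas in the trained model they come from the crystal class prototypes.

```python
import torch
import torch.nn.functional as F


def rose_score(z: torch.Tensor, roles: dict, weights: dict) -> torch.Tensor:
    """score = sum over roles of w_role * cosine_sim(z, role_vector),
    following the README's sketch. z: [batch, dim]; each role vector: [dim]."""
    score = z.new_zeros(z.shape[0])
    for name, vec in roles.items():
        score = score + weights[name] * F.cosine_similarity(z, vec.unsqueeze(0), dim=-1)
    return score


# Hypothetical example: a batch of 512-dim features scored against two roles.
z = F.normalize(torch.randn(4, 512), dim=-1)
roles = {"anchor": torch.randn(512), "need": torch.randn(512)}
weights = {"anchor": 1.0, "need": 0.5}
scores = rose_score(z, roles, weights)     # shape [4], one score per sample
```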