AbstractPhil committed
Commit af30264 · verified · 1 Parent(s): 88d920d

Update README - Run 20251104_151233

Files changed (1):
  1. README.md (+29 −32)
README.md CHANGED
@@ -12,7 +12,7 @@ datasets:
 metrics:
 - accuracy
 model-index:
-- name: David-decoupled-cantor_scale
+- name: David-decoupled-deep_efficiency
 results:
 - task:
   type: image-classification
@@ -21,7 +21,7 @@ model-index:
   type: imagenet-1k
 metrics:
 - type: accuracy
-  value: 85.25
+  value: 73.58
 ---
 
 # David: Multi-Scale Feature Classifier
@@ -32,16 +32,16 @@ as class prototypes with role-weighted similarity computation (Rose Loss).
 ## Model Details
 
 ### Architecture
-- **Preset**: clip_vit_bigg14_cantor_decoupled
+- **Preset**: high_accuracy
 - **Sharing Mode**: decoupled
-- **Fusion Mode**: cantor_scale
-- **Scales**: [384, 512, 768, 1024, 1280, 1536, 1792, 2048]
-- **Feature Dim**: 1280
-- **Parameters**: 70,207,625
+- **Fusion Mode**: deep_efficiency
+- **Scales**: [256, 512, 768, 1024, 1280]
+- **Feature Dim**: 512
+- **Parameters**: 14,877,593
 
 ### Training Configuration
 - **Dataset**: AbstractPhil/imagenet-clip-features-orderly
-- **Model Variant**: clip_vit_laion_bigg14
+- **Model Variant**: clip_vit_laion_b32
 - **Epochs**: 5
 - **Batch Size**: 512
 - **Learning Rate**: 0.001
@@ -51,19 +51,16 @@ as class prototypes with role-weighted similarity computation (Rose Loss).
 ## Performance
 
 ### Best Results
-- **Validation Accuracy**: 85.25%
-- **Best Epoch**: 4
-- **Final Train Accuracy**: 91.29%
+- **Validation Accuracy**: 73.58%
+- **Best Epoch**: 0
+- **Final Train Accuracy**: 71.95%
 
 ### Per-Scale Performance
-- **Scale 384**: 83.99%
-- **Scale 512**: 84.38%
-- **Scale 768**: 84.72%
-- **Scale 1024**: 84.67%
-- **Scale 1280**: 84.46%
-- **Scale 1536**: 84.16%
-- **Scale 1792**: 84.04%
-- **Scale 2048**: 84.23%
+- **Scale 256**: 69.48%
+- **Scale 512**: 72.49%
+- **Scale 768**: 73.58%
+- **Scale 1024**: 73.70%
+- **Scale 1280**: 73.71%
 
 
 ## Usage
@@ -80,19 +77,19 @@ AbstractPhil/gated-david/
 ├── README.md                # This file
 ├── best_model.json          # Latest best model info
 ├── weights/
-│   └── david_clip_vit_bigg14_cantor_decoupled/
-│       └── 20251104_144102/
+│   └── david_high_accuracy/
+│       └── 20251104_151233/
 │           ├── MODEL_SUMMARY.txt        # 🎯 Human-readable performance summary
 │           ├── training_history.json    # 📈 Epoch-by-epoch training curve
-│           ├── best_model_acc85.25.safetensors  # ⭐ Accuracy in filename!
-│           ├── best_model_acc85.25_metadata.json
+│           ├── best_model_acc73.58.safetensors  # ⭐ Accuracy in filename!
+│           ├── best_model_acc73.58_metadata.json
 │           ├── final_model.safetensors
 │           ├── checkpoint_epoch_X_accYY.YY.safetensors
 │           ├── david_config.json
 │           └── train_config.json
 └── runs/
-    └── david_clip_vit_bigg14_cantor_decoupled/
-        └── 20251104_144102/
+    └── david_high_accuracy/
+        └── 20251104_151233/
             └── events.out.tfevents.*    # TensorBoard logs
 ```
 
@@ -105,9 +102,9 @@ from huggingface_hub import hf_hub_download
 # Browse available models in MODELS_INDEX.json first!
 
 # Specify model variant and run
-model_name = "david_clip_vit_bigg14_cantor_decoupled"
-run_id = "20251104_144102"
-accuracy = "85.25"  # From MODELS_INDEX.json
+model_name = "david_high_accuracy"
+run_id = "20251104_151233"
+accuracy = "73.58"  # From MODELS_INDEX.json
 
 # Download config
 config_path = hf_hub_download(
@@ -156,7 +153,7 @@ with torch.no_grad():
 ## Architecture Overview
 
 ### Multi-Scale Processing
-David processes inputs at multiple scales (384, 512, 768, 1024, 1280, 1536, 1792, 2048),
+David processes inputs at multiple scales (256, 512, 768, 1024, 1280),
 allowing it to capture both coarse and fine-grained features.
 
 ### Feature Geometry
@@ -174,7 +171,7 @@ score = w_anchor * sim(z, anchor) + w_need * sim(z, need) + ...
 ```
 
 ### Fusion Strategy
-**cantor_scale**: Intelligently combines predictions from multiple scales.
+**deep_efficiency**: Intelligently combines predictions from multiple scales.
 
 ## Training Details
 
@@ -198,7 +195,7 @@ score = w_anchor * sim(z, anchor) + w_need * sim(z, need) + ...
   author = {AbstractPhil},
   year = {2025},
   url = {https://huggingface.co/AbstractPhil/gated-david},
-  note = {Run ID: 20251104_144102}
+  note = {Run ID: 20251104_151233}
 }
 ```
 
@@ -213,4 +210,4 @@ Special thanks to Claude (Anthropic) for debugging assistance.
 
 ---
 
-*Generated on 2025-11-04 15:01:55*
+*Generated on 2025-11-04 15:17:06*
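The role-weighted similarity line shown in the diff (`score = w_anchor * sim(z, anchor) + w_need * sim(z, need) + ...`) can be sketched in plain NumPy. The role names, weights, and toy prototype vectors below are illustrative assumptions for this sketch, not David's actual learned prototypes or Rose Loss weights:

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rose_score(z, role_prototypes, weights):
    # Role-weighted similarity of feature z against one class's
    # role prototypes (roles like "anchor" and "need" are assumed here).
    return sum(weights[r] * cosine(z, vec) for r, vec in role_prototypes.items())

def classify(z, class_prototypes, weights):
    # Pick the class with the highest role-weighted similarity.
    scores = {c: rose_score(z, p, weights) for c, p in class_prototypes.items()}
    return max(scores, key=scores.get)

# Toy example: 2 classes, 2 roles, 3-dim features (hypothetical values).
weights = {"anchor": 0.7, "need": 0.3}
protos = {
    "cat": {"anchor": np.array([1.0, 0.0, 0.0]), "need": np.array([0.9, 0.1, 0.0])},
    "dog": {"anchor": np.array([0.0, 1.0, 0.0]), "need": np.array([0.1, 0.9, 0.0])},
}
z = np.array([0.95, 0.05, 0.0])       # feature close to the "cat" prototypes
print(classify(z, protos, weights))   # cat
```

At inference only the score order matters, so unnormalized weights work; during training, Rose Loss presumably optimizes the prototypes so that true-class scores dominate.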
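The per-scale accuracies in the diff hint at what a fusion step has to do. The snippet below is a minimal stand-in (accuracy-weighted averaging of per-scale softmax probabilities) under that assumption; the actual `deep_efficiency` fusion is a learned component of David and is not specified in this README:

```python
import numpy as np

def fuse_scales(per_scale_logits, scale_weights):
    # Weighted average of per-scale softmax probabilities.
    # A simple stand-in for David's learned deep_efficiency fusion.
    fused = np.zeros_like(next(iter(per_scale_logits.values())), dtype=float)
    total = sum(scale_weights[s] for s in per_scale_logits)
    for scale, logits in per_scale_logits.items():
        e = np.exp(logits - logits.max())        # numerically stable softmax
        fused += scale_weights[scale] * (e / e.sum())
    return fused / total                         # still sums to 1

# Toy example: 3 classes, weights taken from the per-scale accuracies above.
logits = {
    768:  np.array([2.0, 1.0, 0.0]),
    1024: np.array([1.8, 1.2, 0.0]),
    1280: np.array([2.2, 0.8, 0.0]),
}
w = {768: 0.7358, 1024: 0.7370, 1280: 0.7371}
probs = fuse_scales(logits, w)
print(int(probs.argmax()))  # 0
```

Averaging probabilities rather than logits keeps each scale's vote bounded, so one overconfident scale cannot dominate the fused prediction.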