AbstractPhil committed
Commit ccab461 · verified · 1 Parent(s): 1f45d5c

Update README - Run 20251104_154540

Files changed (1): README.md (+25 −23)
README.md CHANGED
@@ -21,7 +21,7 @@ model-index:
       type: imagenet-1k
     metrics:
     - type: accuracy
-      value: 76.50
+      value: 76.60
 ---
 
 # David: Multi-Scale Feature Classifier
@@ -32,16 +32,16 @@ as class prototypes with role-weighted similarity computation (Rose Loss).
 ## Model Details
 
 ### Architecture
-- **Preset**: clip_vit_b16_cantor
+- **Preset**: clip_vit_b16_cantor_big_window
 - **Sharing Mode**: decoupled
 - **Fusion Mode**: cantor_scale
-- **Scales**: [256, 512, 768, 1024]
+- **Scales**: [256, 512, 768, 1024, 2048, 4096]
 - **Feature Dim**: 512
-- **Parameters**: 9,057,029
+- **Parameters**: 60,452,103
 
 ### Training Configuration
 - **Dataset**: AbstractPhil/imagenet-clip-features-orderly
-- **Model Variant**: clip_vit_laion_b32
+- **Model Variant**: clip_vit_b16
 - **Epochs**: 5
 - **Batch Size**: 512
 - **Learning Rate**: 0.001
@@ -51,15 +51,17 @@ as class prototypes with role-weighted similarity computation (Rose Loss).
 ## Performance
 
 ### Best Results
-- **Validation Accuracy**: 76.50%
-- **Best Epoch**: 4
-- **Final Train Accuracy**: 82.74%
+- **Validation Accuracy**: 76.60%
+- **Best Epoch**: 0
+- **Final Train Accuracy**: 75.20%
 
 ### Per-Scale Performance
-- **Scale 256**: 71.27%
+- **Scale 256**: 71.98%
 - **Scale 512**: 74.69%
-- **Scale 768**: 75.03%
-- **Scale 1024**: 75.53%
+- **Scale 768**: 75.59%
+- **Scale 1024**: 75.88%
+- **Scale 2048**: 76.12%
+- **Scale 4096**: 75.64%
 
 
 ## Usage
@@ -76,19 +78,19 @@ AbstractPhil/gated-david/
 ├── README.md            # This file
 ├── best_model.json      # Latest best model info
 ├── weights/
-│   └── david_clip_vit_b16_cantor/
-│       └── 20251104_152832/
+│   └── clip_vit_b16_cantor_big_window/
+│       └── 20251104_154540/
 │           ├── MODEL_SUMMARY.txt       # 🎯 Human-readable performance summary
 │           ├── training_history.json   # 📈 Epoch-by-epoch training curve
-│           ├── best_model_acc76.50.safetensors   # ⭐ Accuracy in filename!
-│           ├── best_model_acc76.50_metadata.json
+│           ├── best_model_acc76.60.safetensors   # ⭐ Accuracy in filename!
+│           ├── best_model_acc76.60_metadata.json
 │           ├── final_model.safetensors
 │           ├── checkpoint_epoch_X_accYY.YY.safetensors
 │           ├── david_config.json
 │           └── train_config.json
 └── runs/
-    └── david_clip_vit_b16_cantor/
-        └── 20251104_152832/
+    └── clip_vit_b16_cantor_big_window/
+        └── 20251104_154540/
             └── events.out.tfevents.*   # TensorBoard logs
 ```
 
@@ -101,9 +103,9 @@ from huggingface_hub import hf_hub_download
 # Browse available models in MODELS_INDEX.json first!
 
 # Specify model variant and run
-model_name = "david_clip_vit_b16_cantor"
-run_id = "20251104_152832"
-accuracy = "76.50"  # From MODELS_INDEX.json
+model_name = "clip_vit_b16_cantor_big_window"
+run_id = "20251104_154540"
+accuracy = "76.60"  # From MODELS_INDEX.json
 
 # Download config
 config_path = hf_hub_download(
@@ -152,7 +154,7 @@ with torch.no_grad():
 ## Architecture Overview
 
 ### Multi-Scale Processing
-David processes inputs at multiple scales (256, 512, 768, 1024),
+David processes inputs at multiple scales (256, 512, 768, 1024, 2048, 4096),
 allowing it to capture both coarse and fine-grained features.
 
 ### Feature Geometry
@@ -194,7 +196,7 @@ score = w_anchor * sim(z, anchor) + w_need * sim(z, need) + ...
   author = {AbstractPhil},
   year = {2025},
   url = {https://huggingface.co/AbstractPhil/gated-david},
-  note = {Run ID: 20251104_152832}
+  note = {Run ID: 20251104_154540}
 }
 ```
 
@@ -209,4 +211,4 @@ Special thanks to Claude (Anthropic) for debugging assistance.
 
 ---
 
-*Generated on 2025-11-04 15:39:28*
+*Generated on 2025-11-04 15:47:54*
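A quick sanity check on the updated run: the `weights/<model_name>/<run_id>/` layout shown in the README diff makes artifact paths fully predictable from the model name, run ID, and accuracy. A minimal sketch of that convention (the `artifact_path` helper is hypothetical, not part of the repo):

```python
# Build expected artifact paths for the new run, following the
# weights/<model_name>/<run_id>/ layout in the README's repo tree.
# Values taken from this commit; verify against MODELS_INDEX.json.

model_name = "clip_vit_b16_cantor_big_window"
run_id = "20251104_154540"
accuracy = "76.60"

def artifact_path(fname: str) -> str:
    """Repo-relative path of a run artifact (illustrative helper)."""
    return f"weights/{model_name}/{run_id}/{fname}"

# The best checkpoint embeds its accuracy in the filename.
best_weights = artifact_path(f"best_model_acc{accuracy}.safetensors")
config = artifact_path("david_config.json")

print(best_weights)
# weights/clip_vit_b16_cantor_big_window/20251104_154540/best_model_acc76.60.safetensors
```

Either path could then be passed as the `filename` argument of `hf_hub_download`, as in the README's own download snippet.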