AbstractPhil committed on
Commit 41c2aad · verified · 1 Parent(s): 1263159

Update README - Run 20251104_140742

Files changed (1):
  1. README.md  +30 -27
README.md CHANGED
@@ -21,7 +21,7 @@ model-index:
       type: imagenet-1k
     metrics:
     - type: accuracy
-      value: 82.86
 ---
 
 # David: Multi-Scale Feature Classifier
@@ -32,18 +32,18 @@ as class prototypes with role-weighted similarity computation (Rose Loss).
 ## Model Details
 
 ### Architecture
-- **Preset**: clip_vit_l14_cantor
 - **Sharing Mode**: partial_shared
 - **Fusion Mode**: cantor_scale
-- **Scales**: [384, 768, 1024, 1280, 1536]
-- **Feature Dim**: 768
-- **Parameters**: 32,436,998
 
 ### Training Configuration
 - **Dataset**: AbstractPhil/imagenet-clip-features-orderly
-- **Model Variant**: clip_vit_l14
 - **Epochs**: 5
-- **Batch Size**: 1024
 - **Learning Rate**: 0.001
 - **Rose Loss Weight**: 0.1 → 0.5
 - **Cayley Loss**: False
@@ -51,16 +51,19 @@ as class prototypes with role-weighted similarity computation (Rose Loss).
 ## Performance
 
 ### Best Results
-- **Validation Accuracy**: 82.86%
-- **Best Epoch**: 3
-- **Final Train Accuracy**: 90.33%
 
 ### Per-Scale Performance
-- **Scale 384**: 82.21%
-- **Scale 768**: 82.47%
-- **Scale 1024**: 82.25%
-- **Scale 1280**: 82.17%
-- **Scale 1536**: 82.25%
 
 
 ## Usage
@@ -77,19 +80,19 @@ AbstractPhil/gated-david/
 ├── README.md                # This file
 ├── best_model.json          # Latest best model info
 ├── weights/
-│   └── david_clip_vit_l14_cantor/
-│       └── 20251104_133602/
 │           ├── MODEL_SUMMARY.txt      # 🎯 Human-readable performance summary
 │           ├── training_history.json  # 📈 Epoch-by-epoch training curve
-│           ├── best_model_acc82.86.safetensors  # ⭐ Accuracy in filename!
-│           ├── best_model_acc82.86_metadata.json
 │           ├── final_model.safetensors
 │           ├── checkpoint_epoch_X_accYY.YY.safetensors
 │           ├── david_config.json
 │           └── train_config.json
 └── runs/
-    └── david_clip_vit_l14_cantor/
-        └── 20251104_133602/
             └── events.out.tfevents.*  # TensorBoard logs
 ```
 
@@ -102,9 +105,9 @@ from huggingface_hub import hf_hub_download
 # Browse available models in MODELS_INDEX.json first!
 
 # Specify model variant and run
-model_name = "david_clip_vit_l14_cantor"
-run_id = "20251104_133602"
-accuracy = "82.86"  # From MODELS_INDEX.json
 
 # Download config
 config_path = hf_hub_download(
@@ -153,7 +156,7 @@ with torch.no_grad():
 ## Architecture Overview
 
 ### Multi-Scale Processing
-David processes inputs at multiple scales (384, 768, 1024, 1280, 1536),
 allowing it to capture both coarse and fine-grained features.
 
 ### Feature Geometry
@@ -195,7 +198,7 @@ score = w_anchor * sim(z, anchor) + w_need * sim(z, need) + ...
   author = {AbstractPhil},
   year = {2025},
   url = {https://huggingface.co/AbstractPhil/gated-david},
-  note = {Run ID: 20251104_133602}
 }
 ```
 
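The score formula quoted in the hunk header above (`score = w_anchor * sim(z, anchor) + w_need * sim(z, need) + ...`) can be sketched as a role-weighted sum of cosine similarities. This is a minimal illustration only: the role names and weights below are assumptions, and the run's actual values live in `david_config.json`.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def rose_score(z, prototypes, weights):
    # Role-weighted similarity: sum of w_role * sim(z, prototype_role),
    # matching the shape of the formula in the hunk header above.
    return sum(w * cosine(z, prototypes[role]) for role, w in weights.items())

# Toy example: z aligns perfectly with the "anchor" prototype.
z = [1.0, 0.0]
prototypes = {"anchor": [1.0, 0.0], "need": [0.0, 1.0]}
weights = {"anchor": 0.6, "need": 0.4}
score = rose_score(z, prototypes, weights)  # 0.6 * 1.0 + 0.4 * 0.0 = 0.6
```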
@@ -210,4 +213,4 @@ Special thanks to Claude (Anthropic) for debugging assistance.
 ---
 
-*Generated on 2025-11-04 13:51:54*
 
       type: imagenet-1k
     metrics:
     - type: accuracy
+      value: 83.52
 ---
 
 # David: Multi-Scale Feature Classifier
 
 ## Model Details
 
 ### Architecture
+- **Preset**: clip_vit_bigg14_cantor
 - **Sharing Mode**: partial_shared
 - **Fusion Mode**: cantor_scale
+- **Scales**: [384, 512, 768, 1024, 1280, 1536, 1792, 2048]
+- **Feature Dim**: 1280
+- **Parameters**: 82,601,993
 
 ### Training Configuration
 - **Dataset**: AbstractPhil/imagenet-clip-features-orderly
+- **Model Variant**: clip_vit_laion_bigg14
 - **Epochs**: 5
+- **Batch Size**: 512
 - **Learning Rate**: 0.001
 - **Rose Loss Weight**: 0.1 → 0.5
 - **Cayley Loss**: False
 
 ## Performance
 
 ### Best Results
+- **Validation Accuracy**: 83.52%
+- **Best Epoch**: 0
+- **Final Train Accuracy**: 82.03%
 
 ### Per-Scale Performance
+- **Scale 384**: 82.86%
+- **Scale 512**: 83.22%
+- **Scale 768**: 83.09%
+- **Scale 1024**: 83.15%
+- **Scale 1280**: 83.17%
+- **Scale 1536**: 83.18%
+- **Scale 1792**: 82.98%
+- **Scale 2048**: 82.79%
 
 
 ## Usage
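A quick sanity check on the numbers in this hunk: the fused validation accuracy (83.52%) beats every individual scale, with scale 512 the strongest single head. All figures below are taken directly from the per-scale table above.

```python
# Per-scale validation accuracies (%) from the run above.
per_scale = {
    384: 82.86, 512: 83.22, 768: 83.09, 1024: 83.15,
    1280: 83.17, 1536: 83.18, 1792: 82.98, 2048: 82.79,
}
fused = 83.52  # best validation accuracy of the fused multi-scale model

best_scale = max(per_scale, key=per_scale.get)  # scale 512 at 83.22%
gain = round(fused - per_scale[best_scale], 2)  # fusion adds ~0.3 points
```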
 
 ├── README.md                # This file
 ├── best_model.json          # Latest best model info
 ├── weights/
+│   └── david_clip_vit_bigg14_cantor/
+│       └── 20251104_140742/
 │           ├── MODEL_SUMMARY.txt      # 🎯 Human-readable performance summary
 │           ├── training_history.json  # 📈 Epoch-by-epoch training curve
+│           ├── best_model_acc83.52.safetensors  # ⭐ Accuracy in filename!
+│           ├── best_model_acc83.52_metadata.json
 │           ├── final_model.safetensors
 │           ├── checkpoint_epoch_X_accYY.YY.safetensors
 │           ├── david_config.json
 │           └── train_config.json
 └── runs/
+    └── david_clip_vit_bigg14_cantor/
+        └── 20251104_140742/
             └── events.out.tfevents.*  # TensorBoard logs
 ```
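The layout above maps each run's artifacts to predictable in-repo paths. A minimal sketch of building those paths for download — the `run_paths` helper is hypothetical, and exact file names should be confirmed against `MODELS_INDEX.json`:

```python
def run_paths(model_name: str, run_id: str, accuracy: str) -> dict:
    # Build in-repo paths following the weights/ layout shown above.
    base = f"weights/{model_name}/{run_id}"
    return {
        "config": f"{base}/david_config.json",
        "weights": f"{base}/best_model_acc{accuracy}.safetensors",
        "history": f"{base}/training_history.json",
    }

paths = run_paths("david_clip_vit_bigg14_cantor", "20251104_140742", "83.52")
# Each entry can then be fetched with huggingface_hub, e.g.:
# hf_hub_download(repo_id="AbstractPhil/gated-david", filename=paths["weights"])
```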
 
 
 # Browse available models in MODELS_INDEX.json first!
 
 # Specify model variant and run
+model_name = "david_clip_vit_bigg14_cantor"
+run_id = "20251104_140742"
+accuracy = "83.52"  # From MODELS_INDEX.json
 
 # Download config
 config_path = hf_hub_download(
 
 ## Architecture Overview
 
 ### Multi-Scale Processing
+David processes inputs at multiple scales (384, 512, 768, 1024, 1280, 1536, 1792, 2048),
 allowing it to capture both coarse and fine-grained features.
 
 ### Feature Geometry
 
   author = {AbstractPhil},
   year = {2025},
   url = {https://huggingface.co/AbstractPhil/gated-david},
+  note = {Run ID: 20251104_140742}
 }
 ```
 
 
 ---
 
+*Generated on 2025-11-04 14:16:43*