Upload README.md with huggingface_hub
README.md CHANGED
@@ -6,6 +6,7 @@ tags:
 - safetensors
 - vision-transformer
 - warm-restarts
 library_name: pytorch
 datasets:
 - cifar10
@@ -16,80 +17,29 @@ metrics:
 
 # vit-beans-v3
 
-**Geometric Deep Learning with Cantor Multihead Fusion +
-
-This repository contains
-
-This model uses **AdamW with Cosine Annealing Warm Restarts** (SGDR):
-- **Drop phase**: LR decays from 0.0003 → 1e-07 over 12 epochs
-- **Restart phase**: LR jumps back to 0.0003 to explore new regions
-- **Cycle multiplier**: each cycle is 1.75x longer than the previous one
-- **Benefits**: automatic exploration + exploitation, finds better minima, robust training
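The removed schedule (cosine decay from 3e-4 to 1e-7 over a 12-epoch first cycle, each cycle 1.75x longer) can be sketched in plain Python. Note that PyTorch's built-in `CosineAnnealingWarmRestarts` only accepts an integer `T_mult`, so a fractional 1.75x multiplier implies a custom scheduler; the function below is an illustrative sketch under that assumption, not code from this repo.

```python
import math

def sgdr_lr(epoch, base_lr=3e-4, eta_min=1e-7, t0=12.0, t_mult=1.75):
    """Cosine-annealing-with-warm-restarts LR at a (possibly fractional) epoch."""
    cycle_len, cycle_start = t0, 0.0
    while epoch >= cycle_start + cycle_len:  # advance to the cycle containing `epoch`
        cycle_start += cycle_len
        cycle_len *= t_mult                  # each cycle is 1.75x longer
    t_cur = (epoch - cycle_start) / cycle_len  # progress within cycle, in [0, 1)
    return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * t_cur))
```

With these defaults, the second cycle runs from epoch 12 to 33, matching the "Epochs 12-33" line in the removed restart schedule.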
-
-### LR Boost at Restarts (NEW!)
-This run uses **restart_lr_mult = 1.2x**:
-- Normal restart: 3e-4 → 1e-7 → restart at 3e-4
-- **Boosted restart**: 3e-4 → 1e-7 → restart at 3.6e-4 (1.2x)
-- Creates **wider exploration curves** to escape solidified local minima
-- Each restart provides a progressively stronger exploration boost
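A minimal sketch of the boosted-restart peak described in the removed section. The README documents only the first boosted restart (3e-4 → 3.6e-4 at 1.2x, whose float noise is exactly the `0.00035999999999999997` printed in the removed schedule); the compounding across later restarts is an assumption drawn from the "progressively stronger" bullet, and the function name is illustrative.

```python
def cycle_peak_lr(n_restarts, base_lr=3e-4, restart_lr_mult=1.2):
    """Peak LR at the start of a cycle, after `n_restarts` completed restarts.

    Assumes the 1.2x boost compounds on every restart; the original README
    only documents the first boosted restart (3e-4 -> ~3.6e-4).
    """
    return base_lr * restart_lr_mult ** n_restarts
```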
-
-### Restart Schedule
-```
-Epochs 0-12:   LR 0.0003 → 1e-07    (first cycle)
-Epoch 12:      LR RESTART to 0.00036
-Epochs 12-33:  LR 0.00036 → 1e-07   (longer cycle)
-...
-```
 
 ## Current Run
 
-**Latest**: `
 - **Dataset**: CIFAR100
-- **Fusion Mode**:
-- **
-- **
-- **Restart LR Mult**: 1.2x
-- **Architecture**: 12 blocks, 9 heads
-- **Simplex**: 8-simplex (9 vertices)
-
-## Architecture
-
-The Cantor Fusion architecture uses:
-- **Geometric Routing**: Pentachoron (5-simplex) structures for token routing
-- **Cantor Multihead Fusion**: Multiple fusion heads with geometric attention
-- **Beatrix Consciousness Routing**: Optional consciousness-aware token fusion
-- **SafeTensors Format**: All model weights use SafeTensors (not pickle)
-
-## Usage
-```python
-from huggingface_hub import hf_hub_download
-from safetensors.torch import load_file
-
-model_path = hf_hub_download(
-    repo_id="AbstractPhil/vit-beans-v3",
-    filename="runs/YOUR_RUN_NAME/checkpoints/best_model.safetensors"
-)
-
-state_dict = load_file(model_path)
-model.load_state_dict(state_dict)  # `model` must be constructed first
-```
-
-## Citation
-```bibtex
-@misc{vit_beans_v3,
-  author    = {AbstractPhil},
-  title     = {vit-beans-v3: Geometric Deep Learning with Warm Restarts},
-  year      = {2025},
-  publisher = {HuggingFace},
-  url       = {https://huggingface.co/AbstractPhil/vit-beans-v3}
-}
-```
 
 ---
 
 **Repository maintained by**: [@AbstractPhil](https://huggingface.co/AbstractPhil)
-
-**Latest update**: 2025-11-23 16:02:20
@@ -6,6 +6,7 @@ tags:
 - safetensors
 - vision-transformer
 - warm-restarts
+- geometric-coalescence
 library_name: pytorch
 datasets:
 - cifar10
@@ -16,80 +17,29 @@ metrics:
 
 # vit-beans-v3
 
+**Geometric Deep Learning with Cantor Multihead Fusion + Shatter-Reconstruct Training**
+
+This repository contains training runs using the Cantor fusion architecture with:
+- Pentachoron (5-simplex) structures for geometric routing
+- CosineAnnealingWarmRestarts for exploration cycles
+- GeometricCoalescenceLoss for shatter-reconstruct training
 
+### LR Boost + Geometric Coalescence
+This run uses **restart_lr_mult = 1.15x** with **GeometricCoalescenceLoss**:
+- LR boosts create aggressive exploration cycles
+- The coalescence loss provides geometric scaffolding during weight thrashing
+- Adaptive weighting: 0.1 → 0.8 during LR spikes
+- The model reconstructs from geometric first principles when patterns shatter
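The adaptive weighting described above (0.1 → 0.8 during LR spikes) could be implemented, for instance, by tying the coalescence-loss weight to the current LR's fraction of its post-restart peak. The linear ramp and the function below are assumptions for illustration; the actual GeometricCoalescenceLoss trigger is not shown in this diff.

```python
def coalescence_weight(current_lr, peak_lr, w_min=0.1, w_max=0.8):
    """Ramp the coalescence-loss weight with the normalized LR, so the
    geometric scaffolding is strongest right after an LR-boosted restart
    (an assumed mechanism, not the repo's documented one)."""
    frac = max(0.0, min(1.0, current_lr / peak_lr))  # normalized LR in [0, 1]
    return w_min + (w_max - w_min) * frac
```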
 
 ## Current Run
 
+**Latest**: `cifar100_learned_ADAMW_WarmRestart_boost1.15x_coal0.5_20251124_010657`
 - **Dataset**: CIFAR100
+- **Fusion Mode**: learned
+- **Coalescence**: λ=0.5
+- **LR Boost**: 1.15x
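The run name above encodes the run's configuration. It can be split back into fields with a small parser; the regex and field names below are inferred from this single example, so they are assumptions rather than the repo's documented naming schema.

```python
import re

def parse_run_name(name):
    """Split a run name like the 'Latest' one above into config fields
    (field names are illustrative guesses from one example)."""
    m = re.match(
        r"(?P<dataset>[a-z0-9]+)_(?P<fusion>[a-z]+)_(?P<optim>[A-Za-z]+)_"
        r"(?P<sched>[A-Za-z]+)_boost(?P<boost>[\d.]+)x_coal(?P<coal>[\d.]+)_"
        r"(?P<stamp>\d{8}_\d{6})$",
        name,
    )
    return m.groupdict() if m else None

cfg = parse_run_name(
    "cifar100_learned_ADAMW_WarmRestart_boost1.15x_coal0.5_20251124_010657"
)
```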
 
 ---
 
 **Repository maintained by**: [@AbstractPhil](https://huggingface.co/AbstractPhil)