---
license: mit
tags:
  - image-classification
  - cifar100
  - geometric-learning
  - fractal-encoding
  - in-training
  - no-attention
  - no-cross-entropy
datasets:
  - cifar100
metrics:
  - accuracy
library_name: pytorch
pipeline_tag: image-classification
model-index:
  - name: geo-beatrix-step25-feats10000
    results:
      - task:
          type: image-classification
          name: Image Classification
        dataset:
          name: CIFAR-100
          type: cifar100
        metrics:
          - type: accuracy
            value: 69.08
            name: Test Accuracy
            verified: false
---

# geo-beatrix-step25-feats10000

Geometric Basin Classification for CIFAR-100

🚧 **Training in Progress** 🚧

**Current Status:** Epoch 100/200


## Current Performance

| Metric | Value |
|---|---|
| Best Test Accuracy | 69.08% |
| Best Epoch | 190 |
| Current Train Accuracy | 72.01% |
| Current Test Accuracy | 64.35% |
| Current α (Cantor parameter) | 0.4377 |
| Total Parameters | 198,392,937 |
| Training Time | 0:56:36 |

## All Training Runs

| Timestamp | Status | Best Epoch | Test Acc | Train Acc | α |
|---|---|---|---|---|---|
| 20251010_050315 | 🔄 | 190 | 69.08% | 72.01% | 0.4377 |

## Comparison to State-of-the-Art

| Model | Accuracy | Notes |
|---|---|---|
| geo-beatrix (this model) | 69.08% | 🔄 Training |
| vit-beatrix-dualstream | 66.0% | Vision Transformer + Cross-Entropy |

✅ geo-beatrix has surpassed all listed baselines!


## Architecture

- **Base:** ResNet-style with residual blocks
- **Channels:** 64 → 128 → 256 → 512 → 1024
- **Positional Encoding:** Devil's Staircase (Cantor function, 1883)
- **PE Levels:** 25
- **PE Features/Level:** 10000
- **Classification:** Geometric Basin Compatibility (NO cross-entropy)
- **Attention Mechanisms:** NONE
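The Devil's Staircase encoding is built on the Cantor function. As a rough illustration only (`cantor` below is a hypothetical helper, not the repo's actual implementation), a finite-depth approximation can be computed from the base-3 digits of a position in [0, 1]:

```python
def cantor(x, levels=25):
    """Finite-depth approximation of the Cantor function (the Devil's
    Staircase) on [0, 1], read off the base-3 digits of x."""
    value = 0.0
    for k in range(1, levels + 1):
        x *= 3
        digit = min(int(x), 2)  # next ternary digit (clamp handles x == 1.0)
        x -= digit
        if digit == 1:
            # x fell inside a removed middle third: the staircase is flat there
            return value + 0.5 ** k
        value += (digit // 2) * 0.5 ** k
    return value
```

The staircase is constant on each removed middle third (e.g. every point in [1/3, 2/3] maps to 0.5), which is what gives the encoding its self-similar, fractal structure.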

## Training Configuration

```json
{
  "model_name": "geo-beatrix-step25-feats10000",
  "model_type": "geometric_basin_classifier",
  "num_classes": 100,
  "batch_size": 256,
  "num_epochs": 200,
  "base_learning_rate": 0.001,
  "weight_decay": 0.05,
  "warmup_epochs": 10,
  "pe_levels": 25,
  "pe_features_per_level": 10000,
  "dropout": 0.1,
  "upload_every_n_epochs": 50,
  "alphamix": {
    "enabled": true,
    "range": [
      0.3,
      0.7
    ],
    "spatial_ratio": 0.25,
    "curriculum_start": 0.0,
    "curriculum_end": 0.4
  },
  "architecture": "ResNet-style with Devil's Staircase PE",
  "loss_function": "Geometric Basin Compatibility",
  "cross_entropy": false,
  "attention_mechanisms": false,
  "timestamp": "20251010_050315"
}
```
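The `alphamix` block is not documented in this card. One plausible reading (an assumption, not the repo's confirmed behavior) is a Mixup/CutMix-style augmentation: blend image pairs with a weight drawn from `range`, and mix a spatial patch instead of interpolating pixels for roughly a `spatial_ratio` fraction of samples; the curriculum fields presumably ramp the augmentation in over the first 40% of training and are omitted here. A minimal NumPy sketch under that assumption:

```python
import numpy as np

def alphamix_batch(x1, x2, rng, lam_range=(0.3, 0.7), spatial_ratio=0.25):
    """Hypothetical reading of the `alphamix` config (not the repo's code):
    blend two images with weight lam ~ U(lam_range); with probability
    spatial_ratio, paste a patch (CutMix-style) instead of interpolating
    pixel values (Mixup-style)."""
    lam = rng.uniform(*lam_range)
    if rng.random() < spatial_ratio:
        # spatial mixing: paste a patch of x2 covering ~(1 - lam) of the area
        h, w = x1.shape[-2:]
        ph = int(h * np.sqrt(1 - lam))
        pw = int(w * np.sqrt(1 - lam))
        top = rng.integers(0, h - ph + 1)
        left = rng.integers(0, w - pw + 1)
        mixed = x1.copy()
        mixed[..., top:top + ph, left:left + pw] = x2[..., top:top + ph, left:left + pw]
    else:
        # pixel mixing: convex combination of the two images
        mixed = lam * x1 + (1 - lam) * x2
    return mixed, lam
```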

## Files Structure

```
├── model.pt                 (BEST overall model - easy access!)
├── model.safetensors        (BEST overall model - easy access!)
├── best_model_info.json     (which epoch/run this came from)
├── runs_history.json        (all training runs and their results)
├── README.md
├── weights/geo-beatrix-step25-feats10000/20251010_050315/
│   ├── model.pt                 (best from this training run)
│   ├── model.safetensors        (best from this training run)
│   ├── config.json
│   ├── training_log.txt
│   └── checkpoints/
│       ├── checkpoint_epoch_10.safetensors
│       ├── checkpoint_epoch_20.safetensors
│       └── checkpoint_epoch_30.safetensors
│       (snapshots every 50 epochs)
└── runs/geo-beatrix-step25-feats10000/20251010_050315/
    ├── events.out.tfevents.*    (TensorBoard logs)
    └── metrics.csv              (training metrics)
```

**Note:** The root `model.pt` and `model.safetensors` always contain the best model across all training runs!


## Usage

```python
import json

import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

# EASIEST: download the BEST overall model from the repo root (recommended!)
model_path = hf_hub_download(
    repo_id="AbstractPhil/geo-beatrix",
    filename="model.safetensors"
)
state_dict = load_file(model_path)
# model.load_state_dict(state_dict)  # requires instantiating the model class first

# Check which epoch/run the best model came from
info_path = hf_hub_download(
    repo_id="AbstractPhil/geo-beatrix",
    filename="best_model_info.json"
)
with open(info_path) as f:
    best_info = json.load(f)
    print(f"Best model: epoch {best_info['epoch']}, {best_info['test_accuracy']:.2f}%")

# Or download from a specific training run
model_path = hf_hub_download(
    repo_id="AbstractPhil/geo-beatrix",
    filename="weights/geo-beatrix-step25-feats10000/20251010_050315/model.safetensors"
)

# Download a specific epoch checkpoint
epoch_checkpoint = hf_hub_download(
    repo_id="AbstractPhil/geo-beatrix",
    filename="weights/geo-beatrix-step25-feats10000/20251010_050315/checkpoints/checkpoint_epoch_100.safetensors"
)
```

## Training History

### Best Checkpoint

- Epoch: 190
- Train Acc: 72.01%
- Test Acc: 69.08%
- Alpha: 0.4377
- Loss: 0.0000

### Latest 5 Epochs

- Epoch 96: Train 74.60%, Test 0.00%, α=0.4397, Loss=0.7321
- Epoch 97: Train 72.20%, Test 0.00%, α=0.4383, Loss=0.7152
- Epoch 98: Train 73.36%, Test 0.00%, α=0.4374, Loss=0.7278
- Epoch 99: Train 75.21%, Test 0.00%, α=0.4399, Loss=0.7168
- Epoch 100: Train 72.01%, Test 64.35%, α=0.4377, Loss=0.6921

(A test accuracy of 0.00% appears to correspond to epochs where evaluation was skipped.)

### Training Milestones

- 🎯 50% accuracy reached at epoch 35
- 🎯 60% accuracy reached at epoch 65
- 📊 α ≥ 0.40 reached at epoch 9
- 📊 α ≥ 0.44 (near triadic equilibrium) at epoch 94

## Innovation

- ✅ NO attention mechanisms
- ✅ NO cross-entropy loss
- ✅ Fractal positional encoding (Cantor function from 1883)
- ✅ Geometric compatibility classification
- ✅ Classic convolutions (1990s-era ConvNet lineage, with ResNet-style residual blocks)
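This card does not spell out the Geometric Basin Compatibility objective itself. Purely as an illustrative sketch (the function names, prototype formulation, and margin below are assumptions, not the repo's method), here is one generic way to train a classifier without cross-entropy: treat each class as a basin around a prototype vector and score features by squared distance.

```python
import numpy as np

def basin_loss(features, prototypes, labels, margin=1.0):
    """Toy distance-based alternative to cross-entropy: pull each feature
    toward its class prototype ("basin center") and push it away from the
    average of the other basins, with a hinge margin."""
    # squared distance from every feature (B, D) to every prototype (C, D)
    d2 = ((features[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)  # (B, C)
    pos = d2[np.arange(len(labels)), labels]          # distance to own basin
    neg = (d2.sum(axis=1) - pos) / (d2.shape[1] - 1)  # mean distance to other basins
    return np.maximum(0.0, margin + pos - neg).mean()

def basin_predict(features, prototypes):
    """Classify by nearest basin center."""
    d2 = ((features[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)
```

Prediction is simply the nearest basin, so no softmax or cross-entropy appears anywhere in the pipeline.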


- **Repository:** https://huggingface.co/AbstractPhil/geo-beatrix
- **Author:** AbstractPhil
- **Framework:** PyTorch