# BWSK EfficientNet-B0

EfficientNet-B0 (5M parameters) trained in 6 variants (3 BWSK modes × 2 experiments) on CIFAR-10 with full-convergence training and early stopping. This repository contains all model weights, configs, and training results in one place.
## What is BWSK?
BWSK is a framework that classifies every neural network operation as S-type (information-preserving, reversible, coordination-free) or K-type (information-erasing, synchronization point) using combinator logic. This classification enables reversible backpropagation through S-phases to save memory, and CALM-based parallelism analysis.
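As a toy illustration of the S/K idea (the actual framework derives types from combinator logic, not a lookup table — all names and type assignments below are hypothetical):

```python
# Hypothetical sketch of S/K classification; which ops count as S-type,
# K-type, or gray is illustrative, not the framework's real analysis.
S_TYPE = {"Conv2d", "Linear", "BatchNorm2d"}  # information-preserving
K_TYPE = {"ReLU", "MaxPool2d", "Dropout"}     # information-erasing (many-to-one)
GRAY = {"SiLU"}                               # context-dependent

def classify_op(op_name: str) -> str:
    """Return 'S', 'K', 'gray', or 'unknown' for a layer-type name."""
    if op_name in S_TYPE:
        return "S"
    if op_name in K_TYPE:
        return "K"
    if op_name in GRAY:
        return "gray"
    return "unknown"
```

In a real analysis, S-phases (maximal runs of S-type ops) can be recomputed in the backward pass instead of stored, which is where the memory-saving claim of reversible backpropagation comes from.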
## Model Overview
| Property | Value |
|---|---|
| Base Model | google/efficientnet-b0 |
| Architecture | CNN (image classification) |
| Parameters | 5M |
| Dataset | CIFAR-10 |
| Eval Metric | Accuracy |
## S/K Classification
| Type | Ratio |
|---|---|
| S-type (information-preserving) | 33.5% |
| K-type (information-erasing) | 59.6% |
| Gray (context-dependent) | 7.0% |
## Fine-tune Results
| Mode | Final Loss | Val Accuracy | Test Accuracy | Peak Memory | Time | Epochs |
|---|---|---|---|---|---|---|
| Conventional | 0.3806 | 89.0% | 89.6% | 2.8 GB | 57s | 2 |
| BWSK Analyzed | 0.2952 | 89.3% | 88.5% | 2.8 GB | 58s | 2 |
| BWSK Reversible | 0.2530 | 90.1% | 90.0% | 2.8 GB | 57s | 2 |
Memory savings (reversible vs conventional): 0.0%
## From-Scratch Results
| Mode | Final Loss | Val Accuracy | Test Accuracy | Peak Memory | Time | Epochs |
|---|---|---|---|---|---|---|
| Conventional | 0.2993 | 87.4% | 87.4% | 2.8 GB | 9.0m | 10 |
| BWSK Analyzed | 0.5080 | 79.4% | 78.8% | 2.8 GB | 4.8m | 6 |
| BWSK Reversible | 0.3454 | 88.1% | 87.1% | 2.8 GB | 8.9m | 10 |
Memory savings (reversible vs conventional): 0.0%
## Repository Structure

```
├── README.md
├── results.json
├── finetune-conventional/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
├── finetune-bwsk-analyzed/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
├── finetune-bwsk-reversible/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
├── scratch-conventional/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
├── scratch-bwsk-analyzed/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
└── scratch-bwsk-reversible/
    ├── model.safetensors
    ├── config.json
    └── training_results.json
```
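Since every variant follows the same layout, per-variant results can be collected with a short stdlib sketch (the field names inside `training_results.json` are not specified here, so the returned dicts are opaque):

```python
import json
from pathlib import Path

def collect_results(repo_root: str) -> dict:
    """Map each variant directory name to its parsed training_results.json.

    Directories without a training_results.json (and plain files like
    README.md) are skipped.
    """
    results = {}
    for variant_dir in sorted(Path(repo_root).iterdir()):
        results_file = variant_dir / "training_results.json"
        if results_file.is_file():
            results[variant_dir.name] = json.loads(results_file.read_text())
    return results
```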
## Usage

Load a specific variant (a minimal sketch; each variant's weights are a plain safetensors state dict):

```python
from safetensors.torch import load_file

# Load the fine-tuned conventional variant.
# Weights are in the finetune-conventional/ subdirectory.
state_dict = load_file("finetune-conventional/model.safetensors")
```

Instantiate a matching EfficientNet-B0 (10 output classes for CIFAR-10) and apply the weights with `model.load_state_dict(state_dict)`.
## Training Configuration
| Setting | Value |
|---|---|
| Optimizer | AdamW |
| LR (fine-tune) | 1e-03 |
| LR (from-scratch) | 5e-03 |
| LR Schedule | Cosine with warmup |
| Max Grad Norm | 1.0 |
| Mixed Precision | AMP (float16) |
| Early Stopping | Patience 3 |
| Batch Size | 32 |
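The cosine-with-warmup schedule from the table can be sketched as a standalone function (warmup length and total steps below are illustrative, not taken from the actual configs):

```python
import math

def cosine_with_warmup(step: int, warmup_steps: int,
                       total_steps: int, base_lr: float) -> float:
    """Linear warmup to base_lr, then cosine decay to zero."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

With the fine-tune setting of 1e-3, the learning rate ramps linearly to 1e-3 over the warmup, peaks at the end of warmup, and decays to zero by the final step.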
## Citation

```bibtex
@software{zervas2026bwsk,
  author = {Zervas, Tyler},
  title  = {BWSK: Combinator-Typed Neural Network Analysis},
  year   = {2026},
  url    = {https://github.com/tzervas/ai-s-combinator},
}
```
## License

MIT