# BWSK EfficientNet-B0

EfficientNet-B0 (5M parameters) trained in six variants (3 BWSK modes × 2 training regimes) on CIFAR-10, each run to convergence with early stopping.

All model weights, configs, and training results are consolidated in this single repository.

## What is BWSK?

BWSK is a framework that uses combinator logic to classify every neural-network operation as S-type (information-preserving, reversible, coordination-free) or K-type (information-erasing, a synchronization point). This classification enables reversible backpropagation through S-phases, which saves activation memory, as well as CALM-based parallelism analysis.
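As a rough illustration only (the actual BWSK analyzer uses combinator logic, not a lookup table), the S/K split can be sketched as a classification over operation types. The mapping below is assumed for the sake of the example:

```python
# Illustrative only: a hypothetical S/K/gray lookup. The real BWSK
# classification is derived from combinator logic, not hard-coded sets.
S_TYPE = {"conv", "linear", "batchnorm"}   # assumed information-preserving
K_TYPE = {"relu", "maxpool", "dropout"}    # assumed information-erasing

def classify(op: str) -> str:
    """Return 'S', 'K', or 'gray' (context-dependent) for an op name."""
    if op in S_TYPE:
        return "S"
    if op in K_TYPE:
        return "K"
    return "gray"
```

Operations that fall into neither set are "gray": their S/K behavior depends on context, which is why the ratios in the table below do not split cleanly into two classes.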

## Model Overview

| Property | Value |
|---|---|
| Base Model | `google/efficientnet-b0` |
| Architecture | CNN (image classification) |
| Parameters | 5M |
| Dataset | CIFAR-10 |
| Eval Metric | Accuracy |

## S/K Classification

| Type | Ratio |
|---|---|
| S-type (information-preserving) | 33.5% |
| K-type (information-erasing) | 59.6% |
| Gray (context-dependent) | 7.0% |

## Fine-tune Results

| Mode | Final Loss | Val Accuracy | Test Accuracy | Peak Memory | Time | Epochs |
|---|---|---|---|---|---|---|
| Conventional | 0.3806 | 89.0% | 89.6% | 2.8 GB | 57s | 2 |
| BWSK Analyzed | 0.2952 | 89.3% | 88.5% | 2.8 GB | 58s | 2 |
| BWSK Reversible | 0.2530 | 90.1% | 90.0% | 2.8 GB | 57s | 2 |

Memory savings (reversible vs conventional): 0.0%

## From-Scratch Results

| Mode | Final Loss | Val Accuracy | Test Accuracy | Peak Memory | Time | Epochs |
|---|---|---|---|---|---|---|
| Conventional | 0.2993 | 87.4% | 87.4% | 2.8 GB | 9.0m | 10 |
| BWSK Analyzed | 0.5080 | 79.4% | 78.8% | 2.8 GB | 4.8m | 6 |
| BWSK Reversible | 0.3454 | 88.1% | 87.1% | 2.8 GB | 8.9m | 10 |

Memory savings (reversible vs conventional): 0.0%

## Repository Structure

```text
β”œβ”€β”€ README.md
β”œβ”€β”€ results.json
β”œβ”€β”€ finetune-conventional/
β”‚   β”œβ”€β”€ model.safetensors
β”‚   β”œβ”€β”€ config.json
β”‚   └── training_results.json
β”œβ”€β”€ finetune-bwsk-analyzed/
β”‚   β”œβ”€β”€ model.safetensors
β”‚   β”œβ”€β”€ config.json
β”‚   └── training_results.json
β”œβ”€β”€ finetune-bwsk-reversible/
β”‚   β”œβ”€β”€ model.safetensors
β”‚   β”œβ”€β”€ config.json
β”‚   └── training_results.json
β”œβ”€β”€ scratch-conventional/
β”‚   β”œβ”€β”€ model.safetensors
β”‚   β”œβ”€β”€ config.json
β”‚   └── training_results.json
β”œβ”€β”€ scratch-bwsk-analyzed/
β”‚   β”œβ”€β”€ model.safetensors
β”‚   β”œβ”€β”€ config.json
β”‚   └── training_results.json
└── scratch-bwsk-reversible/
    β”œβ”€β”€ model.safetensors
    β”œβ”€β”€ config.json
    └── training_results.json
```

## Usage

Load a specific variant:

```python
from transformers import EfficientNetForImageClassification

# Load the fine-tuned conventional variant; its weights live in the
# finetune-conventional/ subdirectory of this repo.
model = EfficientNetForImageClassification.from_pretrained(
    "tzervas/bwsk-efficientnet-b0", subfolder="finetune-conventional"
)
```

## Training Configuration

| Setting | Value |
|---|---|
| Optimizer | AdamW |
| LR (fine-tune) | 1e-03 |
| LR (from-scratch) | 5e-03 |
| LR Schedule | Cosine with warmup |
| Max Grad Norm | 1.0 |
| Mixed Precision | AMP (float16) |
| Early Stopping Patience | 3 |
| Batch Size | 32 |
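The "cosine with warmup" schedule above can be sketched as a pure function of the step index. The warmup length (100 steps) and total step count (1,000) below are assumed for illustration; they are not taken from the training configs.

```python
import math

def lr_at(step: int, base_lr: float = 1e-3,
          warmup: int = 100, total: int = 1000) -> float:
    """Linear warmup to base_lr, then cosine decay toward zero."""
    if step < warmup:
        return base_lr * (step + 1) / warmup          # linear warmup
    progress = (step - warmup) / (total - warmup)     # 0.0 -> 1.0
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

The LR peaks at `base_lr` exactly when warmup ends and decays to near zero by the final step; the fine-tune and from-scratch runs differ only in `base_lr` (1e-03 vs 5e-03).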


## Citation

```bibtex
@software{zervas2026bwsk,
  author = {Zervas, Tyler},
  title = {BWSK: Combinator-Typed Neural Network Analysis},
  year = {2026},
  url = {https://github.com/tzervas/ai-s-combinator},
}
```

## License

MIT
