# BWSK GPT-2 Small
GPT-2 Small (124M parameters) trained in six variants (3 BWSK modes × 2 experiments) on WikiText-2, with full-convergence training and early stopping. This repository consolidates all model weights, configs, and training results.
## What is BWSK?
BWSK is a framework that classifies every neural network operation as either S-type (information-preserving, reversible, coordination-free) or K-type (information-erasing, a synchronization point) using combinator logic. This classification enables reversible backpropagation through S-phases, which saves activation memory, as well as CALM-based parallelism analysis.
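As a rough illustration of the idea (hypothetical names throughout; this is not BWSK's actual API, and the example op assignments are guesses, not its published mapping), a classifier over op types might look like:

```python
from enum import Enum

class OpType(Enum):
    S = "information-preserving"   # reversible, coordination-free
    K = "information-erasing"      # synchronization point

# Hypothetical sketch: ops whose inputs can be recovered from their outputs
# would be S-type; ops that discard information would be K-type. These
# assignments are illustrative guesses only.
def classify_op(op_name: str) -> OpType:
    s_ops = {"residual_add", "rotation", "invertible_linear"}
    k_ops = {"softmax", "dropout", "max_pool"}
    if op_name in s_ops:
        return OpType.S
    if op_name in k_ops:
        return OpType.K
    raise ValueError(f"unclassified op: {op_name}")
```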
## Model Overview
| Property | Value |
|---|---|
| Base Model | openai-community/gpt2 |
| Architecture | Transformer (causal_lm) |
| Parameters | 124M |
| Dataset | WikiText-2 |
| Eval Metric | Perplexity |
## S/K Classification
| Type | Ratio |
|---|---|
| S-type (information-preserving) | 60.8% |
| K-type (information-erasing) | 39.2% |
## Fine-tune Results
| Mode | Final Loss | Val Perplexity | Test Perplexity | Peak Memory | Time | Epochs |
|---|---|---|---|---|---|---|
| Conventional | 2.8851 | 18.59 | 18.07 | 5.7 GB | 8.6 min | 5 |
| BWSK Analyzed | 3.0312 | 18.59 | 18.09 | 5.7 GB | 8.7 min | 5 |
| BWSK Reversible | 2.8376 | 18.58 | 18.09 | 3.6 GB | 10.5 min | 5 |
Memory savings (reversible vs conventional): 36.8%
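The savings figure follows from the peak-memory column, and perplexity is exp of the mean cross-entropy loss; a quick sanity check in Python (the eval perplexities come from eval loss, so they need not equal exp of the final training loss):

```python
import math

# Memory savings: reversible vs. conventional peak memory (table values above)
print(f"{(5.7 - 3.6) / 5.7:.1%}")  # 36.8%

# Perplexity = exp(mean cross-entropy loss); exp(final train loss) lands near,
# but not exactly at, the reported ~18.6 validation perplexity.
print(math.exp(2.8851))  # ~17.9
```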
## From-Scratch Results
| Mode | Final Loss | Val Perplexity | Test Perplexity | Peak Memory | Time | Epochs |
|---|---|---|---|---|---|---|
| Conventional | 4.9907 | 291.30 | 296.78 | 5.7 GB | 8.7 min | 5 |
| BWSK Analyzed | 4.9437 | 289.78 | 292.92 | 5.7 GB | 8.8 min | 5 |
| BWSK Reversible | 4.7981 | 293.44 | 299.27 | 3.6 GB | 10.3 min | 5 |
Memory savings (reversible vs conventional): 36.8%
## Repository Structure
```
├── README.md
├── results.json
├── finetune-conventional/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
├── finetune-bwsk-analyzed/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
├── finetune-bwsk-reversible/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
├── scratch-conventional/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
├── scratch-bwsk-analyzed/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
└── scratch-bwsk-reversible/
    ├── model.safetensors
    ├── config.json
    └── training_results.json
```
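To inspect per-run metrics without cloning the repo, the per-variant `training_results.json` files can be fetched with `huggingface_hub` (a sketch assuming the layout above; the file's exact keys depend on its contents):

```python
import json
from huggingface_hub import hf_hub_download

# Download one variant's training results from the Hub
path = hf_hub_download(
    repo_id="tzervas/bwsk-gpt2-small",
    filename="finetune-bwsk-reversible/training_results.json",
)
with open(path) as f:
    results = json.load(f)
print(results)
```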
## Usage
Load a specific variant:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load fine-tuned conventional variant
model = AutoModelForCausalLM.from_pretrained(
    "tzervas/bwsk-gpt2-small", subfolder="finetune-conventional"
)
tokenizer = AutoTokenizer.from_pretrained(
    "tzervas/bwsk-gpt2-small", subfolder="finetune-conventional"
)

# Load from-scratch BWSK reversible variant
model = AutoModelForCausalLM.from_pretrained(
    "tzervas/bwsk-gpt2-small", subfolder="scratch-bwsk-reversible"
)
```
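Continuing from the loads above, a quick generation smoke test (standard transformers `generate` API; the prompt is arbitrary):

```python
inputs = tokenizer("The history of", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```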
## Training Configuration
| Setting | Value |
|---|---|
| Optimizer | AdamW |
| LR (fine-tune) | 5e-05 |
| LR (from-scratch) | 3e-04 |
| LR Schedule | Cosine with warmup |
| Max Grad Norm | 1.0 |
| Mixed Precision | AMP (float16) |
| Early Stopping | Patience 3 |
| Batch Size | 4 |
| Sequence Length | 512 |
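A minimal sketch of how these settings map onto a standard PyTorch loop, assuming a `loader` yielding tokenized batches (batch size 4, sequence length 512) is already defined. This illustrates the listed hyperparameters only; it is not the repo's actual training script, and the warmup step count is assumed:

```python
import torch
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, get_cosine_schedule_with_warmup

model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2").cuda()
optimizer = AdamW(model.parameters(), lr=5e-5)  # 3e-4 for the from-scratch runs
num_epochs, warmup = 5, 100                     # warmup step count is assumed
total_steps = num_epochs * len(loader)
scheduler = get_cosine_schedule_with_warmup(optimizer, warmup, total_steps)
scaler = torch.cuda.amp.GradScaler()            # AMP (float16)

for epoch in range(num_epochs):                 # early stopping (patience 3) would
    for batch in loader:                        # track val perplexity across epochs
        optimizer.zero_grad()
        with torch.autocast(device_type="cuda", dtype=torch.float16):
            loss = model(**batch).loss
        scaler.scale(loss).backward()
        scaler.unscale_(optimizer)              # unscale before clipping
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
        scaler.step(optimizer)
        scaler.update()
        scheduler.step()
```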
## Citation

```bibtex
@software{zervas2026bwsk,
  author = {Zervas, Tyler},
  title  = {BWSK: Combinator-Typed Neural Network Analysis},
  year   = {2026},
  url    = {https://github.com/tzervas/ai-s-combinator},
}
```
## License
MIT