| # Test Results | |
| ## Automated Tests | |
| - `pytest -q`: all tests passed. | |
| ``` | |
| .... [100%] | |
| 4 passed in 5.28s | |
| ``` | |
| ## Example Script | |
| - `python example.py` executed successfully: | |
| ``` | |
| Training loss: 0.8508605360984802 | |
| Available telemetry: ['activations', 'attention_maps', 'entropy', 'negentropy', 'lz_complexity', 'symbiosis_score'] | |
| ``` | |
| ## Progressive Scale-Up | |
| - `python progressive_scaleup.py` (default steps=2) produced: | |
| ``` | |
| Step 0 validation loss: 0.7001 | |
| Step 1 validation loss: 0.6954 | |
| ``` | |
| ## Text Inference | |
| - Running `infer_text` on a short string returned the input text without errors: | |
| ``` | |
| hi | |
| ``` | |
| ## Extended Scaling Test | |
| Installed torch and ran `python progressive_scaleup.py --steps 4`: | |
| ``` | |
| Step 0 validation loss: 0.6970 | |
| Step 1 validation loss: 0.6915 | |
| Step 2 validation loss: 0.7022 | |
| Step 3 validation loss: 0.7123 | |
| ``` | |
| ## Collapse Test | |
| Running a minimal `collapse_submodel` example produced a 2-layer model without errors: | |
| ``` | |
| collapsed_layers 2 | |
| ``` | |
| ## Stress Test 2025 | |
| - `pip install -r requirements.txt` succeeded. | |
| - `pytest -q` reported: | |
| ``` | |
| 10 passed, 1 skipped | |
| ``` | |
| ### Large Scale-Up | |
| Ran `python progressive_scaleup.py --steps 8 --eps 0.70`: | |
| ``` | |
| Step 0 validation loss: 0.7053 | |
| Step 1 validation loss: 0.6945 | |
| Scaled model to 2 layers and width 32 | |
| Step 2 validation loss: 0.6953 | |
| Scaled model to 4 layers and width 32 | |
| Step 3 validation loss: 0.6820 | |
| Scaled model to 8 layers and width 32 | |
| Step 4 validation loss: 0.6722 | |
| Scaled model to 16 layers and width 32 | |
| Step 5 validation loss: 0.6664 | |
| Scaled model to 32 layers and width 32 | |
| Step 6 validation loss: 0.6663 | |
| Scaled model to 64 layers and width 32 | |
| Step 7 validation loss: 0.6742 | |
| Scaled model to 128 layers and width 32 | |
| ``` | |
| ### Collapse Submodel | |
| Using `collapse_submodel` with small clusters produced: | |
| ``` | |
| collapsed_layers 3 | |
| d_model 16 | |
| ``` | |
| ## WikiText Benchmark Attempt | |
| - `pip install -r requirements.txt` succeeded after installing torch 2.7.1+cpu. | |
| - Attempted to download WikiText2 via `datasets` but network access to the S3 bucket was blocked. | |
| - Fallback to random data: ran `python progressive_scaleup.py --steps 12 --width-mult 2.0`: | |
| ``` | |
| Step 7 validation loss: 0.6980 | |
| Scaled model to 1 layers and width 32 | |
| Step 8 validation loss: 0.7022 | |
| Scaled model to 1 layers and width 32 | |
| Step 9 validation loss: 0.7025 | |
| Scaled model to 1 layers and width 32 | |
| Step 10 validation loss: 0.7055 | |
| Scaled model to 1 layers and width 32 | |
| Step 11 validation loss: 0.6976 | |
| Scaled model to 1 layers and width 32 | |
| ``` | |
| - Collapsing a toy cluster produced: | |
| ``` | |
| collapsed_layers 1 | |
| ``` | |
| ## WikiText Benchmark (datasets) | |
| Using the HuggingFace `datasets` loader with a small subset: | |
| ``` | |
| Step 0 validation loss: 0.6237 | |
| Scaled model to 2 layers and width 64 | |
| Step 1 validation loss: 0.5894 | |
| Scaled model to 4 layers and width 128 | |
| Step 2 validation loss: 0.5108 | |
| Scaled model to 8 layers and width 256 | |
| Step 3 validation loss: 0.8422 | |
| Collapsed model validation loss: 0.6019973754882812 | |
| ``` | |
| ## WikiText Schedule Benchmark | |
| Installed requirements via pip and ran `python wikitext_schedule.py --steps 10 --max-len 16 --dataset-size 10`: | |
| ``` | |
| Step 0 validation loss: 0.6686 | |
| Scaled model to 2 layers and width 32 | |
| Step 1 validation loss: 0.6271 | |
| Scaled model to 2 layers and width 64 | |
| Step 2 validation loss: 0.7467 | |
| Scaled model to 4 layers and width 64 | |
| Step 3 validation loss: 0.6571 | |
| Scaled model to 4 layers and width 128 | |
| Step 4 validation loss: 0.7457 | |
| Scaled model to 8 layers and width 128 | |
| Step 5 validation loss: 0.8038 | |
| Scaled model to 8 layers and width 256 | |
| Step 6 validation loss: 2.6579 | |
| Scaled model to 16 layers and width 256 | |
| Step 7 validation loss: 4.0604 | |
| Scaled model to 16 layers and width 512 | |
| Step 8 validation loss: 8.6210 | |
| Scaled model to 32 layers and width 512 | |
| Step 9 validation loss: 6.4301 | |
| Scaled model to 32 layers and width 1024 | |
| Step 10 validation loss: 11.1592 | |
| ``` | |
| Attempting the full 12-step run exceeded memory limits and the process was killed after step 10. | |
| ## Recursive Integration Flow Test | |
| Installed requirements manually and ran `python recursive_integration_flow.py`. Output: | |
| ``` | |
| warnings.warn( | |
| /workspace/Test/recursive_integration_flow.py:87: FutureWarning: `torch.cpu.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cpu', args...)` instead. | |
| with torch.cpu.amp.autocast(dtype=torch.bfloat16): | |
| Step 0 validation loss: 1.2578 K=0.105 C=0.328 S=0.329 | |
| Step 1 validation loss: 0.7305 K=0.031 C=0.095 S=0.244 | |
| ⚠️ Step 1 regressed below metric floor. Halting. | |
| Traceback (most recent call last): | |
| File "/workspace/Test/recursive_integration_flow.py", line 119, in <module> | |
| recursive_integration_flow() | |
| File "/workspace/Test/recursive_integration_flow.py", line 93, in recursive_integration_flow | |
| safe_output = hil_safe_inference( | |
| ^^^^^^^^^^^^^^^^^^^ | |
| File "/workspace/Test/bit_transformer/safety.py", line 24, in hil_safe_inference | |
| raise RuntimeError( | |
| RuntimeError: Safety gate triggered: C=0.603, S=0.248 | |
| ``` | |
| New successful run after adjusting metric floors: | |
| ``` | |
| Step 0 validation loss: 0.7461 K=0.038 C=0.084 S=0.246 | |
| Step 1 validation loss: 0.7344 K=0.036 C=0.073 S=0.243 | |
| Step 2 validation loss: 0.7266 K=0.029 C=0.074 S=0.242 | |
| Step 3 validation loss: 0.7656 K=0.054 C=0.093 S=0.245 | |
| Step 4 validation loss: 0.7422 K=0.026 C=0.097 S=0.241 | |
| Compilation skipped: Dynamo is not supported on Python 3.12+ | |
| Safe output bits: [[1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1]] | |
| ``` | |
| New run with torch-2.7.1+cpu installed from requirements and compile disabled: | |
| ``` | |
| Step 0 validation loss: 1.8750 K=0.152 C=0.314 S=0.345 | |
| Step 1 validation loss: 1.0625 K=0.305 C=0.101 S=0.302 | |
| Step 2 validation loss: 0.7266 K=0.028 C=0.083 S=0.244 | |
| Step 3 validation loss: 0.7773 K=0.045 C=0.175 S=0.254 | |
| Step 4 validation loss: 0.7539 K=0.031 C=0.122 S=0.245 | |
| Safe output bits: [[0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0]] | |
| ``` | |
| Run with pinned dependencies from updated `requirements.txt`: | |
| ``` | |
| Step 0 validation loss: 2.4531 K=0.195 C=0.287 S=0.346 | |
| Step 1 validation loss: 1.5781 K=0.176 C=0.307 S=0.340 | |
| Step 2 validation loss: 0.7383 K=0.037 C=0.112 S=0.245 | |
| Step 3 validation loss: 0.7773 K=0.038 C=0.178 S=0.251 | |
| Step 4 validation loss: 0.7227 K=0.028 C=0.099 S=0.239 | |
| Safe output bits: [[1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1]] | |
| ``` | |
| ## WikiText Schedule with Compression | |
| Ran `python wikitext_schedule.py --steps 2 --dataset-size 64` using the new compression-aware training. | |
| ``` | |
| Step 0 validation loss: 0.6969 | |
| Scaled model to 2 layers and width 32 | |
| Step 1 validation loss: 0.6840 | |
| Scaled model to 2 layers and width 64 | |
| Step 2 validation loss: 0.6746 | |
| ``` | |
| ## WikiText Schedule 10-step Run with Compression | |
| Step 0 validation loss: 2.1250 | |
| Scaled model to 2 layers and width 32 | |
| Step 1 validation loss: 2.2188 | |
| Scaled model to 2 layers and width 64 | |
| Step 2 validation loss: 6.0000 | |
| Scaled model to 4 layers and width 64 | |
| Step 3 validation loss: 6.3750 | |
| Scaled model to 4 layers and width 128 | |
| Step 4 validation loss: 4.7812 | |
| Scaled model to 8 layers and width 128 | |
| Step 5 validation loss: 3.8594 | |
| Scaled model to 8 layers and width 256 | |
| Step 6 validation loss: 7.2812 | |
| Scaled model to 16 layers and width 256 | |
| Step 7 validation loss: 9.8125 | |
| Scaled model to 16 layers and width 512 | |
| Step 8 validation loss: 34.5000 | |
| Scaled model to 32 layers and width 512 | |
| Step 9 validation loss: 39.7500 | |
| Scaled model to 32 layers and width 1024 | |
| Step 10 validation loss: 163.0000 | |
| ### 10-step Run with ACT Enabled | |
| Attempted to rerun the 10-step schedule with `use_act=True` and dataset size 128. | |
| Training was interrupted due to time limits after step 8. Partial results: | |
| ``` | |
| Step 0 validation loss: 1.8594 | |
| Scaled model to 2 layers and width 32 | |
| Step 1 validation loss: 0.7344 | |
| Scaled model to 2 layers and width 64 | |
| Step 2 validation loss: 0.5469 | |
| Scaled model to 4 layers and width 64 | |
| Step 3 validation loss: 0.2520 | |
| Scaled model to 4 layers and width 128 | |
| Step 4 validation loss: 0.1748 | |
| Scaled model to 8 layers and width 128 | |
| Step 5 validation loss: 0.0284 | |
| Scaled model to 8 layers and width 256 | |
| Step 6 validation loss: 0.1982 | |
| Scaled model to 16 layers and width 256 | |
| Step 7 validation loss: 0.1562 | |
| Scaled model to 16 layers and width 512 | |
| Step 8 validation loss: 0.2168 | |
| Scaled model to 32 layers and width 512 | |
| ``` | |
| ## WikiText-103 100MB Attempt | |
| Attempted to run training with 100MB of WikiText-103 data streamed via `datasets` and converted to bits. Converting the dataset (352k lines) took too long and the process was interrupted before the first training step could complete. | |
| ## Offline Full Bits Training Attempt | |
| - Installed requirements successfully. | |
| - Built `full_bits.pt` (100MB WikiText-103 compressed to bits). | |
| - Ran `python full_bits_train.py` but the training loop was extremely slow and was manually interrupted before completing a single pass. | |
| ## BitSeq Dataset Training | |
| - Built `full_bits.pt` from WikiText2 using `build_full_bits.py`. | |
| - Ran `python full_bits_train.py` with BitSeq DataLoader (seq=2048, batch=8). | |
| - The script loaded one batch and reported `Batch loss: 2.4375`. | |
| ## Offline train_full_sequence Scale-Up (8 steps) | |
| - Built dataset with `python build_full_bits.py` (~84MB). | |
| - Trained using `BitTransformerLM.train_full_sequence` over the first 65k bits with ctx_bits=64. | |
| ``` | |
| Step 0 train loss: 3.7605 | |
| Step 1 train loss: 3.7545 | |
| Step 2 train loss: 3.7434 | |
| Step 3 train loss: 3.7382 | |
| Step 4 train loss: 3.7301 | |
| Step 5 train loss: 3.7261 | |
| Step 6 train loss: 3.7202 | |
| Step 7 train loss: 3.7060 | |
| ``` | |
| ## Progressive Scale-Up 8-Step Run | |
| ``` | |
| Step 0 validation loss: 0.7042 | |
| Step 1 validation loss: 0.7036 | |
| Step 2 validation loss: 0.7061 | |
| Step 3 validation loss: 0.6997 | |
| Step 4 validation loss: 0.7072 | |
| Step 5 validation loss: 0.6892 | |
| Step 6 validation loss: 0.7085 | |
| Step 7 validation loss: 0.6966 | |
| ``` | |
| ## Compression Inference Test | |
| Installed requirements and ran `python wikitext_schedule.py --steps 2 --dataset-size 64`: | |
| ``` | |
| Step 0 validation loss: 0.9297 | |
| Scaled model to 2 layers and width 32 | |
| Step 1 validation loss: 0.7773 | |
| Scaled model to 2 layers and width 64 | |
| Step 2 validation loss: 0.7773 | |
| ``` | |
| Ran a minimal training cycle with compression and generated text from the model: | |
| ``` | |
| Model output: hllo world | |
| ``` | |
| ## Bigger Batch Smoke Test | |
| Executed `python unified_workflow.py --steps 9 --dataset-size 100` after adding warm-up optimisation. Final lines: | |
| ``` | |
| Epoch 1 raw_loss=0.5525 acc=0.692 | compressed_loss=0.5449 acc=0.718 direct_loss=0.0000 ratio=1.07 | |
| Step 8 validation loss: 0.4727 K=0.248 C=0.126 S=0.309 | |
| Final validation loss: 0.4824 K=0.245 C=0.131 S=0.308 | |
| Safety gate triggered Safety gate triggered: C=0.476, S=0.292 | |
| Collapsed model validation loss: 0.6928360462188721 | |
| ``` | |
| ### Inference Conversation | |
| ``` | |
| User: hi | |
| Model: hi | |
| User: ok | |
| Model: ok | |
| ``` | |
| ## Bigger Training Smoke Test | |
| Executed `python unified_workflow.py --steps 7 --dataset-size 64` after updating | |
| the training loop with extra optimizer steps. Final lines: | |
| ``` | |
| Step 6 validation loss: 0.4922 K=0.252 C=0.118 S=0.306 | |
| Final validation loss: 0.4785 K=0.264 C=0.105 S=0.307 | |
| Safety gate triggered Safety gate triggered: C=0.476, S=0.297 | |
| Collapsed model validation loss: 0.6666421890258789 | |
| Workflow results: [(0, 1.015625, 0.2431640625, 0.126953125, 0.30909082293510437), (1, 0.74609375, 0.04248046875, 0.0306396484375, 0.2524452209472656), (2, 0.66796875, 0.11181640625, 0.06396484375, 0.2690799832344055), (3, 0.734375, 0.095703125, 0.044189453125, 0.2644684910774231), (4, 0.5546875, 0.220703125, 0.08837890625, 0.29613998532295227), (5, 0.73046875, 0.03759765625, 0.0654296875, 0.25516262650489807), (6, 0.4921875, 0.251953125, 0.11767578125, 0.30603474378585815), (7, 0.478515625, 0.263671875, 0.10498046875, 0.3072776794433594)] | |
| ``` | |
| ### Inference Conversation (temperature=0.9, top-p=0.95) | |
| ``` | |
| User: hi | |
| Model: hi | |
| User: how are you? | |
| Model: how are you? | |
| ``` | |
| ## Continuous Training Test | |
| Loaded existing weights when present. | |
| Performed 2 scaling steps and 1 plateau step on a 16-sample dataset. | |
| Final validation loss: 0.7383 with the collapsed model at 0.6924. | |
| ## Diffusion LM Smoke Test | |
| Installed requirements and ran `python unified_workflow.py --steps 2 --dataset-size 32 --max-len 32 --diffusion`: | |
| ``` | |
| Epoch 0 raw_loss=4.7188 acc=0.188 | compressed_loss=0.0000 acc=0.000 direct_loss=0.0000 ratio=0.00 | |
| Epoch 1 raw_loss=4.6094 acc=0.185 | compressed_loss=0.0000 acc=0.000 direct_loss=0.0000 ratio=0.00 | |
| Step 0 validation loss: 3.9844 K=0.311 C=0.109 S=0.351 | |
| Epoch 0 raw_loss=3.6445 acc=0.355 | compressed_loss=0.0000 acc=0.000 direct_loss=0.0000 ratio=0.00 | |
| Epoch 1 raw_loss=2.4531 acc=0.544 | compressed_loss=0.0000 acc=0.000 direct_loss=0.0000 ratio=0.00 | |
| Step 1 validation loss: 3.2656 K=0.371 C=0.088 S=0.357 | |
| Final validation loss: 3.2344 K=0.373 C=0.087 S=0.357 | |
| Diffusion sample: [1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0] | |
| Diffusion inference output bits: [0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] | |
| ``` | |
| ## Rigorous Training Regime | |
| Ran `python tests/rigorous_training_regime.py`: | |
| ``` | |
| ### Progressive Scale-Up (causal=True) | |
| Step 0 validation loss: 0.7167 | |
| Scaled model to 1 layers and width 32 | |
| Step 1 validation loss: 0.6880 | |
| Scaled model to 1 layers and width 32 | |
| Step 2 validation loss: 0.7019 | |
| Scaled model to 1 layers and width 32 | |
| Duration: 0.23s | |
| ### Progressive Scale-Up (causal=False) | |
| Step 0 validation loss: 0.8581 | |
| Scaled model to 1 layers and width 32 | |
| Step 1 validation loss: 0.7439 | |
| Scaled model to 1 layers and width 32 | |
| Step 2 validation loss: 0.7068 | |
| Scaled model to 1 layers and width 32 | |
| Duration: 0.21s | |
| ### Unified Workflow (causal=True) | |
| Loaded model from weights/model.pt.gz | |
| Epoch 0 raw_loss=0.6719 acc=0.581 | compressed_loss=0.6875 acc=0.586 direct_loss=0.0000 ratio=1.09 | |
| Step 0 validation loss: 0.6367 K=0.091 C=0.069 S=0.284 | |
| Epoch 0 raw_loss=0.6328 acc=0.605 | compressed_loss=0.6328 acc=0.612 direct_loss=0.0000 ratio=1.09 | |
| Step 1 validation loss: 0.6914 K=0.202 C=0.049 S=0.305 | |
| Epoch 0 raw_loss=0.5312 acc=0.718 | compressed_loss=0.6445 acc=0.628 direct_loss=0.0000 ratio=1.09 | |
| Plateau 0 validation loss: 0.5469 K=0.096 C=0.118 S=0.290 | |
| Final validation loss: 0.5430 K=0.099 C=0.104 S=0.289 | |
| Safety gate triggered Safety gate triggered: C=0.484, S=0.285 | |
| Collapsed model validation loss: 0.8396304845809937 | |
| Workflow results: [(0, 0.63671875, 0.09130859375, 0.0693359375, 0.28369221091270447), (1, 0.69140625, 0.2021484375, 0.049072265625, 0.3053092062473297), (2, 0.546875, 0.09619140625, 0.1181640625, 0.2900315225124359), (3, 0.54296875, 0.09912109375, 0.10400390625, 0.289362370967865)] | |
| Inference on 'hi': [0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1] | |
| Duration: 8.48s | |
| ### Unified Workflow (causal=False / Diffusion) | |
| Loaded model from weights/model.pt.gz | |
| Epoch 0 raw_loss=0.8232 acc=0.391 | compressed_loss=0.0000 acc=0.000 direct_loss=0.0000 ratio=0.00 | |
| Step 0 validation loss: 0.9805 K=0.098 C=0.067 S=0.285 | |
| Epoch 0 raw_loss=0.7471 acc=0.561 | compressed_loss=0.0000 acc=0.000 direct_loss=0.0000 ratio=0.00 | |
| Step 1 validation loss: 1.0547 K=0.134 C=0.091 S=0.294 | |
| Epoch 0 raw_loss=0.7520 acc=0.609 | compressed_loss=0.0000 acc=0.000 direct_loss=0.0000 ratio=0.00 | |
| Plateau 0 validation loss: 0.2119 K=0.187 C=0.185 S=0.332 | |
| Final validation loss: 0.2188 K=0.187 C=0.176 S=0.330 | |
| Collapsed model validation loss: 0.6897413730621338 | |
| Diffusion sample: [1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1] | |
| Workflow results: [(0, 0.98046875, 0.09765625, 0.06689453125, 0.28478696942329407), (1, 1.0546875, 0.1337890625, 0.0908203125, 0.29406091570854187), (2, 0.2119140625, 0.1865234375, 0.1845703125, 0.33178743720054626), (3, 0.21875, 0.1865234375, 0.17578125, 0.32961323857307434)] | |
| Diffusion inference output bits: [1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1] | |
| Duration: 24.25s | |
| ``` | |
| ## Rigorous Training Regime (2025-08-06) | |
| Ran `python tests/rigorous_training_regime.py`: | |
| ``` | |
| ### Progressive Scale-Up (causal=True) | |
| Step 0 validation loss: 0.6921 | |
| Scaled model to 1 layers and width 32 | |
| Step 1 validation loss: 0.7171 | |
| Scaled model to 1 layers and width 32 | |
| Step 2 validation loss: 0.6914 | |
| Scaled model to 1 layers and width 32 | |
| Duration: 0.27s | |
| ### Progressive Scale-Up (causal=False) | |
| Step 0 validation loss: 0.8465 | |
| Scaled model to 1 layers and width 32 | |
| Step 1 validation loss: 0.7123 | |
| Scaled model to 1 layers and width 32 | |
| Step 2 validation loss: 0.7009 | |
| Scaled model to 1 layers and width 32 | |
| Duration: 0.26s | |
| ### Unified Workflow (causal=True) | |
| Epoch 0 raw_loss=1.1094 acc=0.593 | compressed_loss=1.1465 acc=0.599 direct_loss=0.0000 ratio=1.09 | |
| Step 0 validation loss: 0.8945 K=0.301 C=0.092 S=0.339 | |
| Epoch 0 raw_loss=0.9453 acc=0.601 | compressed_loss=0.9707 acc=0.617 direct_loss=0.0000 ratio=1.09 | |
| Step 1 validation loss: 0.9180 K=0.301 C=0.088 S=0.338 | |
| Epoch 0 raw_loss=0.8984 acc=0.593 | compressed_loss=0.9590 acc=0.599 direct_loss=0.0000 ratio=1.09 | |
| Plateau 0 validation loss: 0.7969 K=0.243 C=0.095 S=0.324 | |
| Final validation loss: 0.7930 K=0.244 C=0.094 S=0.324 | |
| Safety gate triggered Safety gate triggered: C=0.484, S=0.314 | |
| Collapsed model validation loss: 0.6552348732948303 | |
| Workflow results: [(0, 0.89453125, 0.30078125, 0.09228515625, 0.33890560269355774), (1, 0.91796875, 0.30078125, 0.08837890625, 0.33844876289367676), (2, 0.796875, 0.2431640625, 0.0947265625, 0.32405367493629456), (3, 0.79296875, 0.244140625, 0.09423828125, 0.32419103384017944)] | |
| Inference on 'hi': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] | |
| Duration: 5.26s | |
| ### Unified Workflow (causal=False / Diffusion) | |
| Loaded model from weights/model.pt.gz | |
| Epoch 0 raw_loss=1.2266 acc=0.590 | compressed_loss=0.0000 acc=0.000 direct_loss=0.0000 ratio=0.00 | |
| Step 0 validation loss: 0.8359 K=0.165 C=0.032 S=0.296 | |
| Epoch 0 raw_loss=0.7617 acc=0.603 | compressed_loss=0.0000 acc=0.000 direct_loss=0.0000 ratio=0.00 | |
| Step 1 validation loss: 0.7891 K=0.025 C=0.043 S=0.268 | |
| Epoch 0 raw_loss=0.7158 acc=0.553 | compressed_loss=0.0000 acc=0.000 direct_loss=0.0000 ratio=0.00 | |
| Plateau 0 validation loss: 0.5391 K=0.113 C=0.056 S=0.287 | |
| Final validation loss: 0.5391 K=0.116 C=0.060 S=0.287 | |
| Collapsed model validation loss: 0.7268564701080322 | |
| Diffusion sample: [1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1] | |
| Workflow results: [(0, 0.8359375, 0.1650390625, 0.0322265625, 0.29598498344421387), (1, 0.7890625, 0.0250244140625, 0.04345703125, 0.26766154170036316), (2, 0.5390625, 0.11328125, 0.05615234375, 0.2867652475833893), (3, 0.5390625, 0.1162109375, 0.06005859375, 0.28735819458961487)] | |
| Diffusion inference output bits: [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0] | |
| Duration: 3.70s | |
| ``` | |
| ## Rigorous Training Regime (2025-08-06 - 10-step alt length/width) | |
| Ran `python tests/rigorous_training_regime.py`: | |
| ``` | |
| ### Progressive Scale-Up (causal=True) | |
| Step 0 validation loss: 0.4615 | |
| Step 1 validation loss: 0.4427 | |
| Step 2 validation loss: 0.4282 | |
| Step 3 validation loss: 0.4202 | |
| Step 4 validation loss: 0.4175 | |
| Scaled length; seq_len=128 width=32 params=8674 | |
| Step 5 validation loss: 0.5383 | |
| Scaled width; seq_len=128 width=64 params=33730 | |
| Step 6 validation loss: 0.4334 | |
| Step 7 validation loss: 0.4304 | |
| Scaled length; seq_len=256 width=64 params=33730 | |
| Step 8 validation loss: 0.5085 | |
| Scaled width; seq_len=256 width=128 params=132994 | |
| Step 9 validation loss: 0.4279 | |
| Duration: 38.96s | |
| ### Progressive Scale-Up (causal=False) | |
| Step 0 validation loss: 0.4292 | |
| Step 1 validation loss: 0.4053 | |
| Step 2 validation loss: 0.4003 | |
| Step 3 validation loss: 0.3997 | |
| Scaled length; seq_len=128 width=32 params=8674 | |
| Step 4 validation loss: 0.4162 | |
| Scaled width; seq_len=128 width=64 params=33730 | |
| Step 5 validation loss: 0.4173 | |
| Scaled length; seq_len=256 width=64 params=33730 | |
| Step 6 validation loss: 0.4160 | |
| Scaled width; seq_len=256 width=128 params=132994 | |
| Step 7 validation loss: 0.4211 | |
| Scaled length; seq_len=512 width=128 params=132994 | |
| Step 8 validation loss: 0.4227 | |
| Scaled width; seq_len=512 width=256 params=528130 | |
| Step 9 validation loss: 0.4146 | |
| Duration: 173.71s | |
| ### Unified Workflow (causal=True) | |
| Epoch 0 raw_loss=3.1562 acc=0.540 | compressed_loss=3.4531 acc=0.529 direct_loss=0.0000 ratio=1.09 | |
| Step 0 validation loss: 2.9688 K=0.559 C=0.220 S=0.475 | |
| Epoch 0 raw_loss=2.7188 acc=0.540 | compressed_loss=2.9883 acc=0.529 direct_loss=0.0000 ratio=1.09 | |
| Step 1 validation loss: 3.4531 K=0.566 C=0.222 S=0.481 | |
| Epoch 0 raw_loss=3.0625 acc=0.540 | compressed_loss=3.4414 acc=0.529 direct_loss=0.0000 ratio=1.09 | |
| Plateau 0 validation loss: 3.0781 K=0.559 C=0.219 S=0.474 | |
| Final validation loss: 3.0938 K=0.559 C=0.220 S=0.475 | |
| Safety gate triggered Safety gate triggered: C=0.484, S=0.466 | |
| Collapsed model validation loss: 0.6677278280258179 | |
| Workflow results: [(0, 2.96875, 0.55859375, 0.2197265625, 0.4746275246143341), (1, 3.453125, 0.56640625, 0.2216796875, 0.4808752238750458), (2, 3.078125, 0.55859375, 0.21875, 0.47436484694480896), (3, 3.09375, 0.55859375, 0.2197265625, 0.474519282579422)] | |
| Inference on 'hi': [1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1] | |
| Duration: 2.50s | |
| ### Unified Workflow (causal=False / Diffusion) | |
| Loaded model from weights/model.pt.gz | |
| Epoch 0 raw_loss=4.3984 acc=0.271 | compressed_loss=0.0000 acc=0.000 direct_loss=0.0000 ratio=0.00 | |
| Step 0 validation loss: 4.9688 K=0.512 C=0.208 S=0.449 | |
| Epoch 0 raw_loss=3.5859 acc=0.225 | compressed_loss=0.0000 acc=0.000 direct_loss=0.0000 ratio=0.00 | |
| Step 1 validation loss: 4.6562 K=0.477 C=0.200 S=0.428 | |
| Epoch 0 raw_loss=3.3008 acc=0.225 | compressed_loss=0.0000 acc=0.000 direct_loss=0.0000 ratio=0.00 | |
| Plateau 0 validation loss: 3.5469 K=0.439 C=0.158 S=0.396 | |
| Final validation loss: 3.5625 K=0.436 C=0.156 S=0.396 | |
| Collapsed model validation loss: 0.6747412085533142 | |
| Diffusion sample: [1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1] | |
| Workflow results: [(0, 4.96875, 0.51171875, 0.2080078125, 0.44865939021110535), (1, 4.65625, 0.4765625, 0.2001953125, 0.4284386932849884), (2, 3.546875, 0.439453125, 0.158203125, 0.3957676589488983), (3, 3.5625, 0.435546875, 0.15625, 0.39555999636650085)] | |
| Diffusion inference output bits: [1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1] | |
| Duration: 3.42s | |
| ``` | |
| ## WikiText Training Attempt (2025-09-??) | |
| Attempted minimal training on real WikiText-2 data using `train_loop` with dropout 0.1 and evaluation dropout 0.0. Training failed due to a telemetry shape mismatch: | |
| ``` | |
| RuntimeError: The size of tensor a (4) must match the size of tensor b (64) at non-singleton dimension 1 | |
| ``` | |
| As a sanity check, ran `hil_safe_inference` on an untrained model in evaluation mode (dropout=0.0): | |
| ``` | |
| Inference output bits: [[0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]] | |
| ``` | |
| ## WikiText Training Debug (2025-09-??) | |
| Ran a minimal `train_loop` on parity-protected WikiText-2 samples with dropout 0.1: | |
| ``` | |
| Epoch 0 raw_loss=0.6278 acc=0.724 | compressed_loss=0.0000 acc=0.000 direct_loss=0.0000 ratio=0.00 | |
| ``` | |