Commit History

a823edb  Fix: Save model locally before push, add explicit push with error handling (slivk)
663a4dd  Add manual push button and push_model.py script (slivk)
e225440  fix: OOM - reduce batch size to 1, disable eval in test, reduce max_length to 1024 (slivk)
cc9cee1  fix: Disable ALL mixed precision + use fp32 compute (T4 bf16 crash fix) (slivk)
3d53220  fix: Add torch_dtype=float16 to model loading (T4 bf16 compatibility) (slivk)
02bc115  Fix: Force library upgrade and ensure float16 (slivk)
3db16e4  Fix: Remove explicit bnb_compute_dtype (slivk)
42b05ed  Fix: Correct bnb_compute_dtype to float16 for T4 compatibility (slivk)
641aaa0  docs: Update README to reflect T4 GPU (not Zero GPU) (slivk)
cbaf615  fix: Explicitly disable bf16 for T4 GPU compatibility (slivk)
da22234  fix: Add bitsandbytes>=0.41.0 (slivk)
c17160f  fix: Remove @spaces.GPU decorator for paid GPU (slivk)
bd3c7c9  feat: Add Qwen2.5-3B training (test + full modes) (slivk)
d92895b  initial commit (OliverSlivka, verified)
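
Several of the commits above circle one underlying issue: the NVIDIA T4 has no bfloat16 hardware support, so mixed precision had to be forced to float16 at every layer (model dtype, bitsandbytes compute dtype, trainer flags, batch size for OOM). A minimal sketch of what that configuration might look like with transformers and bitsandbytes; the model name, output path, and exact argument values are assumptions inferred from the commit messages, not taken from the repository's actual code:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments

# 4-bit quantization with a float16 compute dtype (commits 42b05ed / cc9cee1):
# bnb_4bit_compute_dtype must not be bfloat16 on a T4.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# Load the model in float16 (commit 3d53220); "Qwen/Qwen2.5-3B" matches the
# model family named in commit bd3c7c9 but the exact repo id is an assumption.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-3B",
    quantization_config=bnb_config,
    torch_dtype=torch.float16,
)

# Trainer-side precision and memory settings (commits cbaf615 and e225440).
args = TrainingArguments(
    output_dir="out",                # hypothetical path
    per_device_train_batch_size=1,   # OOM fix: batch size reduced to 1
    bf16=False,                      # explicitly disabled for the T4
    fp16=True,
)
```

This mirrors the arc of the log: first only the compute dtype was corrected, then bf16 was disabled trainer-wide, and finally batch size and sequence length were cut to fit the T4's 16 GB of memory.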