π₯ GLM-4.7-Flash-abliterated (31B) + 4-bit + LoRA - official loading 1d911cc Desorden1337 commited on 16 days ago
Fix: use transformers from git for glm4_moe_lite support 6e9a66b Desorden1337 committed 16 days ago
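When a model architecture (here `glm4_moe_lite`) is merged into transformers but not yet in a PyPI release, the usual workaround this commit refers to is installing the library from its main branch:

```shell
pip install -U "git+https://github.com/huggingface/transformers.git"
```

Not run here, since it performs a network install into the current environment.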
L40S x4 optimized: Flash Attention, 3 epochs, batch 64 f779211 Desorden1337 committed 16 days ago
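On multi-GPU setups, "batch 64" normally refers to the effective batch size: per-device batch × number of GPUs × gradient-accumulation steps. The 8 × 4 × 2 decomposition below is an assumed split for the 4× L40S run; the commit does not state the per-device value.

```python
# Effective batch size across data-parallel GPUs with gradient accumulation.
def effective_batch_size(per_device: int, num_gpus: int, grad_accum: int) -> int:
    return per_device * num_gpus * grad_accum

# Assumed split reaching the commit's "batch 64" on 4 GPUs:
print(effective_batch_size(per_device=8, num_gpus=4, grad_accum=2))  # → 64
```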
Upload run_training_direct.py with huggingface_hub 185bca3 verified Darin Leonhart committed 16 days ago
Upload finetune_setup.py with huggingface_hub 486fc66 verified Darin Leonhart committed 16 days ago
Upload d1337_cipher_complete.jsonl with huggingface_hub 8bb3088 verified Darin Leonhart committed 16 days ago