GLM-5-381464351232-REAP

This repository hosts the BF16 GLM-5 checkpoint produced by a 50% REAP prune; the files described below are the complete checkpoint contents.

Checkpoint

  • Base model: GLM-5-BF16
  • Architecture: GlmMoeDsaForCausalLM
  • Method: refusal_contrast_reap
  • Compression ratio: 0.50
  • Seed: 42
  • Router renormalization: true
  • Parameters: 381,464,351,232
  • Total safetensors size: 762,928,740,864 bytes
  • Shards: 17
  • Precision: BF16
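
The byte total above is consistent with the precision: BF16 stores 2 bytes per parameter, so the weights alone account for all but a few tens of kilobytes of the reported size. A quick sanity check (the interpretation of the small remainder as header/metadata overhead is an assumption, not something stated by the pruning tooling):

```python
# Check that the reported safetensors size matches BF16 storage:
# 2 bytes per parameter, plus a small residue attributable to
# per-shard safetensors headers and any non-BF16 tensors.
PARAMS = 381_464_351_232
TOTAL_BYTES = 762_928_740_864
SHARDS = 17

payload = PARAMS * 2              # bytes of BF16 weight data
overhead = TOTAL_BYTES - payload  # small remainder: headers/metadata

print(f"BF16 payload: {payload:,} bytes")
print(f"Overhead:     {overhead:,} bytes (~{overhead // SHARDS:,} per shard)")
```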

Provenance

  • Observation run: glm5-grouped-22k-20260331T172330Z
  • Calibration dataset: combined
  • Prune output directory: /data0/external_research/glm5-layerwise-reap-artifacts/GLM-5-BF16/combined/pruned_models/layerwise_refusal_contrast_reap-renorm_true-seed_42-0.50
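
The prune output directory name encodes the run settings listed under Checkpoint. The parser below is a hypothetical helper: the `<method>-renorm_<bool>-seed_<int>-<ratio>` layout is inferred from the single path shown above, not from any official naming convention.

```python
# Hypothetical helper: recover prune settings from the output
# directory name. Naming scheme inferred from the Provenance path.
def parse_prune_dirname(name: str) -> dict:
    # layout: <method>-renorm_<bool>-seed_<int>-<ratio>
    stem, ratio = name.rsplit("-", 1)
    stem, seed = stem.rsplit("-seed_", 1)
    method, renorm = stem.rsplit("-renorm_", 1)
    return {
        "method": method,
        "renorm": renorm == "true",
        "seed": int(seed),
        "ratio": float(ratio),
    }

cfg = parse_prune_dirname(
    "layerwise_refusal_contrast_reap-renorm_true-seed_42-0.50"
)
print(cfg)
```

Applied to the directory above, this recovers the same method, renormalization flag, seed, and compression ratio listed in the Checkpoint section.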

Files

  • model-00001-of-00017.safetensors through model-00017-of-00017.safetensors
  • model.safetensors.index.json
  • config.json
  • generation_config.json
  • chat_template.jinja
  • tokenizer.json
  • tokenizer_config.json
  • reap_layerwise_args.yaml
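
The shards follow the standard Hugging Face `model-XXXXX-of-XXXXX.safetensors` naming scheme, so the full expected file list can be generated programmatically, e.g. to verify a download is complete:

```python
# Build the expected file list for this repo: 17 shards plus the
# index, config, tokenizer, and REAP metadata files named above.
SHARDS = 17

shard_names = [
    f"model-{i:05d}-of-{SHARDS:05d}.safetensors"
    for i in range(1, SHARDS + 1)
]

expected = shard_names + [
    "model.safetensors.index.json",
    "config.json",
    "generation_config.json",
    "chat_template.jinja",
    "tokenizer.json",
    "tokenizer_config.json",
    "reap_layerwise_args.yaml",
]
print(f"{len(expected)} files, first shard: {shard_names[0]}")
```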

Notes

  • This upload replaces the older multi-shard checkpoint previously hosted in this repo.
  • The metadata above reflects the actual checkpoint contents as of 2026-04-05.
Model tree: 0xSero/GLM-5-REAP-381B, finetuned from base model zai-org/GLM-5.