# GLM-5 Empty LoRA Adapter (All-Linear + MoE Experts)

## Model Summary
This repository contains an empty-initialized PEFT LoRA adapter for zai-org/GLM-5.
It is intended for:
- LoRA loading/integration tests
- Runtime compatibility checks (PEFT / vLLM)
- A clean initialization starting point before actual LoRA training
This adapter is initialized as a near no-op:

- `lora_A`: Kaiming-uniform
- `lora_B`: zeros

Generation quality should therefore be close to the base model before any fine-tuning.
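The no-op property follows from the LoRA update rule `ΔW = lora_B @ lora_A`: with `lora_B` all zeros, the product is zero no matter how `lora_A` is initialized. A minimal sketch of this, using NumPy arrays as stand-ins for the actual PEFT tensors (shapes and the init bound are illustrative, not PEFT's exact code):

```python
import numpy as np

rank, d_in, d_out = 8, 64, 64

# Kaiming-uniform-style init for lora_A (illustrative bound)
bound = np.sqrt(6.0 / d_in)
lora_A = np.random.uniform(-bound, bound, size=(rank, d_in))
lora_B = np.zeros((d_out, rank))  # zero init => adapter starts as a no-op

delta_W = lora_B @ lora_A  # the LoRA weight update
assert np.all(delta_W == 0)  # base weights are effectively unchanged
```

Training then moves `lora_B` away from zero, so the update smoothly departs from the identity.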
## Model Details

- Developed by: This script
- Model type: PEFT LoRA adapter checkpoint
- Base model: zai-org/GLM-5
- Language(s): Same as base model
- License: Same as base model license
- Framework: PEFT
## Adapter Construction

This checkpoint was generated programmatically (not fine-tuned on data), targeting:

- all linear-like modules (excluding `lm_head`)
- detected MoE expert projections (`gate_proj`, `up_proj`, `down_proj`), plus `gate` when available
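The selection above can be sketched as a name filter over the model's modules. The module names below are illustrative examples, not an actual GLM-5 module dump, and the suffix list is an assumption about how such a generation script might classify modules:

```python
# Hypothetical sketch of all-linear LoRA target selection, excluding lm_head.
LINEAR_SUFFIXES = {
    "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
    "gate_proj", "up_proj", "down_proj",     # MLP / MoE expert projections
    "gate",                                  # MoE router gate, when present
}

def select_lora_targets(module_names):
    """Keep linear-like modules; drop the output head (lm_head)."""
    targets = []
    for name in module_names:
        if "lm_head" in name:
            continue
        if name.rsplit(".", 1)[-1] in LINEAR_SUFFIXES:
            targets.append(name)
    return targets

example = [
    "model.layers.0.self_attn.q_proj",
    "model.layers.0.mlp.experts.0.gate_proj",
    "model.layers.0.mlp.gate",
    "lm_head",
]
print(select_lora_targets(example))  # lm_head is filtered out
```

In practice PEFT accepts such a list (or bare suffixes) via `LoraConfig(target_modules=...)`.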
## Intended Use
- Verifying LoRA checkpoint loading
- Testing MoE LoRA plumbing
- Serving/inference pipeline validation
## Out-of-Scope Use
- Task performance improvement without training
- Benchmark comparisons against fine-tuned adapters
## Training Details
No training was performed.
This is an initialization-only adapter checkpoint.
## Evaluation

No task evaluation metrics are reported for this adapter.
Expected behavior is close to the base model due to the zero-initialized `lora_B`.
## Risks and Limitations
- Inherits all limitations and biases of the base model.
- Not suitable as a production task adapter without fine-tuning.
- Minor output differences may still appear due to runtime/kernel nondeterminism.
## Usage

### Transformers + PEFT
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "zai-org/GLM-5"
adapter = "/path/to/this/adapter"

tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(base, trust_remote_code=True)
model = PeftModel.from_pretrained(model, adapter)
```
### vLLM
```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

llm = LLM(
    model="zai-org/GLM-5",
    trust_remote_code=True,
    enable_lora=True,
    max_loras=1,
    max_lora_rank=8,  # set >= adapter rank
)

outputs = llm.generate(
    ["Hello!"],
    SamplingParams(temperature=0.0, max_tokens=32),
    lora_request=LoRARequest("empty-lora", 1, "/path/to/this/adapter"),
)
print(outputs[0].outputs[0].text)
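`max_lora_rank` must be at least the adapter's rank, which PEFT records under the `"r"` key of the adapter's `adapter_config.json`. A small stdlib-only sketch of reading it (the demo writes a throwaway config; real adapter directories already ship this file):

```python
import json
import tempfile
from pathlib import Path

def adapter_rank(adapter_dir: str) -> int:
    """Read the LoRA rank ("r") from a PEFT adapter_config.json."""
    config = json.loads(Path(adapter_dir, "adapter_config.json").read_text())
    return config["r"]

# Demo with a temporary adapter directory.
with tempfile.TemporaryDirectory() as d:
    Path(d, "adapter_config.json").write_text(json.dumps({"r": 8, "lora_alpha": 16}))
    print(adapter_rank(d))  # pass a value >= this as max_lora_rank
```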
## Framework Versions

- PEFT 0.18.1
## Model tree for HollowMan6/GLM-5-NOOP-LoRA

- Base model: zai-org/GLM-5