---
library_name: peft
pipeline_tag: text-generation
license: apache-2.0
language:
  - zh
  - en
tags:
  - lora
  - transformers
  - glm
  - financial
---

# GLM-4.6V SFT LoRA

This is a LoRA adapter trained with SFT (Supervised Fine-Tuning) on top of the GLM-4.6V 108B MoE model.

## Model Information

- **Base Model:** GLM-4.6V 108B MoE (128 experts, 8 active)
- **Training Method:** QLoRA (4-bit quantization + LoRA; see the configuration sketch after this list)
- **LoRA Rank:** 64
- **LoRA Alpha:** 128
- **Training Steps:** 600
- **Max Sequence Length:** 4096
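
For reference, the settings above map roughly to the following `peft`/`bitsandbytes` configuration. This is a minimal sketch, not the exact training script: the dropout value and the 4-bit quantization details are assumptions, and `target_modules` is left to the library defaults.

```python
# Minimal sketch of the QLoRA setup described above (assumptions noted inline).
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # QLoRA: 4-bit base weights
    bnb_4bit_quant_type="nf4",              # assumption: NF4 is the usual QLoRA choice
    bnb_4bit_compute_dtype=torch.bfloat16,  # matches the BF16 training precision
)

lora_config = LoraConfig(
    r=64,               # LoRA rank, as listed above
    lora_alpha=128,     # LoRA alpha, as listed above
    lora_dropout=0.05,  # assumption: not stated in this card
    task_type="CAUSAL_LM",
)
```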

## Training Configuration

- **Hardware:** 4x NVIDIA H100 80GB
- **Precision:** BF16
- **Optimizer:** AdamW
- **Learning Rate:** 2e-5
- **Batch Size:** 1 per GPU
- **Gradient Accumulation:** 8 (effective batch size: 1 × 4 GPUs × 8 = 32; see the sketch after this list)
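
Expressed as `transformers` `TrainingArguments`, the configuration above would look roughly like the sketch below; `output_dir` and anything not listed in this card are placeholders.

```python
# Rough TrainingArguments equivalent of the settings above (placeholders noted).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="glm4v-sft-lora",    # placeholder
    per_device_train_batch_size=1,  # 1 per GPU
    gradient_accumulation_steps=8,  # effective batch = 1 x 4 GPUs x 8 = 32
    learning_rate=2e-5,
    max_steps=600,
    bf16=True,                      # BF16 precision
    optim="adamw_torch",            # AdamW
)
```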

## Usage

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(
    "your-base-model-path",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# Load the LoRA adapter
model = PeftModel.from_pretrained(base_model, "HUNGTZE/glm4v-sft-lora")

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("HUNGTZE/glm4v-sft-lora")
```
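
After loading, the adapter is used like any other causal LM. The example below is illustrative only: the prompt and `max_new_tokens` are hypothetical, not taken from this card.

```python
# Hypothetical inference example; prompt and generation settings are illustrative.
inputs = tokenizer(
    "Summarize the key points of this financial report:",
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```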

## Framework Versions

- **PEFT:** 0.18.0
- **Transformers:** 4.x
- **PyTorch:** 2.x