---
base_model: THUDM/GLM-4.6V-9B
library_name: peft
pipeline_tag: text-generation
tags:
  - lora
  - transformers
  - glm4
  - vision-language-model
---

# GLM-4.6V SFT LoRA (T1plus)

Fine-tuned LoRA adapter for the GLM-4.6V 108B MoE vision-language model.

## Model Details

- **Base Model:** GLM-4.6V 108B MoE (128 experts, 8 active)
- **Training Method:** SFT with LoRA
- **LoRA Rank:** 64
- **LoRA Alpha:** 128
- **Training Epochs:** 2
- **Learning Rate:** 2e-05
- **Max Sequence Length:** 4096
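
These hyperparameters map onto a PEFT `LoraConfig` roughly as follows. This is a sketch, not the training script: `target_modules` is an assumption, and the authoritative values live in the adapter's `adapter_config.json`.

```python
from peft import LoraConfig

# Sketch of the adapter configuration implied by the values above.
# target_modules is assumed; consult adapter_config.json for the real list.
lora_config = LoraConfig(
    r=64,            # LoRA rank
    lora_alpha=128,  # effective scaling = lora_alpha / r = 2.0
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
```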

## Training Configuration

- **Batch Size:** 1
- **Gradient Accumulation:** 8
- **Precision:** bfloat16
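
For illustration, these settings correspond roughly to the following Hugging Face `TrainingArguments`. This is a sketch under stated assumptions, not the actual run: `output_dir` is hypothetical, while the learning rate and epoch count come from the section above.

```python
from transformers import TrainingArguments

# Sketch: effective batch size = 1 (per device) x 8 (accumulation steps) = 8.
training_args = TrainingArguments(
    output_dir="glm46v-t1plus-sft",  # hypothetical output path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=2,
    learning_rate=2e-5,
    bf16=True,                       # bfloat16 mixed precision
)
```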

## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model (id taken from base_model in the metadata above)
base_model = AutoModelForCausalLM.from_pretrained(
    "THUDM/GLM-4.6V-9B",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "HUNGTZE/T1plus")
```
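
Continuing from the snippet above, a minimal text-only generation call might look like this. The prompt is illustrative; image inputs would go through the model's processor and chat template instead.

```python
tokenizer = AutoTokenizer.from_pretrained("THUDM/GLM-4.6V-9B", trust_remote_code=True)

prompt = "Describe this model in one sentence."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```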

## Framework Versions

- PEFT 0.18.0
- Transformers 4.x