# openPangu-7B LoRA (merged)
This repository contains LoRA fine-tuned weights based on openPangu-Embedded-7B-V1.1, with the LoRA adapters merged into the base model to produce full weights suitable for standard inference.
## Base Model
- Base model: FreedomIntelligence/openPangu-Embedded-7B-V1.1
- License: OPENPANGU Model License Agreement v1.0 (see LICENSE)
## Training Data
- Private dataset (not released).
## Training Procedure
- Fine-tuning: LoRA via LLaMA-Factory.
- Export: full weights merged with `llamafactory-cli export`.
Example (paths are placeholders):

```shell
llamafactory-cli export \
    --model_name_or_path <base_model_dir> \
    --adapter_name_or_path <lora_adapter_dir> \
    --template default \
    --finetuning_type lora \
    --export_dir <export_dir> \
    --export_size 2 \
    --export_device cpu \
    --export_legacy_format False \
    --trust_remote_code True
```
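For intuition, merging collapses each LoRA adapter pair into its base weight matrix: W' = W + (alpha/r) · B·A, after which inference needs only a single matmul per layer. A minimal NumPy sketch with toy shapes (LLaMA-Factory performs the equivalent per-layer during export; the dimensions and scaling here are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 8, 8, 4, 8  # toy dimensions; real layers are far larger

W = rng.standard_normal((d_out, d_in))     # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01  # LoRA down-projection
B = np.zeros((d_out, r))                   # LoRA up-projection (zero-init)
B[:, 0] = 0.5

# Merged weight: base plus scaled low-rank update
W_merged = W + (alpha / r) * B @ A

# A forward pass through the merged weight matches base + adapter path
x = rng.standard_normal(d_in)
y_adapter = W @ x + (alpha / r) * B @ (A @ x)  # two-branch adapter compute
y_merged = W_merged @ x                        # single matmul after merging
assert np.allclose(y_adapter, y_merged)
```

This is why the merged checkpoint loads like any ordinary model: the adapter branch no longer exists at inference time.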
## Evaluation
Evaluated with lm-evaluation-harness using the vLLM backend on 4x RTX 4090 GPUs. Date (UTC): 2026-01-04.
### GSM8K (5-shot)
- exact_match (strict-match): 0.6171
- exact_match (flexible-extract): 0.5777
### C-Eval (valid, 5-shot)
- acc: 0.6241
- acc_norm: 0.6241
Example command (paths are placeholders):

```shell
lm_eval --model vllm \
    --model_args "pretrained=<model_dir>,tensor_parallel_size=4,dtype=auto,gpu_memory_utilization=0.8,max_model_len=4096,enforce_eager=True,trust_remote_code=True" \
    --tasks gsm8k \
    --num_fewshot 5 \
    --batch_size auto
```
## Usage
This repo includes custom modeling code, so `trust_remote_code=True` is required.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "killer66678/openpangu_7b_lora"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype="auto",
    device_map="auto",
)

# Minimal generation example
inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Limitations and License Notes
- The openPangu license restricts use within the European Union.
- If you distribute a product or service based on this model, the license requires specific attribution and trademark notices.
- As with any LLM, outputs may be incorrect or biased.
## Acknowledgements
Thanks to Huawei openPangu for the base model.