### Note: DO NOT use a quantized model or set quantization_bit when merging LoRA adapters

### model
model_name_or_path: 
adapter_name_or_path: 
template: qwen
trust_remote_code: true

### export
export_dir: 
export_size: 5  # max shard size of the exported model, in GB
export_device: cpu  # choices: [cpu, auto]
export_legacy_format: false