---
library_name: peft
base_model: WGNW/Llama-2-ko-7b-Chat-auto-gptq-4bit
---
## Training procedure
The following GPTQ quantization config was used during training:
- quant_method: gptq
- bits: 4
- tokenizer: None
- dataset: None
- group_size: 128
- damp_percent: 0.01
- desc_act: False
- sym: True
- true_sequential: True
- use_cuda_fp16: False
- model_seqlen: None
- block_name_to_quantize: None
- module_name_preceding_first_block: None
- batch_size: 1
- pad_token_id: None
- disable_exllama: False
- max_input_length: None
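
The values above map onto `transformers.GPTQConfig`. A minimal sketch of reconstructing an equivalent config object (illustrative only, not the exact object serialized with this checkpoint; fields listed as `None` are the defaults and are omitted):

```python
from transformers import GPTQConfig

# Rebuild the quantization config listed above. tokenizer, dataset,
# model_seqlen, block_name_to_quantize, module_name_preceding_first_block,
# pad_token_id, and max_input_length default to None and are left out.
gptq_config = GPTQConfig(
    bits=4,
    group_size=128,
    damp_percent=0.01,
    desc_act=False,
    sym=True,
    true_sequential=True,
    use_cuda_fp16=False,
    batch_size=1,
    disable_exllama=False,
)
```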
### Framework versions
- PEFT 0.6.0.dev0
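
Because the base checkpoint is already GPTQ-quantized, inference only requires loading it and attaching this adapter on top. A minimal sketch, assuming `auto-gptq`, `optimum`, and `accelerate` are installed; the adapter path is a placeholder, not this repo's confirmed id:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "WGNW/Llama-2-ko-7b-Chat-auto-gptq-4bit"

tokenizer = AutoTokenizer.from_pretrained(base_id)

# The base model ships with its GPTQ config embedded, so no quantization
# arguments need to be passed at load time.
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# "path/to/this-adapter" is a placeholder -- replace it with the hub id
# or local directory that contains this PEFT adapter.
model = PeftModel.from_pretrained(base_model, "path/to/this-adapter")
```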