Instructions for using Rexe/Deci-Decicoder-1b-lora-coder with libraries, inference providers, notebooks, and local apps.
- Libraries
- PEFT
How to use Rexe/Deci-Decicoder-1b-lora-coder with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model first, then attach the LoRA adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained("Deci/DeciCoder-1b")
model = PeftModel.from_pretrained(base_model, "Rexe/Deci-Decicoder-1b-lora-coder")
```

A fuller generation sketch follows the notebook links below.

- Notebooks
- Google Colab
- Kaggle
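
Once the adapter is attached, the model behaves like any causal LM. Below is a minimal generation sketch, not an official recipe from this card: it assumes the adapter reuses the base `Deci/DeciCoder-1b` tokenizer and that DeciCoder's custom architecture requires `trust_remote_code=True` (neither is stated above).

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: DeciCoder's custom modeling code needs trust_remote_code=True.
base_model = AutoModelForCausalLM.from_pretrained(
    "Deci/DeciCoder-1b", torch_dtype=torch.bfloat16, trust_remote_code=True
)
model = PeftModel.from_pretrained(base_model, "Rexe/Deci-Decicoder-1b-lora-coder")

# Assumption: the adapter repo ships no tokenizer, so use the base model's.
tokenizer = AutoTokenizer.from_pretrained("Deci/DeciCoder-1b")

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```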
Update README.md
README.md

```diff
@@ -12,6 +12,7 @@ The following `bitsandbytes` quantization config was used during training:
 - llm_int8_skip_modules: None
 - llm_int8_enable_fp32_cpu_offload: False
 - llm_int8_has_fp16_weight: False
+- inference: true
 - bnb_4bit_quant_type: fp4
 - bnb_4bit_use_double_quant: False
 - bnb_4bit_compute_dtype: float32
```
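
The values listed in the diff correspond to fields of `transformers.BitsAndBytesConfig`. The sketch below reconstructs an equivalent config; `load_in_4bit=True` is an assumption inferred from the `bnb_4bit_*` fields, since the excerpt above does not show the `load_in_*` lines.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Config mirroring the values listed in the README diff above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,  # assumed; not visible in the diff excerpt
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)

# The base model could then be loaded in 4-bit before attaching the adapter.
base_model = AutoModelForCausalLM.from_pretrained(
    "Deci/DeciCoder-1b", quantization_config=bnb_config, trust_remote_code=True
)
```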