Added PEFT instructions
README.md
@@ -73,6 +73,19 @@ The training data consists of 100,000 Python functions and their docstrings extr
- **Clarity:** Measures readability using simple, unambiguous language. Calculated using the Flesch-Kincaid readability score.
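As a quick reference for the clarity metric, the two standard Flesch-Kincaid formulas can be computed from word, sentence, and syllable counts. This is a sketch of the textbook definitions; which variant and which counting rules the project actually uses is not specified here:

```python
# Textbook Flesch readability formulas, computed from raw counts.
# Which variant the project uses, and how it counts syllables,
# is an assumption -- these are the standard definitions.

def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    # Higher scores mean easier-to-read text.
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    # Approximate US school grade level needed to understand the text.
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
```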
## Model Inference
For running inference, PEFT must be used to load the fine-tuned adapter on top of the base model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel, PeftConfig

model_id = "documint/CodeGemma2B-fine-tuned"
device = "cuda"  # or "cpu"

config = PeftConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = AutoModelForCausalLM.from_pretrained("google/codegemma-2b", device_map=device)
fine_tuned_model = PeftModel.from_pretrained(model, model_id, device_map=device)
```
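The PEFT checkpoint above stores only a small adapter on top of the frozen base weights. For intuition, here is a toy, dependency-free sketch of the low-rank (LoRA-style) update that such adapters typically apply; the matrix sizes and values are made up purely for illustration:

```python
# Toy LoRA illustration: the adapter stores two small matrices A (r x k)
# and B (d x r); the effective weight is W + (alpha / r) * (B @ A).
# Sizes and values here are hypothetical, chosen for readability.

def matmul(X, Y):
    # Plain-Python matrix multiply to keep the sketch dependency-free.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

d, k, r, alpha = 4, 4, 1, 2          # toy sizes; real LoRA ranks are larger
W = [[0.0] * k for _ in range(d)]    # frozen base weight (zeros for clarity)
A = [[1.0, 0.0, 0.0, 0.0]]           # r x k adapter matrix
B = [[0.0], [1.0], [0.0], [0.0]]     # d x r adapter matrix
delta = matmul(B, A)                 # d x k low-rank update
W_eff = [[w + (alpha / r) * dv for w, dv in zip(wr, dr)]
         for wr, dr in zip(W, delta)]
# The adapter holds d*r + r*k = 8 numbers here, versus d*k = 16 for W itself.
```

This is why loading with `PeftModel.from_pretrained` needs both the base model and the adapter repo: the small adapter matrices are merged onto (or applied alongside) the frozen base weights at inference time.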
#### Hardware
Fine-tuning was performed on an Intel 12900K CPU, an Nvidia RTX 3090 GPU, and 64 GB of RAM. Total fine-tuning time was 48 GPU hours.