# Gemma Model Fine-Tuned on Custom Data
## Model Description
This model is a fine-tuned version of Gemma trained on custom data. It was trained with the SFTTrainer and uses a LoRA configuration for parameter-efficient fine-tuning.
## Training Procedure
- **Batch size**: 1
- **Gradient accumulation steps**: 4
- **Learning rate**: 2e-4
- **Warmup steps**: 2
- **Max steps**: 100
- **Optimizer**: Paged AdamW 8-bit
- **FP16**: Enabled
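
The hyperparameters above can be assembled into a training setup roughly like the following sketch, assuming `trl`'s `SFTTrainer` and `peft`'s `LoraConfig`. The LoRA rank, alpha, and target modules, as well as the `model` and `dataset` placeholders, are illustrative assumptions and not taken from the original run:

```python
from transformers import TrainingArguments
from peft import LoraConfig
from trl import SFTTrainer

# Values below mirror the Training Procedure section above.
training_args = TrainingArguments(
    output_dir="gemma-finetuned",
    per_device_train_batch_size=1,   # Batch size: 1
    gradient_accumulation_steps=4,   # Gradient accumulation steps: 4
    learning_rate=2e-4,              # Learning rate: 2e-4
    warmup_steps=2,                  # Warmup steps: 2
    max_steps=100,                   # Max steps: 100
    optim="paged_adamw_8bit",        # Paged AdamW 8-bit
    fp16=True,                       # FP16 enabled
)

# Illustrative LoRA settings (not specified in the original card).
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model=model,            # placeholder: the base Gemma model
    train_dataset=dataset,  # placeholder: your custom dataset
    args=training_args,
    peft_config=lora_config,
)
trainer.train()
```

Note that the exact `SFTTrainer` keyword arguments vary across `trl` versions, so treat this as a configuration sketch rather than a drop-in script.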
## Usage
Below is an example of how to load the model and generate text with it:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("iot/Gemma_model_fine_tune_custom_Data")
model = AutoModelForCausalLM.from_pretrained("iot/Gemma_model_fine_tune_custom_Data")
# Tokenize the prompt and generate a completion
input_text = "Your input text here"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```