# Gemma Model Fine-Tuned on Custom Data

## Model Description

This model is a fine-tuned version of Gemma trained on custom data. It was trained with the `SFTTrainer` and uses LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning.

## Training Procedure

- **Batch size**: 1
- **Gradient accumulation steps**: 4
- **Learning rate**: 2e-4
- **Warmup steps**: 2
- **Max steps**: 100
- **Optimizer**: Paged AdamW 8-bit
- **FP16**: Enabled
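As a rough illustration, the hyperparameters above would typically be passed to `SFTTrainer` via `transformers.TrainingArguments` together with a `peft.LoraConfig`. This is a hedged sketch, not the exact training script: the LoRA rank, target modules, base model, and dataset are assumptions, since the card does not specify them.

```python
# Sketch of a training setup matching the hyperparameters listed above.
# LoRA rank/target modules, base model, and dataset are illustrative
# assumptions; they are not taken from this model card.
from transformers import TrainingArguments
from peft import LoraConfig
from trl import SFTTrainer

args = TrainingArguments(
    per_device_train_batch_size=1,   # Batch size: 1
    gradient_accumulation_steps=4,   # Gradient accumulation steps: 4
    learning_rate=2e-4,              # Learning rate: 2e-4
    warmup_steps=2,                  # Warmup steps: 2
    max_steps=100,                   # Max steps: 100
    optim="paged_adamw_8bit",        # Paged AdamW 8-bit
    fp16=True,                       # FP16 enabled
    output_dir="outputs",
)

lora_config = LoraConfig(            # illustrative LoRA settings
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model=base_model,                # a loaded Gemma base model (assumed)
    train_dataset=train_dataset,     # your custom dataset (assumed)
    args=args,
    peft_config=lora_config,
)
trainer.train()
```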
## Usage

Below is an example of how to load the model and run inference:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hub
tokenizer = AutoTokenizer.from_pretrained("iot/Gemma_model_fine_tune_custom_Data")
model = AutoModelForCausalLM.from_pretrained("iot/Gemma_model_fine_tune_custom_Data")

# Tokenize a prompt, generate a completion, and decode it back to text
input_text = "Your input text here"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```