---
language: en
license: apache-2.0
tags:
- fine-tuned
- gemma
- lora
- gemma-garage
base_model: google/gemma-3-1b-pt
pipeline_tag: text-generation
---
# test-4

Fine-tuned google/gemma-3-1b-pt model from Gemma Garage.

This model contains **LoRA adapters** fine-tuned using [Gemma Garage](https://github.com/your-repo/gemma-garage), a platform for fine-tuning Gemma models with LoRA.

## Model Details

- **Base Model**: google/gemma-3-1b-pt
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **Training Platform**: Gemma Garage
- **Fine-tuned on**: 2025-07-26
- **Model Type**: LoRA adapters (not merged into the base weights)
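
To check which base model and adapter type these files target, you can inspect the adapter configuration with PEFT. This is a minimal sketch; the printed values are what the repository is expected to contain, not guaranteed:

```python
from peft import PeftConfig

# Read adapter_config.json from the adapter repository
config = PeftConfig.from_pretrained("LucasFMartins/test-4")
print(config.base_model_name_or_path)  # expected: google/gemma-3-1b-pt
print(config.peft_type)                # expected: LORA
```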
## Usage

### Option 1: Load with PEFT (Recommended)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model and the tokenizer shipped with this repository
base_model = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-pt")
tokenizer = AutoTokenizer.from_pretrained("LucasFMartins/test-4")

# Load and apply the LoRA adapters
model = PeftModel.from_pretrained(base_model, "LucasFMartins/test-4")

# Generate text
inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
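
If you have a GPU, the base model can also be loaded in reduced precision with automatic device placement. This is a minimal sketch assuming `torch` and `accelerate` are installed; adjust the dtype to whatever your hardware supports:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model in bfloat16 and let accelerate choose device placement
base_model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-3-1b-pt",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("LucasFMartins/test-4")
model = PeftModel.from_pretrained(base_model, "LucasFMartins/test-4")

# Keep the inputs on the same device as the model before generating
inputs = tokenizer("Your prompt here", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```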
### Option 2: Merge and Load

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model and the tokenizer shipped with this repository
base_model = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-pt")
tokenizer = AutoTokenizer.from_pretrained("LucasFMartins/test-4")

# Load the LoRA adapters and merge them into the base weights
model = PeftModel.from_pretrained(base_model, "LucasFMartins/test-4")
model = model.merge_and_unload()

# Generate text
inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
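
Merging is useful if you want a standalone checkpoint that no longer requires `peft` at load time. A minimal sketch of saving the merged weights locally (the output directory name is just an example):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-pt")
tokenizer = AutoTokenizer.from_pretrained("LucasFMartins/test-4")

# Merge the LoRA adapters into the base weights
model = PeftModel.from_pretrained(base_model, "LucasFMartins/test-4")
model = model.merge_and_unload()

# Save the merged model; it can later be reloaded with plain transformers
output_dir = "gemma-3-1b-test-4-merged"  # example local path
model.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)
```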
## Training Details

This model was fine-tuned using the Gemma Garage platform with the following configuration:

- Request ID: 43a3a2fd-ada0-40f1-9a29-9f4050d94bcf
- Training completed on: 2025-07-26 18:53:46 UTC

For more information about Gemma Garage, visit [our GitHub repository](https://github.com/your-repo/gemma-garage).