---
pipeline_tag: text-generation
library_name: transformers
license: apache-2.0
base_model: google/t5gemma-s-s-ul2-it
model_type: t5gemma
---
# T5Gemma Fine-tuned Model

This is a fine-tuned T5Gemma model for text-to-text generation tasks.

## Model Details

- **Base Model**: google/t5gemma-s-s-ul2-it
- **Architecture**: T5GemmaForConditionalGeneration
- **Task**: Text-to-text generation
- **Framework**: Transformers

## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("your-username/model-name")
model = AutoModelForSeq2SeqLM.from_pretrained("your-username/model-name")

# Format the input with the model's chat template
messages = [{"role": "user", "content": "Your input text here"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

# Generate a response (low temperature keeps sampling close to deterministic)
outputs = model.generate(input_ids, max_new_tokens=1024, temperature=0.1, do_sample=True)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```