---
license: mit
tags:
- causal_lm
- generated_from_trainer
base_model: broadfield-dev/gemma-3-270m-tuned-0102-0441
datasets:
- broadfield-dev/abisee_cnn_dailymail_concise-Broadfield
model-index:
- name: gemma-3-270m-tuned-0102-0441-tuned-0102-1157
  results: []
---

# gemma-3-270m-tuned-0102-0441-tuned-0102-1157

This model is a fine-tuned version of [broadfield-dev/gemma-3-270m-tuned-0102-0441](https://huggingface.co/broadfield-dev/gemma-3-270m-tuned-0102-0441) on the [broadfield-dev/abisee_cnn_dailymail_concise-Broadfield](https://huggingface.co/datasets/broadfield-dev/abisee_cnn_dailymail_concise-Broadfield) dataset.

## Training Details

- **Task:** causal language modeling (CAUSAL_LM)
- **Epochs:** 1
- **Learning rate:** 2e-05
- **Gradient accumulation steps:** 4

## Entity Labels

`['LABEL_0', 'LABEL_1']`

These are the default placeholder labels from the model config; they are not used during causal-LM text generation.

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "broadfield-dev/gemma-3-270m-tuned-0102-0441-tuned-0102-1157"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Use float16 on GPU; fall back to float32 on CPU, where half precision is poorly supported.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

messages = [
    {"role": "system", "content": "Summarize this: "},
    {"role": "user", "content": "Your input here..."},
]

inputs = tokenizer.apply_chat_template(
    messages,
    return_tensors="pt",
    add_generation_prompt=True,
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=100)
# Decode only the newly generated tokens so the prompt is not echoed back.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
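
## Training Sketch

The original training script is not included in this card. The following is a minimal sketch of how a fine-tune with the hyperparameters listed above might be reproduced using the standard `Trainer` API; the batch size, sequence length, and the `"text"` column name are assumptions, not details taken from the card.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "broadfield-dev/gemma-3-270m-tuned-0102-0441"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

dataset = load_dataset("broadfield-dev/abisee_cnn_dailymail_concise-Broadfield")

def tokenize(batch):
    # "text" is an assumed column name; adjust to the dataset's actual schema.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(
    tokenize, batched=True, remove_columns=dataset["train"].column_names
)

args = TrainingArguments(
    output_dir="gemma-3-270m-tuned",
    num_train_epochs=1,             # Epochs: 1 (from the card)
    learning_rate=2e-5,             # Learning rate: 2e-05 (from the card)
    gradient_accumulation_steps=4,  # Gradient accumulation steps: 4 (from the card)
    per_device_train_batch_size=2,  # assumed; not stated on the card
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    # mlm=False gives plain causal-LM labels (inputs shifted by one).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```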