---
license: mit
tags:
- causal_lm
- generated_from_trainer
base_model: broadfield-dev/gemma-3-270m-tuned-0102-0441
datasets:
- broadfield-dev/abisee_cnn_dailymail_concise-Broadfield
model-index:
- name: gemma-3-270m-tuned-0102-0441-tuned-0102-1157
results: []
---
# gemma-3-270m-tuned-0102-0441-tuned-0102-1157
This model is a fine-tuned version of [broadfield-dev/gemma-3-270m-tuned-0102-0441](https://huggingface.co/broadfield-dev/gemma-3-270m-tuned-0102-0441) on the [broadfield-dev/abisee_cnn_dailymail_concise-Broadfield](https://huggingface.co/datasets/broadfield-dev/abisee_cnn_dailymail_concise-Broadfield) dataset.
## Training Details
- **Task:** CAUSAL_LM
- **Epochs:** 1
- **Learning Rate:** 2e-05
- **Gradient Accumulation Steps:** 4
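The training script itself is not published in this card. The sketch below shows one way to reproduce the three listed hyperparameters with the Hugging Face `Trainer`; the dataset's `text` column and the tokenization step are assumptions about the data schema, and every setting not listed above is left at its default.
```python
# Minimal sketch, assuming a plain "text" column in the dataset.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_id = "broadfield-dev/gemma-3-270m-tuned-0102-0441"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

dataset = load_dataset("broadfield-dev/abisee_cnn_dailymail_concise-Broadfield")

def tokenize(batch):
    # The "text" column name is an assumption about the dataset schema.
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset["train"].map(
    tokenize, batched=True, remove_columns=dataset["train"].column_names
)

args = TrainingArguments(
    output_dir="gemma-3-270m-tuned",
    num_train_epochs=1,             # Epochs: 1
    learning_rate=2e-5,             # Learning Rate: 2e-05
    gradient_accumulation_steps=4,  # Gradient Accumulation Steps: 4
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    # mlm=False makes the collator build causal-LM labels from the inputs
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```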
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "broadfield-dev/gemma-3-270m-tuned-0102-0441-tuned-0102-1157"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the model on a GPU when one is available
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "system", "content": "Summarize this: "},
    {"role": "user", "content": "Your input here..."},
]

# return_dict=True yields both input_ids and attention_mask for generate()
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_dict=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=100)
# Decode only the newly generated tokens, skipping the echoed prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
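Note that `device_map="auto"` requires the `accelerate` package; for CPU-only inference, drop both `torch_dtype=torch.float16` and `device_map="auto"`, since float16 on CPU is slow and not always supported.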