---
base_model: unsloth/gemma-3-27b-it-unsloth-bnb-4bit
library_name: transformers
model_name: outputs
tags:
- generated_from_trainer
- sft
- unsloth
- trl
licence: license
---

# Model Card for outputs

This model is a fine-tuned version of [unsloth/gemma-3-27b-it-unsloth-bnb-4bit](https://huggingface.co/unsloth/gemma-3-27b-it-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
# Load the fine-tuned checkpoint as a chat-style text-generation pipeline on GPU.
generator = pipeline("text-generation", model="batoulnn/outputs", device="cuda")
# Send the question in chat format and print only the newly generated reply.
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with supervised fine-tuning (SFT).
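
The exact training script is not included in this card. As a rough sketch, an SFT run that matches the base model and libraries listed in this card (Unsloth plus TRL's `SFTTrainer`) typically looks like the following; the dataset name, LoRA settings, and hyperparameters below are placeholders, not the values actually used for this checkpoint.

```python
# Illustrative sketch only: dataset and hyperparameters are placeholders.
from unsloth import FastModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

# Load the 4-bit quantized Gemma 3 base model through Unsloth.
model, tokenizer = FastModel.from_pretrained(
    model_name="unsloth/gemma-3-27b-it-unsloth-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastModel.get_peft_model(
    model,
    finetune_vision_layers=False,   # text-only fine-tuning
    finetune_language_layers=True,
    r=16,
    lora_alpha=16,
)

# Placeholder dataset; any SFT dataset with chat "messages" or a "text" column works.
train_dataset = load_dataset("your-username/your-sft-dataset", split="train")

trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,
    train_dataset=train_dataset,
    args=SFTConfig(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        logging_steps=10,
    ),
)
trainer.train()
```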

### Framework versions

- TRL: 0.19.1
- Transformers: 4.53.2
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.2
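
To reproduce results against the same environment, the locally installed versions can be compared with the list above. This snippet is a convenience check, not part of the original training setup.

```python
# Print installed versions to compare with the framework versions listed above.
from importlib.metadata import version

for pkg in ("trl", "transformers", "torch", "datasets", "tokenizers"):
    print(f"{pkg}: {version(pkg)}")
```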

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
	title        = {{TRL: Transformer Reinforcement Learning}},
	author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
	year         = 2020,
	journal      = {GitHub repository},
	publisher    = {GitHub},
	howpublished = {\url{https://github.com/huggingface/trl}}
}
```