---
base_model: Qwen/Qwen3-1.7B-Base
library_name: transformers
model_name: dracula-flow-base
tags:
- generated_from_trainer
- sft
- trl
license: apache-2.0
---
# Model Card for dracula-flow-base
This model is a fine-tuned version of [Qwen/Qwen3-1.7B-Base](https://huggingface.co/Qwen/Qwen3-1.7B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
This model was trained specifically on Dracula Flow 1-5 for 6 epochs, which makes it passable at writing Dracula Flow bars. It is not advised to use it as an actual Dracula Flow generator; instead, use it to generate synthetic data for training the real Dracula Flow model (a sketch of that workflow follows the quick-start example below).
## Quick start
```python
from transformers import pipeline
prompt = "[dracula flow]: "
# The card's template left the model ID unset ("None"); substitute the repo ID this model is published under.
generator = pipeline("text-generation", model="dracula-flow-base", device="cuda")
output = generator(prompt, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
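Since the intended use is synthetic data generation, a minimal sketch of sampling a batch of bars and writing them out as prompt/completion pairs for a later SFT run might look like the following. The repo ID, sampling settings, and the `dracula_flow_synthetic.jsonl` file name are assumptions, not part of this card:
```python
import json

from transformers import pipeline

# Hypothetical repo ID; substitute the one this model is published under.
generator = pipeline("text-generation", model="dracula-flow-base", device="cuda")

prompt = "[dracula flow]: "
# Sample 32 candidate completions with mild temperature; all settings are illustrative.
samples = generator(
    [prompt] * 32,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.9,
    return_full_text=False,
)

# Write prompt/completion pairs in a JSONL layout usable for a later SFT run.
with open("dracula_flow_synthetic.jsonl", "w") as f:
    for candidates in samples:
        record = {"prompt": prompt, "completion": candidates[0]["generated_text"]}
        f.write(json.dumps(record) + "\n")
```
The generated file can then be filtered by hand before being used to train the real Dracula Flow model.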
## Training procedure
This model was trained with SFT.
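The card does not include the training script, but a minimal sketch of an equivalent run with TRL's `SFTTrainer` might look like the following. The dataset file, batch size, and learning rate are assumptions; only the base model and the 6 epochs come from this card:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical local dataset of Dracula Flow 1-5 bars; the card does not name one.
dataset = load_dataset("json", data_files="dracula_flow_1_to_5.jsonl", split="train")

config = SFTConfig(
    output_dir="dracula-flow-base",
    num_train_epochs=6,               # the 6 epochs are stated in this card
    per_device_train_batch_size=4,    # illustrative; not stated in the card
    learning_rate=2e-5,               # illustrative; not stated in the card
)

trainer = SFTTrainer(
    model="Qwen/Qwen3-1.7B-Base",     # the base model stated in this card
    args=config,
    train_dataset=dataset,
)
trainer.train()
```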
### Framework versions
- TRL: 0.26.2
- Transformers: 4.57.3
- PyTorch: 2.7.0
- Datasets: 4.4.2
- Tokenizers: 0.22.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |