skdrx committed on
Commit 6963f5a · verified · 1 Parent(s): 0e727a9

Update README.md

Files changed (1):
  README.md (+8 −4)
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 base_model: Qwen/Qwen3-1.7B-Base
 library_name: transformers
-model_name: qwen3-sft-output
+model_name: dracula-flow-base
 tags:
 - generated_from_trainer
 - sft
@@ -9,19 +9,23 @@ tags:
 licence: license
 ---
 
-# Model Card for qwen3-sft-output
+# Model Card for dracula-flow-base
 
 This model is a fine-tuned version of [Qwen/Qwen3-1.7B-Base](https://huggingface.co/Qwen/Qwen3-1.7B-Base).
 It has been trained using [TRL](https://github.com/huggingface/trl).
 
+This has been specifically trained on Dracula flow 1-5 for 6 epochs to get it to be semi decent enough at writing dracula flow bars. It is not advised to use it as an actual dracula flow generator but rather as a generator for synthetic data to then train the real dracula flow model.
+
+
+
 ## Quick start
 
 ```python
 from transformers import pipeline
 
-question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+prompt = "[dracula flow]: "
 generator = pipeline("text-generation", model="None", device="cuda")
-output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+output = generator(prompt, max_new_tokens=128, return_full_text=False)[0]
 print(output["generated_text"])
 ```
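The quick-start change above swaps a chat-style message list for a raw string prompt, which fits the checkpoint: Qwen3-1.7B-Base is a base (non-instruct) model, so plain-text completion is the natural interface. A minimal sketch of the two input shapes involved (illustrative only, no model is loaded; the `model="None"` placeholder in the card still needs a real repo id before the snippet runs):

```python
# Sketch of the two pipeline input shapes the diff switches between.
# Illustrative only: no model is loaded here.

# Old card: chat-style input, a list of {"role", "content"} messages.
# The text-generation pipeline runs this shape through the tokenizer's
# chat template -- which a *base* checkpoint typically does not define.
question = (
    "If you had a time machine, but could only go to the past or the "
    "future once and never return, which would you choose and why?"
)
chat_input = [{"role": "user", "content": question}]

# New card: a raw string prompt, completed as plain text -- the right
# interface for Qwen/Qwen3-1.7B-Base, which is not instruction-tuned.
prompt = "[dracula flow]: "

# Both shapes are accepted by pipeline("text-generation", ...); only
# the plain-string form skips chat templating.
print(type(chat_input).__name__, type(prompt).__name__)
```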
31