pszemraj committed · Commit fd9f0dc · Parent: 25e7817

Update README.md

Files changed (1): README.md (+23 −8)

README.md CHANGED
@@ -7,28 +7,43 @@ datasets:
 model-index:
 - name: distilgpt2-multiprompt-v2-fp
   results: []
+widget:
+- text: "morning sun over Jakarta"
+  example_title: "morning sun"
+- text: "WARNING: pip is"
+  example_title: "pip"
+- text: "sentient cheese"
+  example_title: "sentient cheese"
+- text: "cheeps are"
+  example_title: "cheeps"
+- text: "avocado armchair"
+  example_title: "creative prompt"
+- text: "Landscape of"
+  example_title: "landscape"
+parameters:
+  min_length: 16
+  max_length: 96
+  no_repeat_ngram_size: 1
+  do_sample: True
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
+# distilgpt2-multiprompt
 
-# distilgpt2-multiprompt-v2-fp
+Generate/augment your prompt with a model trained on a large & diverse prompt dataset.
 
 This model is a fine-tuned version of [pszemraj/distilgpt2-multiprompt-v1](https://huggingface.co/pszemraj/distilgpt2-multiprompt-v1) on the pszemraj/text2image-prompts-multi dataset.
 It achieves the following results on the evaluation set:
 - Loss: 2.0213
+- perplexity = 7.55
 
-## Model description
-
-More information needed
 
 ## Intended uses & limitations
 
-More information needed
+- The model will generate augmentations that are biased towards the training data, i.e. what people already asked for in the SD/midjourney discords, etc. Creating a larger dataset was an attempt at mitigating this through more data from different datasets.
 
 ## Training and evaluation data
 
-More information needed
+See the `pszemraj/text2image-prompts-multi` dataset card for details. The dataset is a compilation of several text-to-image prompt datasets on huggingface :)
 
 ## Training procedure
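The perplexity figure added in this commit follows directly from the reported evaluation loss: for a causal language model, perplexity is the exponential of the mean cross-entropy loss. A quick sanity check:

```python
import math

# Eval loss reported on the model card (mean cross-entropy, nats/token).
eval_loss = 2.0213

# Perplexity = exp(cross-entropy loss).
perplexity = math.exp(eval_loss)

print(round(perplexity, 2))  # 7.55, matching the value in the diff
```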