---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: Qilex/private_guys
metrics: []
---


# VirtualPetDiffusion2

## Model description

This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library 
on a dataset of roughly 8,000 virtual pet thumbnail images.

## Intended uses & limitations

This model can be used to generate small (128x128) virtual pet-like thumbnails.
The pets are generally somewhat abstract.

#### How to use

```python
from diffusers import DiffusionPipeline
pipeline = DiffusionPipeline.from_pretrained("Qilex/VirtualPetDiffusion2")
image = pipeline().images[0]
# display() only works in notebooks; elsewhere use image.show() or image.save("pet.png")
display(image)
```

## Training data

This model was trained on roughly 8,000 virtual pet thumbnail images (80x80px).
The data was augmented with random flips, rotations, and perspective transforms (via torchvision) to prevent some of the issues seen in the first VirtualPetDiffusion.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None, and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: no
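The schedule above ramps the learning rate over 500 warmup steps. As an illustration only (a hypothetical helper, not code from the training script), a linear warmup multiplier looks like:

```python
def warmup_lr(step: int, base_lr: float = 1e-4, warmup_steps: int = 500) -> float:
    """Linearly ramp the learning rate from 0 to base_lr over warmup_steps,
    then hold it constant (matching learning_rate=0.0001, lr_warmup_steps=500)."""
    if step >= warmup_steps:
        return base_lr
    return base_lr * step / warmup_steps
```

For example, the rate is 0 at step 0, half of `base_lr` at step 250, and the full `base_lr` from step 500 onward.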

### Training results

📈 [TensorBoard logs](https://huggingface.co/Qilex/VirtualPetDiffusion2/tensorboard?#scalars)