recoilme committed on
Commit cf5bfcc · 1 Parent(s): aa2eabf
Files changed (1)
  1. README.md +28 -33
README.md CHANGED
@@ -7,20 +7,40 @@ pipeline_tag: text-to-image
 
  *XS Size, Excess Quality*
 
- At AiArtLab, we strive to create a compact (1.7b) and fast (3 sec/image) model that can be trained on consumer graphics cards with a limited budget.
 
- - We use U-Net for its ability to efficiently handle small datasets and train quickly on GPUs with 16GB of memory.
- - We have chosen the multilingual/multimodal encoder Mexma-SigLIP, which supports 80 languages and processes sentences rather than individual tokens.
- - We use the AuraDiffusion 16ch-VAE architecture, which preserves details and anatomy without the "haze" effect.
- - For training, we have chosen AdamW-8bit, which allows for larger batch sizes and accelerates training on low-cost GPUs.
- - The model was trained on approximately 1 million images with various resolutions and styles, including anime and realistic photos.
- - Various annotation methods were used, including both manual and automated approaches.
 
  ### Model Limitations:
  - Limited concept coverage due to the small dataset.
  - The Image2Image functionality requires further training.
 
  Train status, in progress: [wandb](https://wandb.ai/recoilme/unet)
 
@@ -138,29 +158,4 @@ if __name__ == "__main__":
      image.save(f"{output_folder}/{project_name}_{idx}.jpg")
 
  print("Images generated and saved to:", output_folder)
- ```
- 
- ## Acknowledgments
- - **[Stan](https://t.me/Stangle)** — Key investor. Primary financial support. Thank you for believing in us when others called it madness.
- - **Captainsaturnus** — Material support.
- - **Love. Death. Transformers.** — Material support.
- - **Lovescape** & **Whargarbl** — Moral support.
- - **[CaptionEmporium](https://huggingface.co/CaptionEmporium)** — Datasets.
- 
- > "We believe the future lies in efficient, compact models. We are grateful for the donations and hope for your continued support."
- 
- ## Training budget
- 
- Around $1k so far; the research budget is about $10k.
- 
- ## Donations
- 
- Please contact us if you can provide GPUs or funding for training.
- 
- DOGE: DEw2DR8C7BnF8GgcrfTzUjSnGkuMeJhg83
- 
- BTC: 3JHv9Hb8kEW8zMAccdgCdZGfrHeMhH1rpN
- 
- ## Contacts
- 
- [recoilme](https://t.me/recoilme)
 
 
  *XS Size, Excess Quality*
 
+ At AiArtLab, we strive to create a free, compact (1.7b) and fast (3 sec/image) model that can be trained on consumer graphics cards.
 
+ - We use U-Net for its high efficiency.
+ - We have chosen the multilingual/multimodal encoder Mexma-SigLIP, which supports 80 languages.
+ - We use the AuraDiffusion 16ch-VAE architecture, which preserves details and anatomy.
+ - The model was trained (~1 month on 4xA5000) on approximately 1 million images with various resolutions and styles, including anime and realistic photos.
 
  ### Model Limitations:
  - Limited concept coverage due to the small dataset.
  - The Image2Image functionality requires further training.
 
+ ## Acknowledgments
+ - **[Stan](https://t.me/Stangle)** — Key investor. Thank you for believing in us when others called it madness.
+ - **Captainsaturnus**
+ - **Love. Death. Transformers.**
+ 
+ ## Datasets
+ - **[CaptionEmporium](https://huggingface.co/CaptionEmporium)**
+ 
+ ## Training budget
+ 
+ Around $1k so far, but the research budget is about $10k.
+ 
+ ## Donations
+ 
+ Please contact us if you can provide GPUs or funding for training.
+ 
+ DOGE: DEw2DR8C7BnF8GgcrfTzUjSnGkuMeJhg83
+ 
+ BTC: 3JHv9Hb8kEW8zMAccdgCdZGfrHeMhH1rpN
+ 
+ ## Contacts
 
+ [recoilme](https://t.me/recoilme)
 
  Train status, in progress: [wandb](https://wandb.ai/recoilme/unet)
 
 
      image.save(f"{output_folder}/{project_name}_{idx}.jpg")
 
  print("Images generated and saved to:", output_folder)
+ ```
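
The removed bullet's claim that AdamW-8bit leaves room for larger batches on 16GB cards can be sanity-checked with rough arithmetic. This is a sketch, not the authors' accounting: it assumes AdamW's two optimizer-state tensors per parameter (fp32 vs. 8-bit, as in bitsandbytes-style optimizers) and ignores weights, gradients, and activations.

```python
# Rough optimizer-state memory estimate for a 1.7B-parameter U-Net.
# AdamW keeps two state tensors (exp_avg, exp_avg_sq) per parameter:
# fp32 states cost 8 bytes/param, 8-bit states cost ~2 bytes/param.
params = 1.7e9

adamw_fp32_gb = params * 2 * 4 / 1024**3  # two fp32 tensors -> ~12.7 GB
adamw_8bit_gb = params * 2 * 1 / 1024**3  # two uint8 tensors -> ~3.2 GB

print(f"AdamW fp32 states: {adamw_fp32_gb:.1f} GB")
print(f"AdamW 8-bit states: {adamw_8bit_gb:.1f} GB")
```

On a 16 GB card, the ~9 GB saved on optimizer state is the headroom the larger batch sizes come from.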