Instructions for using Baptlem/UCDR-Net_models with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Diffusers
How to use Baptlem/UCDR-Net_models with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch "cuda" to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Baptlem/UCDR-Net_models",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
- Notebooks
- Google Colab
- Kaggle
Update coyo2M-bridge325k/readme.md
coyo2M-bridge325k/readme.md
CHANGED
```diff
@@ -1,3 +1,5 @@
 Repo for controlnet model trained on 3.5M samples from coyo-700M dataset and 2.6M samples from bridge dataset
 For each training step, the batch is composed of 28 images from coyo and 4 from Bridge.
 We created a custom DataLoader that would load the batch with this 28:4 ratio each step from randomly selected images in both dataset.
+Therefore, we didn't trained for a certain number of epoch but for a certain number of step.
+If we consider that an epoch correspond to the process of 2.6M images, then we processed 2.275M images from coyo and 325k images from bridge per epoch.
```
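The step-based 28:4 mixing described in the readme can be sketched with a simple batch generator. This is an illustrative stand-in, not the repo's actual DataLoader; the function name, index lists, and sizes are all hypothetical:

```python
import random

def mixed_batches(coyo_indices, bridge_indices, n_coyo=28, n_bridge=4, steps=10, seed=0):
    """Yield batches mixing two datasets at a fixed 28:4 ratio per step.

    Samples are drawn randomly at each step, mirroring training for a
    number of steps rather than a number of epochs.
    """
    rng = random.Random(seed)
    for _ in range(steps):
        batch = rng.sample(coyo_indices, n_coyo) + rng.sample(bridge_indices, n_bridge)
        rng.shuffle(batch)  # interleave the two sources within the batch
        yield batch

# Tiny stand-ins for the real datasets (hypothetical sizes).
coyo = list(range(1000))          # pretend coyo subset
bridge = list(range(1000, 1100))  # pretend Bridge subset

for batch in mixed_batches(coyo, bridge, steps=3):
    from_bridge = sum(1 for i in batch if i >= 1000)
    print(len(batch), len(batch) - from_bridge, from_bridge)  # 32 28 4 each step
```

At this ratio, Bridge contributes 4/32 = 1/8 of each batch, so an "epoch" of 2.6M processed images works out to 2.6M × 7/8 = 2.275M coyo images and 2.6M × 1/8 = 325k Bridge images, matching the readme's figures.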