Instructions for using WaveCut/Anima-Preview-3-SDNQ-uint4-diffusers with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use WaveCut/Anima-Preview-3-SDNQ-uint4-diffusers with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for apple devices
pipe = DiffusionPipeline.from_pretrained(
    "WaveCut/Anima-Preview-3-SDNQ-uint4-diffusers",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Cosmos
How to use WaveCut/Anima-Preview-3-SDNQ-uint4-diffusers with Cosmos:
```python
# No code snippets available yet for this library.
# To use this model, check the repository files and the library's documentation.
# Want to help? PRs adding snippets are welcome at:
# https://github.com/huggingface/huggingface.js
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
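The Diffusers snippet above hardcodes `device_map="cuda"` and only hints, via a comment, at switching to `"mps"` on Apple devices. As a minimal sketch of that choice (the helper name is illustrative, not part of the model card or the Diffusers API), the selection logic is:

```python
# Illustrative helper (not from the model card): map hardware availability
# to the device string used in the Diffusers snippet above.
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    if cuda_available:
        return "cuda"  # NVIDIA GPUs
    if mps_available:
        return "mps"   # Apple Silicon
    return "cpu"       # CPU fallback

# In practice the flags would come from torch, e.g.
# torch.cuda.is_available() and torch.backends.mps.is_available().
```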
Fix Diffusers class metadata warning
- README.md +3 -0
- model_index.json +1 -4
README.md
CHANGED

````diff
@@ -41,6 +41,7 @@ from diffusers import DiffusionPipeline
 
 pipe = DiffusionPipeline.from_pretrained(
     "WaveCut/Anima-Preview-3-SDNQ-uint4-diffusers",
+    custom_pipeline="pipeline",
     torch_dtype=torch.bfloat16,
     trust_remote_code=True,
 ).to("cuda")
@@ -59,6 +60,8 @@ image = pipe(
 ).images[0]
 ```
 
+Because the Anima pipeline is custom code, pass `custom_pipeline="pipeline"`; `trust_remote_code=True` allows Diffusers to load `pipeline.py` from this repo.
+
 ## Prompting
 
 Anima was trained on Danbooru-style tags, natural language captions, and mixtures of both. The upstream Anima Preview 3 card recommends about 1MP generation, for example `1024x1024`, `896x1152`, or `1152x896`, with roughly 30-50 steps and CFG 4-5.
````
model_index.json
CHANGED

```diff
@@ -1,8 +1,5 @@
 {
-  "_class_name": [
-    "pipeline",
-    "AnimaTextToImagePipeline"
-  ],
+  "_class_name": "AnimaTextToImagePipeline",
   "_diffusers_version": "0.37.0",
   "text_encoder": [
     "transformers",
```
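The commit replaces a two-element `_class_name` list with a plain string, which is what current Diffusers expects and what silences the metadata warning. As a hypothetical sketch of why both forms name the same pipeline class (the helper is illustrative, not Diffusers' actual loading code):

```python
# Illustrative only: normalize the `_class_name` field from a parsed
# model_index.json. Some repos stored a [module, class] pair; newer
# Diffusers expects a plain string naming the pipeline class.
def normalize_class_name(model_index: dict) -> str:
    value = model_index["_class_name"]
    if isinstance(value, list):
        # e.g. ["pipeline", "AnimaTextToImagePipeline"] -> last element
        return value[-1]
    return value

old_index = {"_class_name": ["pipeline", "AnimaTextToImagePipeline"]}
new_index = {"_class_name": "AnimaTextToImagePipeline"}

# Both forms resolve to the same class name.
assert normalize_class_name(old_index) == normalize_class_name(new_index)
```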