WaveCut committed · verified
Commit 24b949e · 1 Parent(s): 937ce05

Fix Diffusers class metadata warning

Files changed (2)
  1. README.md +3 -0
  2. model_index.json +1 -4
README.md CHANGED

````diff
@@ -41,6 +41,7 @@ from diffusers import DiffusionPipeline
 
 pipe = DiffusionPipeline.from_pretrained(
     "WaveCut/Anima-Preview-3-SDNQ-uint4-diffusers",
+    custom_pipeline="pipeline",
     torch_dtype=torch.bfloat16,
     trust_remote_code=True,
 ).to("cuda")
@@ -59,6 +60,8 @@ image = pipe(
 ).images[0]
 ```
 
+Because the Anima pipeline is custom code, pass `custom_pipeline="pipeline"`; `trust_remote_code=True` allows Diffusers to load `pipeline.py` from this repo.
+
 ## Prompting
 
 Anima was trained on Danbooru-style tags, natural language captions, and mixtures of both. The upstream Anima Preview 3 card recommends about 1MP generation, for example `1024x1024`, `896x1152`, or `1152x896`, with roughly 30-50 steps and CFG 4-5.
````
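The ~1MP guidance in the README's prompting section can be sanity-checked in a few lines (a throwaway sketch; `megapixels` is a helper defined here, not part of this repo):

```python
def megapixels(width: int, height: int) -> float:
    """Pixel count in millions for a given resolution."""
    return width * height / 1_000_000

# The three resolutions suggested by the upstream Anima Preview 3 card;
# each lands within a few percent of one megapixel.
sizes = [(1024, 1024), (896, 1152), (1152, 896)]
for w, h in sizes:
    print(f"{w}x{h} -> {megapixels(w, h):.2f} MP")
```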
model_index.json CHANGED

```diff
@@ -1,8 +1,5 @@
 {
-    "_class_name": [
-        "pipeline",
-        "AnimaTextToImagePipeline"
-    ],
+    "_class_name": "AnimaTextToImagePipeline",
     "_diffusers_version": "0.37.0",
     "text_encoder": [
         "transformers",
```
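The warning this commit fixes stems from the top-level `_class_name` being a JSON array rather than a plain string. A minimal sketch of reading the class name from either shape (the dicts below are reduced reconstructions of the two `model_index.json` versions, not the full files):

```python
import json

# Reduced reconstructions of the metadata before and after this commit;
# all other model_index.json fields are omitted.
before = json.loads('{"_class_name": ["pipeline", "AnimaTextToImagePipeline"]}')
after = json.loads('{"_class_name": "AnimaTextToImagePipeline"}')

def pipeline_class_name(index: dict) -> str:
    """Return the pipeline class name whether _class_name is a plain
    string or a [module, class] pair."""
    name = index["_class_name"]
    return name if isinstance(name, str) else name[-1]
```

Both shapes name the same class; the string form is simply what the metadata check expects at the top level, which is why the fix drops the list.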