msgxai committed on
Commit 4302ebf · 1 Parent(s): 557227d

chore: fix code

Files changed (2):
  1. README.md (+23, −1)
  2. handler.py (+1, −2)
README.md CHANGED
@@ -262,4 +262,26 @@ This will:
 1. Send a request to your endpoint
 2. Download the generated image
 3. Save it to the specified output directory
-4. Display the seed used for generation
+4. Display the seed used for generation
+
+## Troubleshooting
+
+### Error: "You are trying to load the model files of the `variant=fp16`, but no such modeling files are available"
+
+If you encounter this error when deploying your endpoint, the model you are trying to use does not ship an explicit fp16 variant of its weights. To fix this:
+
+1. Open `handler.py`
+2. Find the `StableDiffusionXLPipeline.from_pretrained` call
+3. Remove the `variant="fp16"` parameter
+
+The corrected code should look like:
+```python
+pipe = StableDiffusionXLPipeline.from_pretrained(
+    ckpt_dir,
+    vae=vae,
+    torch_dtype=torch.float16,
+    use_safetensors=self.cfg.get("use_safetensors", True)
+)
+```
+
+This change loads the model in fp16 precision without requiring a dedicated fp16 variant of the model weights.
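An alternative to removing the parameter outright is to attempt the fp16 variant first and fall back when those files are missing. The helper below is a hypothetical sketch (not part of this repo); the exact exception type diffusers raises for a missing variant may differ, so it catches broadly:

```python
def load_with_fp16_fallback(loader, *args, **kwargs):
    """Try loading with the fp16 weight variant; if the checkpoint does
    not ship fp16-variant files, retry without `variant` so the weights
    are still cast to half precision via torch_dtype."""
    try:
        return loader(*args, variant="fp16", **kwargs)
    except (OSError, ValueError):
        # Assumption: diffusers raises one of these when no fp16
        # variant files exist; retry without the variant.
        return loader(*args, **kwargs)

# Hypothetical usage with the names from handler.py:
# pipe = load_with_fp16_fallback(
#     StableDiffusionXLPipeline.from_pretrained,
#     ckpt_dir, vae=vae, torch_dtype=torch.float16,
#     use_safetensors=True,
# )
```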
handler.py CHANGED
@@ -103,8 +103,7 @@ class EndpointHandler:
     ckpt_dir,
     vae=vae,
     torch_dtype=torch.float16,
-    use_safetensors=self.cfg.get("use_safetensors", True),
-    variant="fp16"
+    use_safetensors=self.cfg.get("use_safetensors", True)
 )
 # Move model to GPU
 pipe = pipe.to("cuda")