How to use SushantGautam/kandi2-decoder-medical-model with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "SushantGautam/kandi2-decoder-medical-model",
    torch_dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

This pipeline was finetuned from kandinsky-community/kandinsky-2-2-decoder on the waitwhoami/vqa_caption.dataset-full dataset. Below are some example images generated with the finetuned pipeline using the following prompt: "The colonoscopy image contains a single, moderate-sized polyp that has not been removed, appearing in red and pink tones in the center and lower areas."
You can use the pipeline like so:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "SushantGautam/kandi2-decoder-medical-model", torch_dtype=torch.float16
)

prompt = "The colonoscopy image contains a single, moderate-sized polyp that has not been removed, appearing in red and pink tones in the center and lower areas."
image = pipeline(prompt).images[0]
image.save("my_image.png")
```
These are the key hyperparameters used during training:
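The exact hyperparameter values are not listed here. For orientation, fine-tunes of the Kandinsky 2.2 decoder are typically produced with the `train_text_to_image_decoder.py` script from diffusers' Kandinsky 2.2 text-to-image examples; the sketch below is an illustrative assumption, not the actual run configuration (flag values such as resolution and batch size are placeholders):

```shell
# Hypothetical invocation of diffusers' Kandinsky 2.2 decoder fine-tuning
# script (examples/kandinsky2_2/text_to_image/train_text_to_image_decoder.py).
# Flag values below are illustrative, not the recorded run configuration.
MODEL="kandinsky-community/kandinsky-2-2-decoder"
DATASET="waitwhoami/vqa_caption.dataset-full"

# Print the command rather than launching it, since training needs a GPU:
echo accelerate launch train_text_to_image_decoder.py \
  --pretrained_decoder_model_name_or_path="$MODEL" \
  --dataset_name="$DATASET" \
  --resolution=768 \
  --train_batch_size=1 \
  --output_dir="kandi2-decoder-medical-model"
```

The actual values used for this checkpoint are recorded on the wandb run referenced below.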
More information on all the CLI arguments and the environment is available on your wandb run page.
Base model
kandinsky-community/kandinsky-2-2-decoder