Instructions to use dataautogpt3/OpenDalleV1.1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use dataautogpt3/OpenDalleV1.1 with Diffusers:
```
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "dataautogpt3/OpenDalleV1.1",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "black fluffy gorgeous dangerous cat animal creature, large orange eyes, big fluffy ears, piercing gaze, full moon, dark ambiance, best quality, extremely detailed"
image = pipe(prompt).images[0]
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
After a system upgrade I'm getting an annoying error with this model (Proteus works fine).
#35 · opened by nyyotam
The error message I'm receiving is:
```
ValueError: Non-consecutive added token '<|startoftext|>' found. Should have index 49408 but has index 49406 in saved vocabulary.
```
Now, since I am on Gentoo and cannot really keep track of everything an `emerge --update @world` changes, the problem is most likely with my system and not the model. Can anyone here point me to a possible culprit on my system? Or is it the model after all? I mean, Proteus works.
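For context, the error says that the tokenizer's added special tokens don't sit directly after the base vocabulary: the loader expects each added token's index to continue consecutively from the vocab size (49408 for CLIP), but the saved file reports 49406. Below is a minimal, hypothetical reconstruction of that consistency check, not the actual transformers code; the function name and the sample token indices are illustrative, chosen to mirror the numbers in the traceback:

```python
def find_added_token_mismatches(vocab_size, added_tokens):
    """Sketch of the consecutive-index rule behind the ValueError.

    added_tokens: mapping of token string -> saved index, as found in
    a tokenizer's added_tokens.json. Each added token is expected to be
    numbered consecutively starting at vocab_size; any deviation is
    reported as (token, expected_index, actual_index).
    """
    mismatches = []
    for expected, (token, index) in enumerate(
        sorted(added_tokens.items(), key=lambda kv: kv[1]), start=vocab_size
    ):
        if index != expected:
            mismatches.append((token, expected, index))
    return mismatches

# Mirrors the reported case: base CLIP vocab is 49408 tokens, but the
# saved '<|startoftext|>' index is 49406 (inside the base vocab range).
print(find_added_token_mismatches(
    49408, {"<|startoftext|>": 49406, "<|endoftext|>": 49407}
))
```

In other words, the mismatch is between the base vocabulary size the installed tokenizer code assumes and the indices stored in the model's saved tokenizer files, which is why a library version change on the system (rather than the model itself) can trigger it.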