Instructions to use dataautogpt3/ProteusV0.2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use dataautogpt3/ProteusV0.2 with Diffusers:
```
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "dataautogpt3/ProteusV0.2",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "black fluffy gorgeous dangerous cat animal creature, large orange eyes, big fluffy ears, piercing gaze, full moon, dark ambiance, best quality, extremely detailed"
image = pipe(prompt).images[0]
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
This model is not running on T4
#7
by areumtecnologia - opened
Is it that heavy?
Hi, if you have tried it on the free Colab tier, it may have run out of CPU RAM.
The T4 has enough GPU RAM: the model runs on Kaggle with either a P100 (16 GB) or 2× T4. But CPU RAM usage reaches almost 16 GB, which is too much for the free Colab tier (around 13 GB or less).
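For intuition on why CPU RAM, not GPU RAM, is the bottleneck, here is a rough back-of-the-envelope sketch. The ~3.5B total parameter count for an SDXL-class pipeline (UNet plus two text encoders and VAE) and the load-in-fp32-then-cast path are assumptions for illustration, not figures from this thread:

```python
# Rough CPU RAM estimate for loading an SDXL-class pipeline such as ProteusV0.2.
# Assumed figure (not from this thread): ~3.5e9 total parameters.
PARAMS = 3.5e9
BYTES_FP32 = 4   # float32: 4 bytes per parameter
BYTES_BF16 = 2   # bfloat16: 2 bytes per parameter

def gib(n_bytes: float) -> float:
    """Convert bytes to GiB."""
    return n_bytes / 2**30

fp32_gib = gib(PARAMS * BYTES_FP32)   # weights materialized in float32
bf16_gib = gib(PARAMS * BYTES_BF16)   # final bfloat16 copy

print(f"fp32 weights: {fp32_gib:.1f} GiB")   # ~13 GiB
print(f"bf16 weights: {bf16_gib:.1f} GiB")   # ~6.5 GiB
# If weights are first materialized in fp32 and then cast to bf16, peak CPU RAM
# can approach the sum (~20 GiB) - well past the free Colab allowance, even
# though the final bf16 model fits comfortably in a T4's 16 GB of GPU RAM.
```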
Draft Session (GPU P100 on):
- RAM: 15.7 GB used (29 GB max)
- GPU memory: 11.6 GB used (16 GB max)
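To check whether a session has enough CPU RAM before attempting to load the model, a stdlib-only probe can help (a sketch assuming a Linux host such as Colab or Kaggle; the `SC_PAGE_SIZE`/`SC_PHYS_PAGES` keys are Linux-specific):

```python
import os

def total_ram_gib() -> float:
    # Linux-specific: total physical memory = page size * number of pages.
    return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 2**30

print(f"Total CPU RAM: {total_ram_gib():.1f} GiB")
# Free Colab sessions report roughly 12-13 GiB here; the Kaggle session
# above shows about 29 GB, which is why it survives the load.
```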