Tech-Meld/gpus-everywhere

Tags: Text-to-Image · Diffusers · stable-diffusion · lora · template:sd-lora · migrated · concept · rtx · gpu · amd · nvidia · gtx
Instructions for using Tech-Meld/gpus-everywhere with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • Diffusers

    How to use Tech-Meld/gpus-everywhere with Diffusers:

    pip install -U diffusers transformers accelerate

    import torch
    from diffusers import DiffusionPipeline

    # Use "mps" instead of "cuda" on Apple-silicon devices.
    pipe = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.bfloat16,
    ).to("cuda")
    pipe.load_lora_weights("Tech-Meld/gpus-everywhere")

    prompt = " "  # replace with your own prompt
    image = pipe(prompt).images[0]
  • Inference
  • Notebooks
  • Google Colab
  • Kaggle
  • Local Apps
  • Draw Things
  • DiffusionBee
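
As background on what `load_lora_weights` does conceptually: a LoRA checkpoint such as GPUs_Everywhere.safetensors stores pairs of low-rank factor matrices, and each targeted base weight is patched as W + (alpha / rank) * B @ A. A minimal NumPy sketch of that update (the names, shapes, and values here are illustrative, not the actual contents of this checkpoint):

```python
import numpy as np

def apply_lora(W, A, B, alpha, rank):
    """Patch a base weight with a low-rank update: W' = W + (alpha / rank) * B @ A."""
    return W + (alpha / rank) * (B @ A)

rng = np.random.default_rng(0)
d_out, d_in, rank, alpha = 8, 16, 4, 4.0
W = rng.standard_normal((d_out, d_in))  # base weight from the pretrained model
A = rng.standard_normal((rank, d_in))   # LoRA "down" projection
B = np.zeros((d_out, rank))             # LoRA "up" projection, zero at the start of training
W_patched = apply_lora(W, A, B, alpha, rank)

# With B all zeros the update is a no-op, which is why LoRA training
# starts out identical to the base model.
assert np.allclose(W_patched, W)
```

Because only A and B are stored, the checkpoint stays small (228 MB here) relative to the full SDXL base model it patches.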
gpus-everywhere
228 MB
  • 1 contributor
History: 2 commits
Tech-Meld
Upload folder using huggingface_hub
2984acc verified almost 2 years ago
  • .gitattributes
    1.52 kB
    initial commit almost 2 years ago
  • 17295193.jpeg
    19.6 kB
    Upload folder using huggingface_hub almost 2 years ago
  • GPUs_Everywhere.safetensors
    228 MB
    Upload folder using huggingface_hub almost 2 years ago
  • README.md
    1.54 kB
    Upload folder using huggingface_hub almost 2 years ago