sangoi-exe/sd-webui-codex

Tags: Text-to-Image, GGUF, stable-diffusion, stable-diffusion-xl, sdxl, flux, wan22, lora, klein, ltx-2, ltx2, image-to-image, video, codex-webui

Instructions for using sangoi-exe/sd-webui-codex with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • llama-cpp-python

    How to use sangoi-exe/sd-webui-codex with llama-cpp-python:

    # !pip install llama-cpp-python
    
    from llama_cpp import Llama
    
    llm = Llama.from_pretrained(
    	repo_id="sangoi-exe/sd-webui-codex",
    	filename="flux-tenc/t5xxl.gguf",
    )
    
    output = llm(
    	"Once upon a time,",
    	max_tokens=512,
    	echo=True
    )
    print(output)
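
    A minimal sketch (not from the model card): you can also fetch a single GGUF file from this repo with huggingface_hub, for example to load it in a GGUF-aware SD WebUI or ComfyUI loader. The filename below is taken from the repo's file listing.

    # pip install huggingface_hub
    
    from huggingface_hub import hf_hub_download
    
    # Download one GGUF from the repo; returns the local cache path.
    local_path = hf_hub_download(
    	repo_id="sangoi-exe/sd-webui-codex",
    	filename="flux/FLUX.1-dev-Q5_K_M-Codex.gguf",
    )
    print(local_path)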
  • Notebooks
  • Google Colab
  • Kaggle
  • Local Apps
  • llama.cpp

    How to use sangoi-exe/sd-webui-codex with llama.cpp:

    Install with Homebrew
    brew install llama.cpp
    # Start a local OpenAI-compatible server with a web UI:
    llama-server -hf sangoi-exe/sd-webui-codex:Q4_K_M
    # Run inference directly in the terminal:
    llama-cli -hf sangoi-exe/sd-webui-codex:Q4_K_M
    Install from WinGet (Windows)
    winget install llama.cpp
    # Start a local OpenAI-compatible server with a web UI:
    llama-server -hf sangoi-exe/sd-webui-codex:Q4_K_M
    # Run inference directly in the terminal:
    llama-cli -hf sangoi-exe/sd-webui-codex:Q4_K_M
    Use pre-built binary
    # Download pre-built binary from:
    # https://github.com/ggerganov/llama.cpp/releases
    # Start a local OpenAI-compatible server with a web UI:
    ./llama-server -hf sangoi-exe/sd-webui-codex:Q4_K_M
    # Run inference directly in the terminal:
    ./llama-cli -hf sangoi-exe/sd-webui-codex:Q4_K_M
    Build from source code
    git clone https://github.com/ggerganov/llama.cpp.git
    cd llama.cpp
    cmake -B build
    cmake --build build -j --target llama-server llama-cli
    # Start a local OpenAI-compatible server with a web UI:
    ./build/bin/llama-server -hf sangoi-exe/sd-webui-codex:Q4_K_M
    # Run inference directly in the terminal:
    ./build/bin/llama-cli -hf sangoi-exe/sd-webui-codex:Q4_K_M
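
    Once llama-server is running (via any of the installs above), it serves an OpenAI-compatible API, by default on port 8080. A minimal sketch querying it with the openai Python client; the model name below is a placeholder, since the server answers with whatever model it has loaded:

    # pip install openai
    
    from openai import OpenAI
    
    # llama-server's default local address; no real API key is needed.
    client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-required")
    
    response = client.chat.completions.create(
    	model="sd-webui-codex",  # placeholder; the server uses its loaded model
    	messages=[{"role": "user", "content": "Once upon a time,"}],
    	max_tokens=128,
    )
    print(response.choices[0].message.content)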
  • LM Studio
  • Jan
  • Ollama

    How to use sangoi-exe/sd-webui-codex with Ollama:

    ollama run hf.co/sangoi-exe/sd-webui-codex:Q4_K_M
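
    A minimal sketch (an assumption, not from the page): after pulling the model with the command above, you can also call it from Python via the official ollama package, assuming the Ollama daemon is running locally on its default port.

    # pip install ollama
    
    import ollama
    
    # Chat against the locally pulled model via the Ollama daemon.
    response = ollama.chat(
    	model="hf.co/sangoi-exe/sd-webui-codex:Q4_K_M",
    	messages=[{"role": "user", "content": "Once upon a time,"}],
    )
    print(response["message"]["content"])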
  • Unsloth Studio

    How to use sangoi-exe/sd-webui-codex with Unsloth Studio:

    Install Unsloth Studio (macOS, Linux, WSL)
    curl -fsSL https://unsloth.ai/install.sh | sh
    # Run unsloth studio
    unsloth studio -H 0.0.0.0 -p 8888
    # Then open http://localhost:8888 in your browser
    # Search for sangoi-exe/sd-webui-codex to start chatting
    Install Unsloth Studio (Windows)
    irm https://unsloth.ai/install.ps1 | iex
    # Run unsloth studio
    unsloth studio -H 0.0.0.0 -p 8888
    # Then open http://localhost:8888 in your browser
    # Search for sangoi-exe/sd-webui-codex to start chatting
    Use Hugging Face Spaces for Unsloth
    # No setup required
    # Open https://huggingface.co/spaces/unsloth/studio in your browser
    # Search for sangoi-exe/sd-webui-codex to start chatting
  • Docker Model Runner

    How to use sangoi-exe/sd-webui-codex with Docker Model Runner:

    docker model run hf.co/sangoi-exe/sd-webui-codex:Q4_K_M
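
    A minimal sketch, assuming Docker Model Runner's host TCP endpoint is enabled on its documented default port 12434; it exposes an OpenAI-compatible API under /engines/v1:

    # pip install openai
    
    from openai import OpenAI
    
    # Assumed default host endpoint for Docker Model Runner.
    client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="unused")
    
    response = client.chat.completions.create(
    	model="hf.co/sangoi-exe/sd-webui-codex:Q4_K_M",
    	messages=[{"role": "user", "content": "Once upon a time,"}],
    )
    print(response.choices[0].message.content)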
  • Lemonade

    How to use sangoi-exe/sd-webui-codex with Lemonade:

    Pull the model
    # Download Lemonade from https://lemonade-server.ai/
    lemonade pull sangoi-exe/sd-webui-codex:Q4_K_M
    Run and chat with the model
    lemonade run user.sd-webui-codex-Q4_K_M
    List all available models
    lemonade list
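
    A minimal sketch, assuming a local Lemonade server on its default port 8000 serving an OpenAI-compatible API under /api/v1 (both are assumptions; adjust to your install). The model name matches the one used with lemonade run above:

    # pip install openai
    
    from openai import OpenAI
    
    # Assumed Lemonade server defaults; change base_url if yours differs.
    client = OpenAI(base_url="http://localhost:8000/api/v1", api_key="lemonade")
    
    response = client.chat.completions.create(
    	model="user.sd-webui-codex-Q4_K_M",
    	messages=[{"role": "user", "content": "Once upon a time,"}],
    )
    print(response.choices[0].message.content)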
Files

sd-webui-codex · 95 GB
1 contributor
History: 14 commits
Latest commit: d388b3d (verified) · "Upload 9 files" by sangoi-exe · 11 days ago
  • flux-tenc
    Upload 3 files · about 2 months ago
  • flux-vae
    Upload 3 files · about 2 months ago
  • flux
    Upload FLUX.1-dev-Q5_K_M-Codex.gguf · 4 months ago
  • ip_adapter
    Upload 10 files · about 1 month ago
  • ltx2-connectors
    Upload 9 files · 11 days ago
  • ltx2-tenc
    Upload 9 files · 11 days ago
  • ltx2-vae
    Upload 9 files · 11 days ago
  • ltx2
    Upload 9 files · 11 days ago
  • wan22-tenc
    Upload 2 files · about 2 months ago
  • wan22-vae
    Upload 2 files · about 2 months ago
  • wan22
    Upload 2 files · 3 months ago
  • zimage-tenc
    Upload 2 files · about 2 months ago
  • zimage-vae
    Upload 2 files · about 2 months ago
  • zimage
    Upload 2 files · 4 months ago
  • .gitattributes
    2.26 kB · Upload 9 files · 11 days ago
  • README.md
    4.33 kB · Create README.md · 3 months ago
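
A minimal sketch (not part of the page): the same file tree can be enumerated programmatically with huggingface_hub, for example to pick a GGUF variant (flux, wan22, ltx2, zimage, ...) before downloading.

# pip install huggingface_hub

from huggingface_hub import list_repo_files

# List every file in the repo and keep only the GGUF weights.
for path in sorted(list_repo_files("sangoi-exe/sd-webui-codex")):
    if path.endswith(".gguf"):
        print(path)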