Instructions to use LGxNDs/Geeked-Out-Quantization-Software with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use LGxNDs/Geeked-Out-Quantization-Software with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="LGxNDs/Geeked-Out-Quantization-Software",
    filename="Qwen3.6-GeekedOutAi-35B-A3B-BF16-IQ2_M-00001-of-00002.gguf",
)

# create_chat_completion expects a list of role/content messages;
# the prompt below is just an illustrative example
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Hello! What can you do?"}
    ]
)
print(response["choices"][0]["message"]["content"])
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use LGxNDs/Geeked-Out-Quantization-Software with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf LGxNDs/Geeked-Out-Quantization-Software:IQ2_M

# Run inference directly in the terminal:
llama-cli -hf LGxNDs/Geeked-Out-Quantization-Software:IQ2_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf LGxNDs/Geeked-Out-Quantization-Software:IQ2_M

# Run inference directly in the terminal:
llama-cli -hf LGxNDs/Geeked-Out-Quantization-Software:IQ2_M
Use pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf LGxNDs/Geeked-Out-Quantization-Software:IQ2_M

# Run inference directly in the terminal:
./llama-cli -hf LGxNDs/Geeked-Out-Quantization-Software:IQ2_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf LGxNDs/Geeked-Out-Quantization-Software:IQ2_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf LGxNDs/Geeked-Out-Quantization-Software:IQ2_M
Use Docker
docker model run hf.co/LGxNDs/Geeked-Out-Quantization-Software:IQ2_M
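Once llama-server is running (from any of the install paths above), any OpenAI-compatible client can query it. A minimal sketch with curl, assuming the server is listening on its default port 8080; the prompt is illustrative:

# Send a chat-completions request to the local llama-server endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "LGxNDs/Geeked-Out-Quantization-Software:IQ2_M",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'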
- LM Studio
- Jan
- Ollama
How to use LGxNDs/Geeked-Out-Quantization-Software with Ollama:
ollama run hf.co/LGxNDs/Geeked-Out-Quantization-Software:IQ2_M
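Ollama also exposes a local REST API (default port 11434), so the pulled model can be queried programmatically. A minimal sketch of a non-streaming chat request, assuming the default port; the prompt is illustrative:

# Query the model through Ollama's local chat API
curl http://localhost:11434/api/chat -d '{
  "model": "hf.co/LGxNDs/Geeked-Out-Quantization-Software:IQ2_M",
  "messages": [{"role": "user", "content": "Hello!"}],
  "stream": false
}'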
- Unsloth Studio
How to use LGxNDs/Geeked-Out-Quantization-Software with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for LGxNDs/Geeked-Out-Quantization-Software to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for LGxNDs/Geeked-Out-Quantization-Software to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for LGxNDs/Geeked-Out-Quantization-Software to start chatting
- Pi
How to use LGxNDs/Geeked-Out-Quantization-Software with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf LGxNDs/Geeked-Out-Quantization-Software:IQ2_M
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "LGxNDs/Geeked-Out-Quantization-Software:IQ2_M" }
      ]
    }
  }
}

Run Pi
# Start Pi in your project directory:
pi
- Hermes Agent
How to use LGxNDs/Geeked-Out-Quantization-Software with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf LGxNDs/Geeked-Out-Quantization-Software:IQ2_M
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default LGxNDs/Geeked-Out-Quantization-Software:IQ2_M
Run Hermes
hermes
- Docker Model Runner
How to use LGxNDs/Geeked-Out-Quantization-Software with Docker Model Runner:
docker model run hf.co/LGxNDs/Geeked-Out-Quantization-Software:IQ2_M
- Lemonade
How to use LGxNDs/Geeked-Out-Quantization-Software with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull LGxNDs/Geeked-Out-Quantization-Software:IQ2_M
Run and chat with the model
lemonade run user.Geeked-Out-Quantization-Software-IQ2_M
List all available models
lemonade list
CALIBRATION DATA INFORMATION
============================

This model was quantized using importance matrix (imatrix) generation.
The imatrix captures which weights in the model are most important for
maintaining output quality during extreme compression (2-bit quantization).

WHAT IS CALIBRATION?
--------------------
Calibration is the process of running sample inputs through the model to
measure which tensors (weight matrices) contribute most to the output.
These measurements create an "importance matrix" that guides the quantizer
to preserve precision where it matters most.

CALIBRATION DATA CHARACTERISTICS
--------------------------------
Good calibration data should be:

1. REPRESENTATIVE
   - Matches the domain the model will operate in
   - Similar vocabulary and complexity to expected inputs
   - Reflects actual use case scenarios

2. DIVERSE
   - Multiple topics, subjects, and writing styles
   - Mix of common and rare tokens
   - Varied sentence structures and lengths

3. SUFFICIENT
   - 100-500 text chunks of typical document length
   - More chunks = better quality (diminishing returns beyond ~500)
   - Each chunk processed independently

4. NATURAL
   - Real-world text (not synthetic or random)
   - Domain-appropriate (code for code models, medical for medical models)
   - Representative token distribution

CALIBRATION PROCESS PARAMETERS
------------------------------
Typical settings for this quantization:

Chunks Processed:  200-500 (production quality)
Chunk Size:        Typical document/paragraph length
GPU Acceleration:  Enabled (99 layers offloaded)
Thread Count:      Auto-detected based on CPU

QUALITY IMPACT
--------------
The importance matrix generated from quality calibration data enables:
- A modest 3-8% perplexity increase (vs 10-20% without an imatrix)
- Preservation of critical weights
- Intelligent bit allocation per tensor
- 16x compression with minimal quality loss

CALIBRATION DATA SOURCES
------------------------
Common sources for high-quality calibration data:
- Wikitext-2-raw (general language models)
- Domain-specific corpora (medical, legal, code)
- The Pile subset (diverse web text)
- Custom curated datasets matching expected use

VERIFICATION
------------
Quantized models are tested for:
✓ Perplexity measurement vs baseline
✓ Sample inference quality
✓ Token prediction accuracy
✓ Model file integrity

NOTES
-----
- Calibration is performed once per source model
- Same imatrix can be reused for different target formats
- Domain-specific calibration yields better results
- GPU acceleration significantly speeds up generation

For questions about the calibration methodology used for this model,
please open a discussion on the model's Hugging Face page.
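For reference, the workflow described above maps onto llama.cpp's llama-imatrix and llama-quantize tools. A minimal sketch, assuming hypothetical file names for the source GGUF, calibration text, and imatrix output; -ngl 99 mirrors the 99-layer GPU offload noted under the process parameters:

# 1. Measure tensor importance by running the calibration text through the model
#    (file names here are placeholders, not the actual files used for this repo)
llama-imatrix -m model-bf16.gguf -f calibration.txt -o imatrix.dat -ngl 99

# 2. Quantize to IQ2_M, letting the imatrix guide per-tensor bit allocation
llama-quantize --imatrix imatrix.dat model-bf16.gguf model-IQ2_M.gguf IQ2_M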