Instructions to use david-ar/20q-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use david-ar/20q-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="david-ar/20q-GGUF",
    filename="twentyq-f16.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True,
)
print(output)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use david-ar/20q-GGUF with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf david-ar/20q-GGUF:F16

# Run inference directly in the terminal:
llama-cli -hf david-ar/20q-GGUF:F16
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf david-ar/20q-GGUF:F16

# Run inference directly in the terminal:
llama-cli -hf david-ar/20q-GGUF:F16
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf david-ar/20q-GGUF:F16

# Run inference directly in the terminal:
./llama-cli -hf david-ar/20q-GGUF:F16
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf david-ar/20q-GGUF:F16

# Run inference directly in the terminal:
./build/bin/llama-cli -hf david-ar/20q-GGUF:F16
```
Use Docker
```sh
docker model run hf.co/david-ar/20q-GGUF:F16
```
- LM Studio
- Jan
- Ollama
How to use david-ar/20q-GGUF with Ollama:
```sh
ollama run hf.co/david-ar/20q-GGUF:F16
```
- Unsloth Studio
How to use david-ar/20q-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for david-ar/20q-GGUF to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for david-ar/20q-GGUF to start chatting
```
Using HuggingFace Spaces for Unsloth
No setup is required. Open https://huggingface.co/spaces/unsloth/studio in your browser and search for david-ar/20q-GGUF to start chatting.
- Docker Model Runner
How to use david-ar/20q-GGUF with Docker Model Runner:
```sh
docker model run hf.co/david-ar/20q-GGUF:F16
```
- Lemonade
How to use david-ar/20q-GGUF with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull david-ar/20q-GGUF:F16
```
Run and chat with the model
```sh
lemonade run user.20q-GGUF-F16
```
List all available models
```sh
lemonade list
```
TwentyQ – GGUF
GGUF quantized versions of david-ar/20q, the world's smallest chat model.
This model was natively trained at 2-bit precision. All quantization levels above Q2_K are technically upscaled. Q2_K is the model's native precision.
Available Quantizations
| File | Quant | Size | Quality Loss |
|---|---|---|---|
| twentyq-f32.gguf | F32 | 762 KB | 0% |
| twentyq-f16.gguf | F16 | 397 KB | 0% |
| twentyq-q8_0.gguf | Q8_0 | 228 KB | 0% |
| twentyq-q4_0.gguf | Q4_0 | 135 KB | 0% |
| twentyq-q2_k.gguf | Q2_K | 95 KB | 0% |
All quantizations are lossless because the original weights are 2-bit integers (values 0-3). Q2_K is the only quantization level that doesn't waste bits.
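The losslessness claim can be checked with a toy round-trip. This is a minimal sketch, not the real GGUF Q2_K codec (which adds per-block scales); it assumes only what the card states, that every weight is an integer in {0, 1, 2, 3}:

```python
# Toy 2-bit round-trip: a weight that is already an integer in
# {0, 1, 2, 3} fits in 2 bits exactly, so quantizing it loses nothing.
weights = [0.0, 1.0, 2.0, 3.0] * 300   # native 2-bit values, stored as floats

packed = [int(w) & 0b11 for w in weights]   # "quantize" to 2 bits
restored = [float(q) for q in packed]       # "dequantize" back to float

print(restored == weights)  # → True: 0% quality loss
```

The same logic explains the table above: F32, F16, Q8_0, and Q4_0 all have enough precision to represent the integers 0–3 exactly, so every level reports 0% loss, and only Q2_K does it without wasted bits.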
Architecture
general.architecture: twentyq
twentyq.block_count: 0
twentyq.embedding_length: 156
twentyq.attention.head_count: 156
twentyq.context_length: 20
twentyq.vocab_size: 1200
Zero transformer blocks. 156 attention heads. 20-token context window. The output projection layer (output.weight) contains the entire model.
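As a back-of-envelope sanity check (my arithmetic, not from the card, and assuming the only large tensor is an output.weight of shape [vocab_size × embedding_length]), the metadata is consistent with the listed F32 file size:

```python
vocab_size = 1200        # twentyq.vocab_size
embedding_length = 156   # twentyq.embedding_length

params = vocab_size * embedding_length
f32_kib = params * 4 / 1024   # 4 bytes per float32 weight

print(params)           # → 187200 parameters
print(round(f32_kib))   # → 731, close to the 762 KB listed for twentyq-f32.gguf
```

The small remainder is presumably the GGUF header, tokenizer, and other metadata.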
Compatibility
These files require a runtime with twentyq architecture support, which does not currently exist in llama.cpp, Ollama, or any other GGUF runtime; the generic usage commands listed above will therefore not work. For inference, use the original model via the transformers library, or the live demo.
Downloads last month: 9
Model tree for david-ar/20q-GGUF
- Base model: david-ar/20q