Instructions to use oxyapi/oxy-1-small-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use oxyapi/oxy-1-small-GGUF with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="oxyapi/oxy-1-small-GGUF")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("oxyapi/oxy-1-small-GGUF", dtype="auto")
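Note that this repository hosts GGUF quantizations rather than standard safetensors weights, so the generic snippet above may not load them directly. Recent Transformers releases can read a GGUF file via the gguf_file argument; the following is a minimal, untested sketch assuming the F16 file listed in this repo:

# pip install transformers gguf
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "oxyapi/oxy-1-small-GGUF"
gguf_file = "oxy-1-small.F16.gguf"  # file name taken from this repo's file list

# Transformers dequantizes the GGUF weights into a regular PyTorch model
tokenizer = AutoTokenizer.from_pretrained(repo_id, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(repo_id, gguf_file=gguf_file)

- llama-cpp-python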
How to use oxyapi/oxy-1-small-GGUF with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="oxyapi/oxy-1-small-GGUF",
    filename="oxy-1-small.F16.gguf",
)
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
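create_chat_completion also accepts the usual sampling parameters and can stream tokens as they are generated; the following is a small sketch with illustrative (not tuned) values:

# Stream the reply token by token with explicit sampling settings
stream = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a haiku about the sea."}],
    max_tokens=256,
    temperature=0.7,
    top_p=0.95,
    stream=True,
)
for chunk in stream:
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:
        print(delta["content"], end="", flush=True)

- Notebooks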
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use oxyapi/oxy-1-small-GGUF with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf oxyapi/oxy-1-small-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf oxyapi/oxy-1-small-GGUF:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf oxyapi/oxy-1-small-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf oxyapi/oxy-1-small-GGUF:Q4_K_M
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf oxyapi/oxy-1-small-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf oxyapi/oxy-1-small-GGUF:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf oxyapi/oxy-1-small-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf oxyapi/oxy-1-small-GGUF:Q4_K_M
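However it was installed, llama-server exposes an OpenAI-compatible HTTP API (port 8080 by default), so any OpenAI-style client can query it. A minimal sketch in Python, assuming the default host and port:

# pip install requests
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",  # llama-server listens on 8080 by default
    json={
        "model": "oxyapi/oxy-1-small-GGUF:Q4_K_M",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])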
Use Docker
docker model run hf.co/oxyapi/oxy-1-small-GGUF:Q4_K_M
- LM Studio
- Jan
- vLLM
How to use oxyapi/oxy-1-small-GGUF with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "oxyapi/oxy-1-small-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "oxyapi/oxy-1-small-GGUF",
        "messages": [
            { "role": "user", "content": "What is the capital of France?" }
        ]
    }'

Use Docker
docker model run hf.co/oxyapi/oxy-1-small-GGUF:Q4_K_M
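Because vLLM's server speaks the OpenAI API, the official openai Python client works as well as curl. A short sketch assuming the server from the snippet above is running on localhost:8000:

# pip install openai
from openai import OpenAI

# vLLM does not require a real API key by default
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="oxyapi/oxy-1-small-GGUF",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(completion.choices[0].message.content)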
- SGLang
How to use oxyapi/oxy-1-small-GGUF with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "oxyapi/oxy-1-small-GGUF" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "oxyapi/oxy-1-small-GGUF",
        "messages": [
            { "role": "user", "content": "What is the capital of France?" }
        ]
    }'

Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "oxyapi/oxy-1-small-GGUF" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "oxyapi/oxy-1-small-GGUF",
        "messages": [
            { "role": "user", "content": "What is the capital of France?" }
        ]
    }'

- Ollama
How to use oxyapi/oxy-1-small-GGUF with Ollama:
ollama run hf.co/oxyapi/oxy-1-small-GGUF:Q4_K_M
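Besides the interactive CLI, Ollama runs a local HTTP API (port 11434 by default) that you can call once the model has been pulled. A minimal sketch in Python, assuming the default port:

# pip install requests
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",  # Ollama's default API endpoint
    json={
        "model": "hf.co/oxyapi/oxy-1-small-GGUF:Q4_K_M",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
        "stream": False,
    },
)
print(resp.json()["message"]["content"])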
- Unsloth Studio
How to use oxyapi/oxy-1-small-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for oxyapi/oxy-1-small-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for oxyapi/oxy-1-small-GGUF to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for oxyapi/oxy-1-small-GGUF to start chatting
- Docker Model Runner
How to use oxyapi/oxy-1-small-GGUF with Docker Model Runner:
docker model run hf.co/oxyapi/oxy-1-small-GGUF:Q4_K_M
- Lemonade
How to use oxyapi/oxy-1-small-GGUF with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull oxyapi/oxy-1-small-GGUF:Q4_K_M
Run and chat with the model
lemonade run user.oxy-1-small-GGUF-Q4_K_M
List all available models
lemonade list
Upload folder using huggingface_hub (#2)
opened by TornadoAI
- .gitattributes +12 -0
- oxy-1-small.IQ4_XS.gguf +3 -0
- oxy-1-small.Q2_K.gguf +3 -0
- oxy-1-small.Q3_K_L.gguf +3 -0
- oxy-1-small.Q3_K_M.gguf +3 -0
- oxy-1-small.Q3_K_S.gguf +3 -0
- oxy-1-small.Q4_0.gguf +3 -0
- oxy-1-small.Q4_K_M.gguf +3 -0
- oxy-1-small.Q4_K_S.gguf +3 -0
- oxy-1-small.Q5_K_M.gguf +3 -0
- oxy-1-small.Q5_K_S.gguf +3 -0
- oxy-1-small.Q6_K.gguf +3 -0
- oxy-1-small.Q8_0.gguf +3 -0
.gitattributes
CHANGED
@@ -34,3 +34,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
 oxy-1-small.F16.gguf filter=lfs diff=lfs merge=lfs -text
+oxy-1-small.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+oxy-1-small.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+oxy-1-small.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+oxy-1-small.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+oxy-1-small.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+oxy-1-small.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+oxy-1-small.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+oxy-1-small.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+oxy-1-small.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+oxy-1-small.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+oxy-1-small.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+oxy-1-small.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
oxy-1-small.IQ4_XS.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b2fa1ce9d4d8d0be54068c482f05b6487868f1142199e91807beea6c6bbcc57b
+size 8186194016

oxy-1-small.Q2_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:54da2bd570f451b1e527a404e8d185ae6707f88a67fde9fbac41b4d6706e3db9
+size 5770496096

oxy-1-small.Q3_K_L.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:42fe1c1776db9382e0873894eddffa511f0f2f9e92d93c8253ecbf4a0159e376
+size 7924766816

oxy-1-small.Q3_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7c2800acbee6d128b3bc8591c2359f271877b7ef5fd97d7dd1c6ebc413513870
+size 7339202656

oxy-1-small.Q3_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:86128def72d9709fcee9b2cefc06637f3db96d057ff76b917d4568cb21206a52
+size 6659594336

oxy-1-small.Q4_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:be2095e3c071d3229b872c727922448848421852291df73bdc52f74770670f27
+size 8517724256

oxy-1-small.Q4_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b99932b76184902142fc05b61dc24167ef1749fd81997da361f7272c5345c000
+size 8988108896

oxy-1-small.Q4_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fdde7a27322b652cc700e58bb903291dd752794c504a469c89ee6797daae4e67
+size 8573429856

oxy-1-small.Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:16f692931c37e4e16cfa79df157ea893ed62cfec0e0104e0b12c01dfd85c786b
+size 10508871776

oxy-1-small.Q5_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dd76872c912669262ac8d54e1d7437b4f55ef896dda47b3a02fd232db035c9fe
+size 10266552416

oxy-1-small.Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e7f0ac1ac600826cba60183d5128c9ba38d9d62719052674866aba4ae6406705
+size 12124682336

oxy-1-small.Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:afa6d632d760ce240703c49d24de82a0a0f6df99d970e0da18ee12b32c46243d
+size 15701596256