Tags: Text Generation · Transformers · Safetensors · MLX · llama · mlx-my-repo · text-generation-inference · 4-bit precision
How to use from SGLang

Use Docker images
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "minpeter/tiny-ko-random-mlx-4Bit" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "minpeter/tiny-ko-random-mlx-4Bit",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
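Because the endpoint is OpenAI-compatible, you can also call it from Python with the official openai client instead of curl. A minimal sketch, assuming the server above is running on localhost:30000 and was started without --api-key (so any api_key value is accepted):

# pip install openai
from openai import OpenAI

# Point the client at the local SGLang server; the key is a placeholder
# (assumption: no auth was configured on the server)
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="minpeter/tiny-ko-random-mlx-4Bit",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)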
minpeter/tiny-ko-random-mlx-4Bit
The model minpeter/tiny-ko-random-mlx-4Bit was converted to MLX format from minpeter/tiny-ko-random using mlx-lm version 0.22.3.
Use with mlx
pip install mlx-lm
from mlx_lm import load, generate

# Load the 4-bit model and its tokenizer from the Hub
model, tokenizer = load("minpeter/tiny-ko-random-mlx-4Bit")

prompt = "hello"

# Apply the chat template when the tokenizer defines one
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
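For a quick sanity check without writing Python, mlx-lm also ships a command-line generator; a minimal sketch using its module entry point:

python -m mlx_lm.generate --model minpeter/tiny-ko-random-mlx-4Bit --prompt "hello"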
Downloads last month: 33
Model size: 31.1M params
Tensor types: F16 · U32 (the 4-bit weights are stored packed in U32 tensors, with the quantization scales kept in F16)
Model tree for minpeter/tiny-ko-random-mlx-4Bit
Base model: minpeter/tiny-ko-random
Install from pip and serve the model

# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "minpeter/tiny-ko-random-mlx-4Bit" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "minpeter/tiny-ko-random-mlx-4Bit",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
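SGLang's OpenAI-compatible layer also serves a chat endpoint; a sketch of the equivalent chat call, assuming the model's tokenizer defines a chat template:

curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "minpeter/tiny-ko-random-mlx-4Bit",
    "messages": [{"role": "user", "content": "Once upon a time,"}],
    "max_tokens": 512,
    "temperature": 0.5
  }'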