Part of the Nocturne collection (balanced-size models with good quality).
How to use DoppelReflEx/MiniusLight-24B with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="DoppelReflEx/MiniusLight-24B")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("DoppelReflEx/MiniusLight-24B")
model = AutoModelForCausalLM.from_pretrained("DoppelReflEx/MiniusLight-24B")
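Once loaded, the pipeline can be called directly. A minimal generation sketch; the device_map, dtype, and sampling values here are illustrative assumptions, not requirements from the model card:

from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="DoppelReflEx/MiniusLight-24B",
    device_map="auto",        # spread the 24B weights across available devices
    torch_dtype="bfloat16",   # matches the dtype the merge was produced in
)
out = pipe("Once upon a time,", max_new_tokens=128, temperature=0.5, do_sample=True)
print(out[0]["generated_text"])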
How to use DoppelReflEx/MiniusLight-24B with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "DoppelReflEx/MiniusLight-24B"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "DoppelReflEx/MiniusLight-24B",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
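Because the endpoint is OpenAI-compatible, the server can also be called from Python with the official openai client. A sketch assuming pip install openai; the placeholder api_key is ignored by vLLM's default configuration:

from openai import OpenAI

# Point the client at the local vLLM server instead of api.openai.com
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.completions.create(
    model="DoppelReflEx/MiniusLight-24B",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(resp.choices[0].text)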
How to use DoppelReflEx/MiniusLight-24B with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "DoppelReflEx/MiniusLight-24B" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "DoppelReflEx/MiniusLight-24B",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
# Alternatively, run the SGLang server in Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "DoppelReflEx/MiniusLight-24B" \
--host 0.0.0.0 \
--port 30000
# Then call the server with the same curl command shown above.
How to use DoppelReflEx/MiniusLight-24B with Docker Model Runner:
docker model run hf.co/DoppelReflEx/MiniusLight-24B
A nice, simple SLERP merge of two Mistral Small models from well-known Hugging Face users: TheDrummer/Cydonia-24B-v2 and PocketDoc/Dans-PersonalityEngine-V1.2.0-24b.
This is the best merge and recipe I have tried so far, with good eval scores. Strong in ERP, RP, story writing, and other purposes.
Overall, a nice model to try, if you want to. :)
models:
  - model: TheDrummer/Cydonia-24B-v2
  - model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
merge_method: slerp
base_model: TheDrummer/Cydonia-24B-v2
parameters:
  t: [0.1, 0.3, 0.6, 0.3, 0.1]
dtype: bfloat16
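For intuition about what the recipe does, here is a minimal NumPy sketch of spherical linear interpolation over a pair of flattened weight tensors. This is illustrative only: mergekit's actual implementation handles real tensor shapes and dtypes, and it interpolates the t schedule across layer depth, so early and late layers stay close to the base model (t near 0.1) while middle layers take more from the secondary model (t near 0.6).

import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    a_unit = a / np.linalg.norm(a)
    b_unit = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1.0 - t) * a + t * b  # nearly parallel vectors: plain lerp is stable
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

# Toy usage with random stand-in "weights"; t=0.6 leans toward the second model.
rng = np.random.default_rng(0)
w_base, w_other = rng.normal(size=1024), rng.normal(size=1024)
merged = slerp(w_base, w_other, t=0.6)

The config itself can be run with mergekit's mergekit-yaml entry point to reproduce the merge.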