FoxSpirit Collection: the next series, with good results and nearly 'human' responses. (~ ̄▽ ̄)~ It's a pity that tokenization bugs sometimes occur. (o′┏▽┓`o)
How to use DoppelReflEx/MN-12B-FoxFrame-Shinori with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="DoppelReflEx/MN-12B-FoxFrame-Shinori")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("DoppelReflEx/MN-12B-FoxFrame-Shinori")
model = AutoModelForCausalLM.from_pretrained("DoppelReflEx/MN-12B-FoxFrame-Shinori")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
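For a 12B-parameter model, full-precision weights may not fit comfortably on a single consumer GPU. A minimal sketch of loading in bfloat16 with automatic device placement (the dtype and device map here are assumptions, not part of the original card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: half-precision weights + Accelerate's automatic device placement
tokenizer = AutoTokenizer.from_pretrained("DoppelReflEx/MN-12B-FoxFrame-Shinori")
model = AutoModelForCausalLM.from_pretrained(
    "DoppelReflEx/MN-12B-FoxFrame-Shinori",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires the accelerate package
)
```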
How to use DoppelReflEx/MN-12B-FoxFrame-Shinori with vLLM:

```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "DoppelReflEx/MN-12B-FoxFrame-Shinori"
```

```bash
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DoppelReflEx/MN-12B-FoxFrame-Shinori",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
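Because vLLM serves an OpenAI-compatible API, the official openai Python client can be pointed at the local server instead of curl. A sketch, assuming the server above is running on port 8000 (the api_key value is only a placeholder):

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="DoppelReflEx/MN-12B-FoxFrame-Shinori",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```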
How to use DoppelReflEx/MN-12B-FoxFrame-Shinori with SGLang:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "DoppelReflEx/MN-12B-FoxFrame-Shinori" \
  --host 0.0.0.0 \
  --port 30000
```

```bash
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DoppelReflEx/MN-12B-FoxFrame-Shinori",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Alternatively, start the SGLang server in Docker:

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "DoppelReflEx/MN-12B-FoxFrame-Shinori" \
    --host 0.0.0.0 \
    --port 30000
```

The container exposes the same OpenAI-compatible API on port 30000, so it is called with the same curl command shown above.
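The SGLang server is also OpenAI-compatible, so the same openai client works against port 30000. A sketch of a streaming request, assuming the server was launched locally as shown above:

```python
from openai import OpenAI

# Assumption: SGLang server running locally on port 30000 as launched above
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

stream = client.chat.completions.create(
    model="DoppelReflEx/MN-12B-FoxFrame-Shinori",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```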
How to use DoppelReflEx/MN-12B-FoxFrame-Shinori with Docker Model Runner:

```bash
docker model run hf.co/DoppelReflEx/MN-12B-FoxFrame-Shinori
```
Version: Miyuri - Yukina - Shinori

A very nice merge series, to be honest. I have tested this one and gotten good results so far. With my test character card, it gives me a warm, gentle, soft, tender, caring girl. You should try it too, or try whichever version you like most.

Good for RP and ERP.
### Models Merged

The following models were included in the merge:

- cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
- DoppelReflEx/MN-12B-Mimicore-GreenSnake
- crestf411/MN-Slush

### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
    parameters:
      density: 0.9
      weight: 1
  - model: DoppelReflEx/MN-12B-Mimicore-GreenSnake
    parameters:
      density: 0.5
      weight: 0.7
  - model: crestf411/MN-Slush
    parameters:
      density: 0.7
      weight: 0.5
merge_method: dare_ties
base_model: IntervitensInc/Mistral-Nemo-Base-2407-chatml
tokenizer_source: base
```
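The merge can be reproduced with mergekit by saving the YAML above to a file and running its CLI. A sketch, assuming mergekit is installed; the config filename and output path are illustrative:

```bash
pip install mergekit

# Run the DARE-TIES merge described by the config above
mergekit-yaml foxframe-shinori.yaml ./MN-12B-FoxFrame-Shinori
```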