Instructions for using Vortex5/Dark-Quill-12B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Vortex5/Dark-Quill-12B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Vortex5/Dark-Quill-12B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Vortex5/Dark-Quill-12B")
model = AutoModelForCausalLM.from_pretrained("Vortex5/Dark-Quill-12B")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
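The pipeline forwards generation keyword arguments straight to `model.generate`, so you can tune decoding without dropping to the lower-level API. A minimal sketch; the sampling values below are illustrative placeholders, not settings tuned for this model:

```python
# Sketch: passing sampling parameters through the pipeline.
# The temperature/top_p values are illustrative, not tuned for Dark-Quill-12B.
from transformers import pipeline

pipe = pipeline("text-generation", model="Vortex5/Dark-Quill-12B")
messages = [{"role": "user", "content": "Who are you?"}]
out = pipe(messages, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.9)
# With chat input, generated_text holds the whole conversation; the last turn is the reply.
print(out[0]["generated_text"][-1]["content"])
```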
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Vortex5/Dark-Quill-12B with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Vortex5/Dark-Quill-12B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Vortex5/Dark-Quill-12B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
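Since the server exposes an OpenAI-compatible API, you can also call it from Python with the `openai` client instead of curl. A minimal sketch, assuming the server above is running on localhost:8000; vLLM ignores the API key unless you configure one, so any placeholder string works:

```python
# Sketch: query the vLLM server through the OpenAI-compatible API.
# Assumes `vllm serve "Vortex5/Dark-Quill-12B"` is running on localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # placeholder key

response = client.chat.completions.create(
    model="Vortex5/Dark-Quill-12B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```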
Use Docker
```shell
docker model run hf.co/Vortex5/Dark-Quill-12B
```
- SGLang
How to use Vortex5/Dark-Quill-12B with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Vortex5/Dark-Quill-12B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Vortex5/Dark-Quill-12B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
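SGLang speaks the same OpenAI-compatible protocol, just on port 30000, so the same Python client works here too. A minimal sketch, this time with streaming, assuming the launch_server command above is running:

```python
# Sketch: stream tokens from the SGLang server via the OpenAI-compatible API.
# Assumes the launch_server command above is running on localhost:30000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")  # placeholder key

stream = client.chat.completions.create(
    model="Vortex5/Dark-Quill-12B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```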
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Vortex5/Dark-Quill-12B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Vortex5/Dark-Quill-12B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
- Docker Model Runner
How to use Vortex5/Dark-Quill-12B with Docker Model Runner:
```shell
docker model run hf.co/Vortex5/Dark-Quill-12B
```
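Running the command without arguments opens an interactive chat session. As an assumption worth verifying against `docker model run --help`, the CLI should also accept a one-shot prompt as a trailing argument:

```shell
# Assumption: `docker model run` accepts an optional prompt for one-shot generation.
docker model run hf.co/Vortex5/Dark-Quill-12B "What is the capital of France?"
```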
Promising
I usually don't have the best results with sub-20B models, and the Quill ones had felt off before. But this one seems very promising.
Aside from getting a random block of text in Russian, the output feels more natural and flows closer to a 24B-32B model. Though this is very early testing; I'll get back after I've used it for a couple of days and pushed it.
So I played with it a bit more. It doesn't like to separate paragraphs as much, prefers to let you lead, and sometimes will just rewrite what you wrote instead of continuing. Characters often have little to no agency and act a bit like puppy dogs following you around.
In one-on-one RPs or the like it is sufficient (or something basic like freeuse), and I even had it work fine with two characters keeping track of details. But with zero agency you practically have to lead it by the nose.
It does really well with background descriptions, so it would probably be better for rewriting sections and adding filler details (say, background fluff to pad things out). I'd guess that's where this model shines, far more than in RPing.
I saw some duplicated output, though not as bad as with some models.
All in all, if you aren't doing heavy RPing with high expectations, it feels like a good model to play with.