Instructions to use Sorawiz/MistralCreative-24B-Instruct with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Sorawiz/MistralCreative-24B-Instruct with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Sorawiz/MistralCreative-24B-Instruct")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Sorawiz/MistralCreative-24B-Instruct")
model = AutoModelForCausalLM.from_pretrained("Sorawiz/MistralCreative-24B-Instruct")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
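For creative writing you will usually want sampled rather than near-greedy decoding. Continuing from the snippet above, here is a minimal sketch; the parameter values are illustrative assumptions, not tuned settings for this model:

```python
# Sampled generation for more varied creative output (reuses model/inputs/tokenizer
# from the previous snippet). Values below are assumed starting points, not tuned.
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,          # enable sampling instead of greedy decoding
    temperature=0.8,         # higher -> more variety, lower -> more deterministic
    top_p=0.95,              # nucleus sampling cutoff
    repetition_penalty=1.05, # mild penalty against repeated phrasing
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```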
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Sorawiz/MistralCreative-24B-Instruct with vLLM:
Install from pip and serve model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Sorawiz/MistralCreative-24B-Instruct"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Sorawiz/MistralCreative-24B-Instruct",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker
```sh
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HUGGING_FACE_HUB_TOKEN=<secret>" \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model Sorawiz/MistralCreative-24B-Instruct
```
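Either way, the server speaks the OpenAI-compatible API shown in the curl call above, so you can also query it from Python. A minimal sketch using the official openai client (assumes `pip install openai`; the api_key is a placeholder, since a local vLLM server does not check it by default):

```python
# Query the local vLLM server through its OpenAI-compatible API.
# Assumes the server is running on localhost:8000 as started above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Sorawiz/MistralCreative-24B-Instruct",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```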
- SGLang
How to use Sorawiz/MistralCreative-24B-Instruct with SGLang:
Install from pip and serve model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Sorawiz/MistralCreative-24B-Instruct" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Sorawiz/MistralCreative-24B-Instruct",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images
```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Sorawiz/MistralCreative-24B-Instruct" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Sorawiz/MistralCreative-24B-Instruct",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
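The SGLang server exposes the same OpenAI-compatible protocol, so any plain HTTP client works too. A minimal sketch with requests (assumes `pip install requests` and the server running on localhost:30000 as above):

```python
# POST a chat request to the SGLang server's OpenAI-compatible endpoint.
import requests

payload = {
    "model": "Sorawiz/MistralCreative-24B-Instruct",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}
resp = requests.post("http://localhost:30000/v1/chat/completions", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```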
- Docker Model Runner
How to use Sorawiz/MistralCreative-24B-Instruct with Docker Model Runner:
docker model run hf.co/Sorawiz/MistralCreative-24B-Instruct
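Docker Model Runner can also serve the model over an OpenAI-compatible API. A minimal sketch; both the port and the endpoint path here are assumptions based on Docker's documented defaults (TCP host access enabled on port 12434), so check your own setup:

```python
# Call Docker Model Runner's OpenAI-compatible API from the host.
# ASSUMPTION: TCP host access is enabled and listening on the default port 12434;
# the /engines/v1 path follows Docker's docs and may differ in your configuration.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="unused")

response = client.chat.completions.create(
    model="hf.co/Sorawiz/MistralCreative-24B-Instruct",
    messages=[{"role": "user", "content": "Who are you?"}],
)
print(response.choices[0].message.content)
```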
In the right direction
Hi! Thank you for this model, it's definitely better than the base Mistral 3.1 in creative writing (tested with Mistral's template). Alas, it still generates quite a lot of "slop", but it's a move in the right direction!
Thanks a lot for the feedback! I'm still learning and experimenting, so it's great to hear it's a step up from the base. If you have any suggestions, I'd love to hear them.
I'll be as unoriginal as can be: the slop ("dimly lit" and other similar phrases/words) is quite annoying. Mistral 3/3.1 are good at writing instructions for themselves, but even those instructions only help so much.
You can test the slop using EQ's Creative Benchmark samples; they are more or less universal, and it's easy to expand them.
Tested a bit more and can say that it's a very good model:
- Prompt adherence seems to be better than the original model
- Retains quality of writing and understanding the context
- Less censored than the original model and Omega finetunes (tested against The-Omega-Concession-M-24B-v1.0).
IMO this is how Mistral 3 should have been from the start: it's still a bit strict in writing, but feels much better - like an actual step up from Mistral 2.
Note: this is still with the Mistral Tekken prompt format; I haven't tested the ChatML format yet.