Qwisine 14B
Model details
| Field | Description |
|---|---|
| Base model | Qwen-3-14B (pre-trained) |
| Fine-tuned by | Mugi |
| Task | Question answering & code generation for the Convex TypeScript backend/database framework |
| Language(s) | English (developer-oriented) |
| License | Unrestricted ("NAH just use it"). |
| Model name | Qwisine |
Qwisine is a specialised version of Qwen-3 fine-tuned on curated Convex documentation, synthetic code, and community Q&A. The model understands Convex-specific concepts (data modelling, mutations, actions, idioms, client usage, etc.) and can generate code snippets or explain behaviour in plain English.
Intended use & limitations
Primary use-case
- Conversational assistant for developers building on Convex.
- Drafting and assisting with Convex-oriented questions and tasks.
- Documentation chatbots or support assistants.
Out-of-scope
- Production-critical decision making without human review.
Dataset
Size: 938 Q&A pairs
Source: Convex official docs, example apps, public issues, community Discord, and synthetic edge cases.
Question types (distilled)
- what_is: factual look-ups (no reasoning)
- why: causal explanations (no reasoning)
- task: recipe-style how-tos (with reasoning)
- edge_case: tricky or undocumented scenarios (with reasoning)
- v-task: verbose multi-step tasks (with reasoning)

Reasoning-bearing examples represent ~85% of the dataset.
Training procedure (details will be added later; many runs were experimented with 😭)
- Epochs : **
- Batch : **
- LR / schedule : **
- Optimizer : **
Fine-tuning followed the standard QLoRA recipe with Unsloth. No additional RLHF was applied.
Evaluation results
| Category | Think mode | Fully Non-Think mode |
|---|---|---|
| Fundamentals | 75.05% | 73.44% |
| Data modelling | 82.82% | 87.36% |
| Queries | 74.38% | 74.19% |
| Mutations | 71.04% | 73.59% |
| Actions | 63.05% | 49.27% |
| Idioms | 75.06% | 75.06% |
| Clients | 69.84% | 69.84% |
| Average | 73.03% | 71.82% |
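The reported averages can be sanity-checked by recomputing them from the per-category scores (a quick verification sketch, not part of the evaluation harness):

```python
# Recompute the table's averages from the per-category scores.
think = [75.05, 82.82, 74.38, 71.04, 63.05, 75.06, 69.84]
non_think = [73.44, 87.36, 74.19, 73.59, 49.27, 75.06, 69.84]

avg_think = round(sum(think) / len(think), 2)
avg_non_think = round(sum(non_think) / len(non_think), 2)
print(avg_think, avg_non_think)  # matches the reported 73.03 and 71.82
```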
Think Mode
| Parameter | Value | Notes |
|---|---|---|
| temperature | 0.6 | Reasoned answers with structure |
| top_p | 0.95 | Wider beam of sampling |
| top_k | 20 | |
| min_p | 0 | |
Non-Think Mode
| Parameter | Value | Notes |
|---|---|---|
| temperature | 0.7 | More diversity for simple prompts |
| top_p | 0.8 | Slightly tighter sampling |
| top_k | 20 | |
| min_p | 0 | |
Adjust as needed for your deployment; these were used in LM Studio during evaluation.
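For llama-cpp-python deployments, the two presets above can be expressed as keyword-argument dictionaries. This is a convenience sketch; `temperature`, `top_p`, `top_k`, and `min_p` are standard llama-cpp-python sampling parameters, but check the API of whichever runtime you use:

```python
# Sampling presets from the tables above, as kwargs dicts for
# llama-cpp-python's create_chat_completion. Adjust names for
# other runtimes (LM Studio, Ollama) as needed.
THINK_MODE = {"temperature": 0.6, "top_p": 0.95, "top_k": 20, "min_p": 0.0}
NON_THINK_MODE = {"temperature": 0.7, "top_p": 0.8, "top_k": 20, "min_p": 0.0}

# Usage (assuming `llm` is a loaded Llama instance):
# llm.create_chat_completion(messages=messages, **THINK_MODE)
```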
How to run locally
# LM Studio
Search for "Qwisine" in the models menu.
# Ollama
Instructions will be added soon.
# llama.cpp
Instructions will be added soon.
Limitations & biases
- Training data is entirely Convex-centred; the model may hallucinate on topics outside that domain.
- The dataset is modest (938 samples), so edge-case coverage is still incomplete, as is coverage of more complex prompts such as creating a project from scratch with multiple steps and instructions.
Future work
Not decided yet.
Citation
```bibtex
@misc{qwisine2025,
  title  = {Qwisine: A Qwen-3 model fine-tuned for Convex},
  author = {mugi},
  year   = {2025},
  url    = {https://huggingface.co/mugivara1/Qwisine},
}
```
Acknowledgements
(To be completed)
Convex • Qwen-3 • ...
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="moogin/Qwisine",
    filename="unsloth.Q5_K_M.gguf",
)
```
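Once the model is loaded, a chat request can be sketched as follows. `build_messages` is a hypothetical helper (not part of the model's API); the `/no_think` suffix is Qwen-3's documented soft switch for disabling thinking mode, though whether this fine-tune honours it has not been verified here:

```python
# Hypothetical helper to build a chat request. Qwen-3 models accept a
# "/no_think" soft switch appended to the user turn to disable thinking
# mode; confirm behaviour with this fine-tune before relying on it.
def build_messages(question: str, think: bool = True) -> list:
    suffix = "" if think else " /no_think"
    return [
        {"role": "system", "content": "You are Qwisine, an assistant for the Convex framework."},
        {"role": "user", "content": question + suffix},
    ]

# Usage with the `llm` instance loaded above (uncomment to run):
# response = llm.create_chat_completion(
#     messages=build_messages("How do I define a Convex mutation?"),
#     temperature=0.6, top_p=0.95, top_k=20, min_p=0.0,
# )
# print(response["choices"][0]["message"]["content"])
```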