marcelone/Jinx-Qwen3-14B

Tags: GGUF, conversational
License: apache-2.0
README.md exists but content is empty.
Downloads last month: 79

Format: GGUF
Model size: 15B params
Architecture: qwen3
Quantized variants (+1 more variant not listed here):

4-bit:
  Q4_K_M       9 GB
  Q4_K_M_L     9 GB
  Q4_K_M_XL    9.2 GB
  Q4_K_M_XXL   9.58 GB
  Q4_K_M_XXXL  11 GB
5-bit:
  Q5_K_M_L     10.6 GB
16-bit:
  F16          29.5 GB
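As a rough sanity check, the quantization labels above can be related to the listed file sizes: effective bits per weight is the file size in bits divided by the parameter count. A minimal sketch using figures from this page (9 GB for Q4_K_M, 29.5 GB for F16, 15B params; GB is taken here as 10^9 bytes, which is an assumption about how the page rounds sizes):

```python
def bits_per_weight(file_size_gb: float, n_params_billion: float) -> float:
    """Effective bits per weight: total file size in bits / parameter count."""
    return (file_size_gb * 1e9 * 8) / (n_params_billion * 1e9)

# Q4_K_M: 9 GB over 15B params -> about 4.8 bits/weight,
# consistent with the "4-bit" grouping (K-quants carry some overhead above 4 bits).
print(round(bits_per_weight(9.0, 15), 2))   # → 4.8

# F16: 29.5 GB over 15B params -> close to 16 bits/weight, as expected.
print(round(bits_per_weight(29.5, 15), 2))
```

The same arithmetic explains why the `_L`/`_XL`/`_XXL` variants grow in size: they keep more tensors at higher precision, raising the average bits per weight.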
Inference Providers: this model isn't deployed by any Inference Provider.
Model tree for marcelone/Jinx-Qwen3-14B:
  Base model: Qwen/Qwen3-14B-Base
  Finetuned: Qwen/Qwen3-14B
  Finetuned: Jinx-org/Jinx-Qwen3-14B
  Quantized (5): this model