Got a crappy PC, an old laptop, or even an old mobile phone? It no longer matters: you can run a full model trained on 36 trillion tokens. Whether it's writing, coding, health care, or general questions, this model has you covered. It is really tiny, hybrid, and fully functional, tested on different tasks with a context window of 130,000 tokens. Great job to the Aquif team; this model is a small badass. Quants available at: https://huggingface.co/Trilogix1/Hugston-aquif-3.6-1B_F32-Hybrid


pipeline_tag: text-generation
tags: Qwen3, Small, Hybrid, Hugston


Original weights at: https://huggingface.co/aquif-ai/aquif-3.6-1B

This model is a converted and quantized version by the Hugston Team, created with Quanta (see GitHub to get it for free). It is a real proof of concept and implementation of how to convert and quantize a .safetensors LLM model to GGUF.
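Quanta's exact commands are not shown on this card; as a rough sketch of the same .safetensors-to-GGUF pipeline, here is the equivalent manual workflow with the widely used llama.cpp toolchain (the model path, output filenames, and the Q8_0 target are illustrative assumptions, not the Hugston Team's actual settings):

```shell
# Illustrative sketch only: the equivalent manual pipeline with llama.cpp.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# 1) Convert the Hugging Face .safetensors checkpoint to a full-precision GGUF.
python convert_hf_to_gguf.py /path/to/aquif-3.6-1B \
    --outfile aquif-3.6-1B-f32.gguf --outtype f32

# 2) Build the tools, then quantize the GGUF (e.g. to 8-bit).
cmake -B build && cmake --build build --config Release
./build/bin/llama-quantize aquif-3.6-1B-f32.gguf aquif-3.6-1B-Q8_0.gguf Q8_0
```

The resulting .gguf file can then be loaded by any llama.cpp-based runtime.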


Quantization was performed using an automated method that significantly reduces conversion time.
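To make the quantization step concrete, here is a minimal NumPy sketch of symmetric 8-bit block quantization, similar in spirit to GGUF's Q8_0 layout (block size 32, one scale per block). This is an illustration of the general technique, not the code Quanta actually runs:

```python
import numpy as np

def quantize_q8_0(weights: np.ndarray, block_size: int = 32):
    """Symmetric 8-bit block quantization, similar in spirit to GGUF Q8_0.
    Each block of `block_size` values shares one float scale; values become int8."""
    w = weights.reshape(-1, block_size)
    scales = np.abs(w).max(axis=1, keepdims=True) / 127.0
    scales[scales == 0] = 1.0  # avoid division by zero for all-zero blocks
    q = np.clip(np.round(w / scales), -127, 127).astype(np.int8)
    return q, scales.astype(np.float32)

def dequantize_q8_0(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Reverse the mapping: int8 values times per-block scales."""
    return (q.astype(np.float32) * scales).reshape(-1)

# Round-trip a random weight tensor and measure the quantization error.
rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, s = quantize_q8_0(w)
w_hat = dequantize_q8_0(q, s)
err = np.abs(w - w_hat).max()
print(f"max abs error: {err:.5f}")  # small relative to the weight magnitudes
```

The per-block scale is what keeps 8-bit storage accurate: rounding error is bounded by half a quantization step of each block's own range.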

This model was made possible by: https://Hugston.com

You can use the model with HugstonOne Enterprise Edition.

Tested on general tasks.



Watch HugstonOne coding and preview in action:

https://vimeo.com/1121493834?share=copy&fl=sv&fe=ci

-Download the HugstonOne app at Hugston.com or at https://github.com/Mainframework

-Download the model from https://hugston.com/explore?folder=llm_models or Hugging Face

-If you already have the LLM model downloaded, choose it by clicking Pick Model in HugstonOne.

-Then click Load Model in CLI or Server mode.

-For multimodal use, you need a VL/multimodal LLM model with its mmproj file in the same folder.

-Select the model, then select the mmproj.

-Note: if an mmproj file sits in the same folder as non-multimodal models, those models will not load unless the mmproj is moved out of the folder.

Model size: 2B params
Architecture: qwen3
Format: GGUF
Quantizations available: 4-bit, 5-bit, 6-bit, 8-bit, 16-bit, 32-bit


Model tree for Trilogix1/Hugston-aquif-3.6-1B_F32-Hybrid:

-Finetuned from: Qwen/Qwen3-1.7B

-Quantized (131 versions): this model