QuantFactory/Fox-1-1.6B-GGUF

This is a quantized (GGUF) version of TensorOpera's Fox-1-1.6B, created using llama.cpp.

Model Card for Fox-1-1.6B

This is a base pretrained model that requires further fine-tuning for most use cases. We will release the instruction-tuned version soon.

Fox-1 is a decoder-only transformer-based small language model (SLM) with 1.6B total parameters, developed by TensorOpera AI. The model was trained with a 3-stage data curriculum on 3 trillion tokens of text and code data at an 8K sequence length. Fox-1 uses grouped query attention (GQA) with 4 KV heads and 16 attention heads, and has a deeper architecture than other SLMs.
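The GQA layout above (16 attention heads sharing 4 KV heads) can be sketched as follows. This is an illustrative snippet, not code from the model; the function name is our own, and it shows only the standard GQA grouping in which consecutive query heads reuse one KV head's key/value projections, shrinking the KV cache 4x relative to full multi-head attention.

```python
# Illustrative sketch of grouped-query attention (GQA) head sharing,
# using the head counts reported for Fox-1 (16 query heads, 4 KV heads).
# Function name is hypothetical, for illustration only.

def kv_head_for(query_head: int, n_heads: int = 16, n_kv_heads: int = 4) -> int:
    """Map a query-head index to the KV head whose K/V it reuses
    (standard GQA grouping of consecutive query heads)."""
    group_size = n_heads // n_kv_heads  # 4 query heads per KV head
    return query_head // group_size

# The KV cache scales with n_kv_heads rather than n_heads,
# so it is 16 / 4 = 4x smaller than in full multi-head attention.
kv_cache_reduction = 16 / 4
```

With this grouping, query heads 0-3 share KV head 0, heads 4-7 share KV head 1, and so on.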

For the full details of this model, please read our release blog post.

GGUF
Model size: 2B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
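As a rough guide to which quantization fits your hardware, weight storage for a 1.6B-parameter model scales linearly with bits per weight. The back-of-envelope estimate below is ours, not an official size table: real GGUF files differ somewhat because quantization schemes add per-block scales and keep some tensors at higher precision.

```python
# Ballpark GGUF file sizes for a 1.6B-parameter model at each listed
# bit-width. Actual files are somewhat larger (per-block scale factors,
# some tensors kept at higher precision); treat these as estimates only.

PARAMS = 1.6e9  # total parameter count from the model card

def approx_size_gb(bits_per_weight: float, params: float = PARAMS) -> float:
    """Weight bytes = params * bits / 8; returned in decimal gigabytes."""
    return params * bits_per_weight / 8 / 1e9

sizes_gb = {bits: round(approx_size_gb(bits), 2) for bits in (2, 3, 4, 5, 6, 8)}
# e.g. 4-bit -> ~0.8 GB of weights, 8-bit -> ~1.6 GB
```

Add headroom for the KV cache and activations on top of the weight size when picking a quantization for a given amount of RAM or VRAM.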

