🦊 Fox1.4 - Reasoning Specialist

Fox1.4 is Fox1.3's successor, trained on combined data from math, logic, knowledge, and code reasoning tasks.

Performance

Custom Benchmark (10 questions):

  • ✅ All tasks: 100%
  • Penguin exception logic: ✅
  • $1.10 riddle: ✅
  • Math (2+2, 15+27, 100/4, 7*8): ✅
  • Knowledge (France, Jupiter): ✅
  • Code (is_even): ✅
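
The code item above refers to an even-number check; the card only gives the function name, so this is a minimal sketch of the kind of answer the benchmark presumably expects, alongside the arithmetic items listed:

```python
def is_even(n: int) -> bool:
    # True exactly when n is divisible by 2
    return n % 2 == 0

# The arithmetic items from the benchmark list
assert 2 + 2 == 4
assert 15 + 27 == 42
assert 100 / 4 == 25
assert 7 * 8 == 56
assert is_even(42) and not is_even(7)
```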

Estimated MMLU Score: ~40-50%

Architecture

  • Base Model: Qwen2.5-0.5B (merged with LoRA adapter)
  • Training: Combined data from 4 expert domains
  • Parameters: ~0.5B (merging a LoRA adapter preserves the base model's parameter count)
  • Format: Full merged model (safetensors)
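
Merging a LoRA adapter folds the low-rank update back into the base weights, which is why the merged model keeps the base model's parameter count and shapes. A small NumPy sketch of the arithmetic (the hidden size, rank, and scaling here are illustrative, not Fox1.4's actual training config):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, alpha = 8, 2, 16           # hidden size, LoRA rank, scaling (illustrative)
W = rng.normal(size=(d, d))      # frozen base weight
A = rng.normal(size=(r, d))      # LoRA down-projection
B = np.zeros((d, r))             # LoRA up-projection (zero-initialized in training)
B[0, 0] = 0.5                    # pretend training moved one entry

# Merge: W' = W + (alpha / r) * B @ A -- the adapter disappears into W
W_merged = W + (alpha / r) * B @ A
print(W_merged.shape)            # same shape as the base weight: (8, 8)
```

In practice this is what `peft`'s `merge_and_unload()` does per layer before saving the full safetensors checkpoint.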

Usage

Ollama

ollama pull teolm30/fox1.4
ollama run fox1.4

Python

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("teolm30/fox1.4")
tokenizer = AutoTokenizer.from_pretrained("teolm30/fox1.4")

inputs = tokenizer("Your question", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)  # default is only ~20 new tokens
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
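
Since Fox1.4 is built on Qwen2.5, it most likely inherits Qwen's ChatML prompt format, and wrapping the question in that template rather than passing raw text usually improves instruction following. A sketch assuming the standard Qwen markers (an assumption; check the tokenizer's chat template to confirm):

```python
def chatml_prompt(question: str, system: str = "You are a helpful assistant.") -> str:
    # Qwen2.5-style ChatML: each turn is <|im_start|>role ... <|im_end|>,
    # ending with an open assistant turn for the model to complete.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{question}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt("Why is the sky blue?")
```

When the tokenizer ships a chat template, `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` builds this string for you and is the safer choice.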

HuggingFace Inference

Click the "Use this model" button above to run inference directly on HuggingFace.

Comparison

Feature            Fox1.3          Fox1.4
Base               Qwen2.5-0.5B    Qwen2.5-0.5B
Training           LoRA            Merged LoRA
Format             GGUF            Safetensors
Custom Benchmark   100%            100%
Size               ~1 GB           ~1 GB

Model Details

  • Parameters: ~0.5B
  • Context Length: 16K
  • Quantization: None (full bf16)
  • Hardware: Runs on CPU or GPU
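
The bf16 figure above implies the download size: 16 bits is 2 bytes per parameter, so ~0.5B parameters lands at roughly 1 GB, matching the size row in the comparison. A quick back-of-the-envelope check (494M is Qwen2.5-0.5B's published parameter count; treat it as an assumption here):

```python
params = 494_000_000          # Qwen2.5-0.5B parameter count (assumed)
bytes_per_param = 2           # bf16 = 16 bits = 2 bytes
size_gb = params * bytes_per_param / 1e9
print(f"{size_gb:.2f} GB")    # ~0.99 GB, i.e. the "~1 GB" in the table
```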

Fox1.4: focused reasoning at its best.
