---
title: Fox1.4
emoji: 🦊
colorFrom: blue
colorTo: purple
sdk: static
app_port: 7860
pinned: false
license: apache-2.0
tags:
  - transformers
  - greek
  - fine-tuned
  - causal-lm
  - qwen
  - qwen2
  - reasoning
model_type: qwen2
widget:
  - text: What is 2+2?
  - text: 'Solve this riddle: I have hands but cannot clap'
  - text: Write python code to check if a number is prime
inference:
  minutes: 10
---

# 🦊 Fox1.4 - Reasoning Specialist

Fox1.4 is the successor to Fox1.3, trained on combined data covering math, logic, general-knowledge, and code reasoning tasks.

## Performance

**Custom Benchmark (10 questions):**

- ✅ All tasks: 100%
- Penguin exception logic: ✅
- $1.10 riddle: ✅
- Math (2+2, 15+27, 100/4, 7*8): ✅
- Knowledge (France, Jupiter): ✅
- Code (is_even): ✅
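For context, two of the benchmark items above can be sketched in Python. These are hypothetical reconstructions (the exact prompts and reference answers are not published in this README): the $1.10 riddle is assumed to be the classic bat-and-ball phrasing, and `is_even` is assumed to be a simple parity check.

```python
# $1.10 riddle (assuming the classic bat-and-ball phrasing):
# a bat and a ball cost $1.10 together; the bat costs $1.00 more than the ball.
# The intuitive answer ($0.10) is wrong; solving ball + (ball + 1.00) = 1.10:
ball = (1.10 - 1.00) / 2   # $0.05
bat = ball + 1.00          # $1.05
assert abs(bat + ball - 1.10) < 1e-9

# Code task (assumed form): check whether a number is even.
def is_even(n: int) -> bool:
    return n % 2 == 0

print(is_even(42))  # True
print(is_even(7))   # False
```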

**Estimated MMLU score:** ~40-50%

## Architecture

- **Base model:** Qwen2.5-0.5B (merged with a LoRA adapter)
- **Training:** combined data from 4 expert domains
- **Parameters:** ~900M
- **Format:** full merged model (safetensors)

## Usage

### Ollama

```bash
ollama pull teolm30/fox1.4
ollama run fox1.4
```

### Python

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged model and its tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("teolm30/fox1.4")
tokenizer = AutoTokenizer.from_pretrained("teolm30/fox1.4")

inputs = tokenizer("What is 2+2?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```