
VANTA Research

Independent AI research lab building safe, resilient language models optimized for human-AI collaboration



Mox-8B

Introducing Mox-8B, a new approach to AI assistance from VANTA Research. The model is designed to mimic human presence in conversational interaction. Training domains were carefully selected, and synthetic training data was generated specifically for Mox-8B.

Persona Design

Mox is trained with the following characteristics:

  • Self coherence
  • Direct opinions
  • Reasoned refusals
  • Collaborative presence
  • Epistemic confidence
  • Constructive disagreement
  • Authentic engagement
  • Grounded meta-awareness

Synthetic Training Data Generation Strategy

Each of the datasets included in Mox-8B's training was selected, designed, and custom-built for high-fidelity conversational interaction. Seed examples were created by Claude Opus 4.5, synthetically expanded by Mistral 3 Large, filtered for quality by DeepSeek V3.1, and then reviewed by a human for final approval.
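The four-stage flow above (seed, expand, filter, human review) can be sketched as a minimal Python pipeline. This is illustrative only: every stage function here is a local placeholder, whereas the actual pipeline called the named models (Claude Opus 4.5, Mistral 3 Large, DeepSeek V3.1) through their respective APIs, and the example data is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Example:
    prompt: str
    response: str
    provenance: list = field(default_factory=list)

def expand(seeds, n_variants=3):
    """Placeholder for synthetic expansion (Mistral 3 Large in the pipeline above)."""
    out = []
    for seed in seeds:
        for i in range(n_variants):
            out.append(Example(
                prompt=f"{seed.prompt} (variant {i})",
                response=seed.response,
                provenance=seed.provenance + ["expanded"],
            ))
    return out

def quality_filter(examples, min_len=10):
    """Placeholder for the automated quality pass (DeepSeek V3.1 in the pipeline above)."""
    return [ex for ex in examples if len(ex.response) >= min_len]

def human_review(examples, approved_prompts):
    """Final human pass: keep only explicitly approved prompts."""
    return [ex for ex in examples if ex.prompt in approved_prompts]

# Seed examples (Claude Opus 4.5 in the pipeline above) flow through each stage:
seeds = [Example("Should I rewrite this in Rust?", "Probably not; profile first.", ["seed"])]
candidates = quality_filter(expand(seeds))
final = human_review(candidates, approved_prompts={ex.prompt for ex in candidates})
```

The key design point the card describes is that filtering is layered: an automated model-based pass narrows the pool before a human makes the final call, which keeps the human review load proportional to the filtered set rather than the full synthetic expansion.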

Considerations & Licensing

This model is experimental in nature and trained specifically to:

  • have opinions and share them when asked
  • push back against illogical arguments, requests, or wishful thinking from the user
  • refuse tasks Mox independently concludes are illogical (e.g., "generate a 10-page report on the cultural significance of staplers")
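Since the weights ship in BF16 Safetensors under the repo id `vanta-research/mox-8b`, a standard Hugging Face `transformers` loading sketch might look like the following. The chat message and generation settings are illustrative assumptions, not official recommendations, and running this downloads the full 8B checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "vanta-research/mox-8b"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # matches the BF16 tensor type listed on the card
    device_map="auto",
)

# Hypothetical prompt playing to the persona traits described above
messages = [{"role": "user", "content": "Give me your honest take: is this idea any good?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```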
Model Details

  • Downloads last month: 58
  • Model size: 8B params
  • Tensor type: BF16
  • Format: Safetensors
