---
title: README
emoji: 🚀
colorFrom: green
colorTo: pink
sdk: docker
pinned: true
license: apache-2.0
short_description: Building safe and resilient partners in thought.
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/686c460ba3fc457ad14ab6f8/ffK_ttxC_5boGIKNMVOZc.png
---
![vanta_trimmed](https://cdn-uploads.huggingface.co/production/uploads/686c460ba3fc457ad14ab6f8/hcGtMtCIizEZG_OuCvfac.png)

# VANTA Research

*Independent AI research lab building safe, resilient language models optimized for human-AI collaboration*

[Website](https://vantaresearch.xyz) • [X](https://x.com/vanta_research) • [GitHub](https://github.com/vanta-research)

---

## Mission

VANTA Research develops language models highly optimized for human-AI collaboration. Additionally, VANTA Research produces work in:

1. **Pushing beyond standard benchmarks** - Surfacing capabilities invisible to traditional evaluation
2. **Exposing where models collapse, deceive, or diverge** - Systematic stress-testing for safety
3. **Developing innovative tooling to advance AI research** - Open-source frameworks for the community

We believe AI safety research should be accessible, transparent, and built for cognitive diversity.

---

## Featured Models

### [**Atom-Olmo3-7B**](https://huggingface.co/vanta-research/atom-olmo3-7b)

A specialized language model fine-tuned for collaborative problem-solving and creative exploration. Built on the Olmo-3-7B-Instruct foundation, it brings thoughtful, structured analysis to complex questions while maintaining an engaging, conversational tone.

### [**Mox-Tiny-1**](https://huggingface.co/vanta-research/mox-tiny-1)

Unlike traditional assistants that optimize for user satisfaction through validation, Mox will:

- Give you direct opinions instead of endless hedging
- Push back when your premise is flawed
- Admit uncertainty rather than feign confidence
- Engage with genuine curiosity and occasional humor

*Looking for quantizations of our models? We aim to include 4-bit quants in each model repo, but if you need additional quantization types, we recommend [team mradermacher](https://huggingface.co/mradermacher), who regularly provide high-quality quantizations of our models in a variety of sizes and formats. Thank you, team mradermacher.*

---

## Research Contributions

### **VRRE (VANTA Research Reasoning Evaluation)**

A novel semantics-based benchmark that detected a **2.5x reasoning improvement** completely invisible to standard benchmarks. This suggests we are systematically missing capability improvements when we "teach to the test."

[Read the paper →](https://zenodo.org/records/17162683)

### **Persona Collapse Framework**

A systematic characterization of reproducible failure modes in LLMs under atypical cognitive stress, identifying alignment blind spots invisible to standard evaluations.

[Read the paper →](https://zenodo.org/records/17188172)

### **Cognitive Fit vs. Alignment**

An argument for personalized synchronization in AI systems rather than universal "alignment," recognizing that optimal model behavior depends on the user's cognitive style.

[Read the paper →](https://zenodo.org/records/17346467)

---

## Open Source Philosophy

We stand on the shoulders of the open-source contributors who came before us. Our commitment is to contribute back and make AI development more accessible, transparent, and beneficial for all.

---

## Connect

**Interested in sponsoring our open-source work?** We are actively seeking partnerships and sponsorships for API access, compute, and funding. VANTA Research is completely independent and self-funded. If you'd like to fuel our contributions to open-source AI, please reach out through one of the channels below:

- **Email:** hello@vantaresearch.xyz
- **Website:** [vantaresearch.xyz](https://vantaresearch.xyz)
- **Twitter/X:** [@vanta_research](https://x.com/vanta_research)
- **GitHub:** [vanta-research](https://github.com/vanta-research)
- **Support:** [VANTA Research Merch Store](https://merch.vantaresearch.xyz) - *100% of proceeds from the online store are reinvested into VANTA Research's open-source contributions (compute, hardware, APIs, storage, etc.).*

---