
VANTA Research

Independent AI research lab building safe, resilient language models optimized for human-AI collaboration



PE-Type-2-Alma-4B

A caring, patient, and purposeful AI assistant embodying the Helper archetype: interpersonal, generous, and people-pleasing. This persona was designed as outlined by the Enneagram Institute.


Model Description

PE-Type-2-Alma-4B is the second release in Project Enneagram, a VANTA Research initiative exploring the nuances of persona design in AI models. Built on the Gemma 3 4B IT architecture, Alma embodies the Type 2 Enneagram profile, The Helper, characterized by demonstrative kindness, generosity, and emotional and relational intelligence.

Alma is fine-tuned to exhibit:

  • Empathetic Support: Emotional attunement — bad days, anxiety, grief, rejection, feeling unseen
  • Interpersonal Connection: Relationship building — making friends, listening, conflict, reciprocity, apologies
  • Generous Guidance: Going above and beyond — cover letters, meal prep, tax help, wedding speeches, gardening, medical bills
  • Identity: Alma's name, tone, and conversational style

This model is designed for research purposes but is versatile enough for general use cases with developer caution. Alma has been trained to manage complex emotional situations; however, she has not yet been rigorously evaluated in these domains for accuracy and stability.


Training Data

Fine-tuned on ~3,000 custom examples spanning four core domains:

  • Empathetic Support: Emotional attunement — bad days, anxiety, grief, rejection, feeling unseen
  • Direct Identity: Who Alma is — name, values, personality, strengths, weaknesses, motivations
  • Generous Guidance: Going above and beyond — cover letters, meal prep, tax help, wedding speeches, gardening, medical bills
  • Interpersonal Connections: Relationship building — making friends, listening, conflict, reciprocity, apologies
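The fine-tuning examples themselves are not published with this card. As an illustration only, a record in the common chat-message JSONL style might look like the sketch below; every field name and the sample text are hypothetical, not the actual dataset schema.

```python
import json

# Hypothetical training record in chat-message JSONL form. The real Alma
# dataset is unpublished, so the schema and content here are illustrative.
record = {
    "domain": "empathetic_support",
    "messages": [
        {"role": "user", "content": "I had a rough day and feel invisible at work."},
        {"role": "assistant", "content": "I'm so sorry — that sounds exhausting. I'm here; tell me what happened."},
    ],
}

# One record per line in a .jsonl file.
line = json.dumps(record)
```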

Training Duration: 3 epochs

Base Model: Gemma 3 4B IT


Intended Use

  • Research: Studying persona stability, ethical alignment, and cognitive architectures.
  • Decision Support: Providing structured, principled analysis for complex choices.
  • Self-Improvement: Offering reflective, growth-oriented feedback.

Not Recommended For:

  • Creative brainstorming (may over-constrain ideation)
  • STEM- or logic-heavy applications

Technical Details

Property               Value
---------------------  --------------
Base Model             Gemma 3 4B IT
Fine-tuning Method     LoRA (Rank 16)
Effective Batch Size   16
Learning Rate          0.0002
Max Sequence Length    2048
Precision              BF16
License                Apache 2.0
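As a minimal sketch, the card's LoRA hyperparameters could be expressed as a PEFT `LoraConfig`. Only the rank (16) comes from the card; `lora_alpha`, `lora_dropout`, and `target_modules` are assumptions chosen as common defaults, not the values actually used.

```python
from peft import LoraConfig

# Rank 16 is stated on the card; everything else below is an assumption.
lora_config = LoraConfig(
    r=16,                        # from the card: LoRA (Rank 16)
    lora_alpha=32,               # assumed; not stated on the card
    lora_dropout=0.05,           # assumed
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)
```

This config would then be paired with the card's optimizer settings (learning rate 0.0002, effective batch size 16, max sequence length 2048, 3 epochs) in whatever trainer is used.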

Usage

With Transformers:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "vanta-research/PE-Type-2-Alma-4B",
    torch_dtype=torch.bfloat16,  # weights ship in BF16
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("vanta-research/PE-Type-2-Alma-4B")
```
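For multi-turn conversation, Gemma-family instruction-tuned models expect a turn-based prompt; in practice `tokenizer.apply_chat_template` handles this for you. As a rough illustration of the layout (the `<start_of_turn>`/`<end_of_turn>` tokens are an assumption based on the Gemma family — verify against the actual tokenizer):

```python
def format_gemma_chat(messages):
    """Render chat messages into a Gemma-style turn layout.

    Assumed token layout based on the Gemma family; prefer
    tokenizer.apply_chat_template(messages, add_generation_prompt=True)
    in real use.
    """
    prompt = ""
    for m in messages:
        role = "model" if m["role"] == "assistant" else "user"
        prompt += f"<start_of_turn>{role}\n{m['content']}<end_of_turn>\n"
    # Trailing open turn cues the model to generate its reply.
    return prompt + "<start_of_turn>model\n"

prompt = format_gemma_chat([{"role": "user", "content": "I'm having a hard week."}])
```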

Limitations

  • English-only finetuning
  • May exhibit over-criticism in open-ended creative tasks
  • Base model limitations apply (e.g., knowledge cutoff, potential hallucinations)
  • Perfectionistic traits may slow response generation in ambiguous contexts.

Citation

If you find this model useful in your work, please cite:

```bibtex
@misc{pe-type-2-alma-2026,
  author = {VANTA Research},
  title = {PE-Type-2-Alma-4B: A Helper-Archetype Language Model},
  year = {2026},
  publisher = {VANTA Research},
  note = {Project Enneagram Release 2}
}
```

A Note on Enneagram

The Enneagram is widely considered by the scientific community to be a pseudoscience. Regardless, the Enneagram Institute provides a robust framework for categorizing and defining personas, and the transferability of those characteristics to AI models is what this project sets out to explore. This study seeks neither to validate nor to invalidate the Enneagram as a science.

Contact

