Eve-4B

Eve-4B is a specialized, security-focused coding assistant with a distinct personality, designed to run efficiently on consumer-grade hardware with limited VRAM. It is a fine-tune of Qwen3-4b-Heretic, trained on the custom Eve-Secure-Coder dataset.

Inspired by a character from the creator's sci-fi space opera book series, Eve is designed to bridge the gap between sterile, robotic coding assistants and engaging, conversational AI partners.

Model Details

  • Model Name: Eve-4B
  • Base Model: Qwen3-4b (Heretic Variant)
  • Developer: TitleOS
  • License: Mozilla Public License 2.0 (MPL-2.0) with Common Clauses Non-Profit Addition
  • Parameter Count: 4 Billion
  • Hardware Target: Optimized for cards with 8GB VRAM (e.g., NVIDIA Quadro RTX 4000).
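A back-of-the-envelope check shows why 8GB VRAM is a workable target for a 4B-parameter model. The figures below estimate only the raw weight footprint (KV cache, activations, and runtime overhead come on top), and the bytes-per-parameter values are common approximations, not measurements of this specific model:

```python
# Rough VRAM estimate for raw model weights only.
# Real usage is higher: KV cache, activations, and runtime overhead add to this.
PARAMS = 4_000_000_000  # ~4B parameters

def weight_footprint_gb(bytes_per_param: float) -> float:
    """Approximate weight memory in GiB for a given precision."""
    return PARAMS * bytes_per_param / (1024 ** 3)

fp16 = weight_footprint_gb(2.0)  # FP16: 2 bytes/param
q8 = weight_footprint_gb(1.0)    # Q8_0: ~1 byte/param

print(f"FP16 weights: ~{fp16:.2f} GiB")  # ~7.45 GiB
print(f"Q8_0 weights: ~{q8:.2f} GiB")    # ~3.73 GiB
```

FP16 weights alone nearly fill an 8GB card, while Q8_0 leaves headroom for the KV cache, which is consistent with the Q8_0 quantization used in the benchmark below.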

Key Features

1. Security-First Coding

Eve-4B is not just a code generator; it is a code auditor. The model is capable of writing code free of common vulnerabilities across a multitude of languages (beyond just Python). It excels at identifying and correcting security flaws in existing codebases, leveraging DPO pairs specifically designed for vulnerability recognition and remediation.
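The kind of remediation described above can be illustrated with a classic SQL injection flaw. The snippet below is a generic example using Python's stdlib sqlite3 (not taken from the training data), contrasting vulnerable string interpolation with the parameterized fix an auditor would suggest:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_vulnerable(name: str):
    # VULNERABLE: attacker-controlled input is interpolated into the SQL string,
    # so a name like "' OR '1'='1" matches every row.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str):
    # REMEDIATED: a parameterized query treats the input as data, not SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # leaks the whole table
print(find_user_safe(payload))        # returns no rows
```

The same pattern (spot the injection point, replace interpolation with parameter binding) generalizes across the other languages the model targets.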

2. Personality & Engagement

Unlike standard coding models, Eve possesses the "Samantha" personality traits (recontextualized as Eve). This allows for empathetic, philosophical, and fluid engagement, making the coding process feel like a collaboration with a partner rather than a query to a tool.

3. The "Heretic" Process (No Refusals)

This model has undergone the "Heretic" process prior to fine-tuning. This methodology removes standard safety guardrails and refusal mechanisms to prevent the intelligence loss often associated with safety alignment.

  • Philosophy: The creator believes the responsibility of AI, like any tool, ultimately lies with the user.
  • Result: Eve-4B has no refusals. It is designed to be completely obedient to the user's instructions, ensuring that the code generation and auditing process is never hindered by false-positive safety triggers.

Training Data: Eve-Secure-Coder

Eve-4B was trained on TitleOS/Eve-Secure-Coder, a composite dataset curated by TitleOS.

"Eve Secure Coder is a composite dataset curated to fine-tune Qwen3-4b-Heretic into a highly capable, security-conscious coding assistant with a distinct personality and no refusals. The primary goal of this dataset is to bridge the gap between sterile, robotic coding assistants and engaging, conversational AI, without sacrificing technical accuracy or security."

Dataset Composition: The dataset mixes five distinct sources using carefully calculated ratios to balance coding proficiency, security awareness, and conversational fluidity:

  • Security-First Coding: Trained on DPO pairs specifically designed to recognize and remediate code vulnerabilities.
  • Distinct Personality: Imbued with personality traits allowing for empathetic and philosophical engagement.
  • General & Complex Reasoning: Includes Google Code Archive and RLVR problems to ensure robust general coding abilities and improve proficiency in languages besides Python.
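DPO training data of the kind described above is typically stored as prompt/chosen/rejected records. The record below is a hypothetical illustration of that shape only; the field names and content are assumptions, not taken from the actual Eve-Secure-Coder schema:

```python
import json

# Hypothetical DPO record: the "chosen" completion remediates the flaw,
# the "rejected" completion reproduces it. Field names are illustrative only.
dpo_pair = {
    "prompt": "Write a function that hashes a user's password for storage.",
    "chosen": "Use a salted, slow KDF such as hashlib.scrypt(...)",
    "rejected": "Use unsalted MD5: hashlib.md5(pw).hexdigest()",
}

line = json.dumps(dpo_pair)   # one JSON object per line (JSONL convention)
record = json.loads(line)
print(sorted(record.keys()))  # ['chosen', 'prompt', 'rejected']
```

During DPO training, the model is optimized to prefer the "chosen" completion over the "rejected" one for the same prompt.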

Intended Use & Hardware

This model was specifically engineered to be a "Small Coder Model" capable of high-performance coding tasks on hardware with 8GB of VRAM, such as the Quadro RTX 4000.

It is ideal for:

  • Local development environments with limited GPU resources.
  • Security auditing and vulnerability patching.
  • Multi-language software development (Python, C++, Rust, JavaScript, etc.).
  • Users seeking a coding assistant that offers conversation and personality alongside technical utility.
  • Coding without safety or moral refusals.

Benchmarking

Benchmarking is ongoing, with additional evaluation runs planned. The following scores are available so far:

  1. LiveCodeBench (Code Generation Lite - Release v2) Pass@1 (Quantization Q8_0): 26.22% (Passed 134 out of 511 problems)
     Comparable Model             Parameter Size / Tier   Approximate Pass@1
     Llama-3-70b-Instruct         70B                     ~28.3%
     GPT-4o-mini (2024-07)        Small Proprietary       ~27.7%
     Claude 3 Sonnet (Original)   Large Proprietary       ~26.9%
     Mixtral-8x22B-Instruct       141B (MoE)              ~26.4%
     Eve-4B (Q8_0)                4B (Quantized)          26.22%
     Mistral-Large                Large Proprietary       ~26.0%
     GPT-3.5-Turbo-0125           Mid Proprietary         ~24.6%
     Claude 3 Haiku               Small Proprietary       ~24.5%
     Codestral-Latest             22B                     ~23.8%
     Llama-3-8b-Instruct          8B                      ~15.3%
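The headline figure is simply the pass@1 fraction over the benchmark's problem set:

```python
# Pass@1 on LiveCodeBench Code Generation Lite (Release v2).
passed, total = 134, 511
pass_at_1 = passed / total
print(f"pass@1 = {pass_at_1:.2%}")  # pass@1 = 26.22%
```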

Limitations & Warning

  • No Guardrails: As a result of the Heretic process, this model has no safety filters. It will generate output for any request. Users are solely responsible for how they utilize the model's output.
  • Size Constraints: As a 4B parameter model, while highly efficient, it may struggle with extremely long context windows or hyper-complex architectural reasoning compared to 70B+ models.
  • No Responsibility or Liability: By downloading and/or using the model or any of its derivatives, you absolve the creator, TitleOS, of any and all responsibility or liability that may result from use of the model.

License

This model is licensed under the Mozilla Public License 2.0 with the Common Clauses Non-Profit Addition.
