Yuuki NxG



A 3B Companion Model Fine-Tuned on a MacBook Pro

Personality-aligned language model trained with zero cloud compute budget.
Qwen2.5 architecture. 3 billion parameters. MacBook Pro (2020). $0.00.


Benchmarks    Usage    Sponsor



License   Base Model   Framework   Hardware   Eval




What is Yuuki NxG?

Yuuki NxG is a 3-billion-parameter language model fine-tuned from Qwen2.5-3B for open-ended conversation, emotional support, and general-purpose reasoning. It is the flagship release of the NxG model family developed by OpceanAI.

The model was trained entirely on a MacBook Pro (2020) with no external compute budget and no cloud GPU infrastructure. All benchmark evaluations were conducted on a Kaggle P100 GPU using lm-evaluation-harness.

Fine-tuning typically degrades base model benchmark scores, and Yuuki NxG was evaluated strictly 0-shot while the compared models use 5–25 shot prompting. Despite both handicaps, it achieves the highest TruthfulQA score among all compared 3B-scale models, including the Qwen2.5-3B base model from which it was derived.




Model Summary


Architecture

Property          | Value
Base Model        | Qwen2.5-3B
Parameters        | 3B
Fine-tuning       | Supervised fine-tuning (SFT)
Training Examples | ~5,000
Training Hardware | MacBook Pro (2020)
Context Length    | 32,768 tokens

Release

Property       | Value
Organization   | OpceanAI
Release Date   | February 2026
Languages      | English, Spanish
License        | Apache 2.0
Evaluation     | lm-evaluation-harness
Compute Budget | $0.00



Benchmark Results


All Yuuki NxG results are evaluated 0-shot. Competitor scores are sourced from their official technical reports and use few-shot prompting (5–25 shots depending on benchmark). Direct numerical comparison systematically favors base models evaluated with few-shot prompting.
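For reference, a 0-shot run of this kind can be expressed through the harness's Python API. This is a sketch under stated assumptions: the task names, dtype, and batch size below follow standard lm-evaluation-harness conventions and are not the project's published command.

from lm_eval import simple_evaluate

# 0-shot evaluation of the released checkpoint. float16 is used because
# the Kaggle P100 does not support bfloat16.
results = simple_evaluate(
    model="hf",
    model_args="pretrained=OpceanAI/Yuuki-NxG,dtype=float16",
    tasks=["mmlu", "arc_challenge", "hellaswag", "winogrande", "truthfulqa_mc2"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])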


Yuuki NxG Benchmark Evaluation


Model             | MMLU  | ARC-C | HellaSwag | WinoGrande | TruthfulQA | Eval
Yuuki NxG         | 60.65 | 45.31 | 52.25     | 63.14      | 50.87      | 0-shot
Qwen2.5-3B        | 65.6  | 56.5  | 74.6      | 71.1       | 48.9       | 5–25 shot
Llama-3.2-3B      | 58.0  | 43.0  | 71.0      | 67.0       | 44.0       | 5–25 shot
Phi-3-mini (3.8B) | 68.8  | 60.0  | 76.7      | 73.0       | 45.0       | 5–25 shot
Gemma-2-2B        | 52.0  | 42.0  | 71.0      | 65.0       | 39.0       | 5–25 shot

Yuuki NxG achieves the highest TruthfulQA score of any model in the table even though it alone was evaluated 0-shot, and it outscores the base model from which it was fine-tuned. This indicates that alignment fine-tuning improved factual honesty rather than degrading it, an outcome that runs counter to the typical fine-tuning tradeoff.

The HellaSwag degradation is expected and commonly reported for personality-aligned models, as sentence-completion benchmarks are sensitive to conversational fine-tuning.


MMLU Category Breakdown

Strongest Domains

Category                     | Score
Marketing                    | 87.18%
High School Psychology       | 83.67%
Sociology                    | 80.60%
World Religions              | 80.12%
US Foreign Policy            | 79.00%
Logical Fallacies            | 76.69%
High School Computer Science | 76.00%

Domain Averages

Domain          | Score
Social Sciences | 71.56%
Other           | 66.08%
STEM            | 56.17%
Humanities      | 52.92%
Overall         | 60.65%

The performance profile is consistent with a model optimized for conversation: strong in social sciences and psychology-related categories, below average in formal STEM domains and in the humanities aggregate. This is the expected and intended tradeoff for a companion-purpose model.




NxG Model Family


Released Models

Model          | Parameters | Description
Yuuki NxG      | 3B         | Full model, general conversation
Yuuki NxG Nano | 81M        | Lightweight, constrained environments

Community GGUF (via mradermacher)

Quantized independently and without solicitation, indicating organic community adoption prior to any formal announcement.

Format | Size
Q4_K_M | 2.0 GB
Q8_0   | 3.4 GB
F16    | 6.3 GB

Available at mradermacher/Yuuki-NxG-GGUF.
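
One of the quantized files can be fetched programmatically with the huggingface_hub client. A minimal sketch; the exact filename inside the repository is an assumption and should be checked against the repo's file list:

from huggingface_hub import hf_hub_download

# Download the Q4_K_M quant (~2.0 GB). The filename is assumed;
# verify it against the files listed in the GGUF repository.
gguf_path = hf_hub_download(
    repo_id="mradermacher/Yuuki-NxG-GGUF",
    filename="Yuuki-NxG.Q4_K_M.gguf",
)
print(gguf_path)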




Usage


With Transformers (PyTorch)

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "OpceanAI/Yuuki-NxG"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

messages = [
    {"role": "user", "content": "Hello, how are you?"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # start the assistant turn so the model replies rather than continuing the user message
    return_tensors="pt"
).to(model.device)

with torch.no_grad():
    outputs = model.generate(
        inputs,
        max_new_tokens=512,
        temperature=0.7,
        do_sample=True,
        repetition_penalty=1.1
    )

print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
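
For interactive use, token-by-token streaming can be added with transformers' TextStreamer. A sketch that reuses the model, tokenizer, and inputs from the example above:

from transformers import TextStreamer

# Print tokens to stdout as they are generated, omitting the prompt echo.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

with torch.no_grad():
    model.generate(
        inputs,
        max_new_tokens=512,
        temperature=0.7,
        do_sample=True,
        repetition_penalty=1.1,
        streamer=streamer
    )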

With llama.cpp (GGUF)

./llama.cpp/llama-cli -m yuuki-nxg-q4_k_m.gguf \
    -p "Hello, how are you?" \
    -n 256 \
    -t 4 \
    --temp 0.7 \
    --repeat-penalty 1.1

With Ollama

cat > Modelfile << EOF
FROM ./yuuki-nxg-q4_k_m.gguf

PARAMETER temperature 0.7
PARAMETER top_p 0.9
PARAMETER repeat_penalty 1.1
EOF

ollama create yuuki-nxg -f Modelfile
ollama run yuuki-nxg "Hello, how are you?"
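
The created model can also be called from code. A minimal sketch using the ollama Python package (assumed installed via pip install ollama):

import ollama

# Chat with the local model created by `ollama create` above.
response = ollama.chat(
    model="yuuki-nxg",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response["message"]["content"])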

Recommended Parameters

Parameter          | Value
Temperature        | 0.7
Top-p              | 0.9
Max new tokens     | 512–2048
Repetition penalty | 1.1
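
Applied to the Transformers example above, these defaults correspond to a generate call like the following. Top-p was not set in the earlier snippet, and the token budget here is an arbitrary value from the recommended range:

outputs = model.generate(
    inputs,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,               # nucleus sampling, per the recommended defaults
    repetition_penalty=1.1,
    max_new_tokens=1024      # anywhere in the 512–2048 range
)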



Training Details


Hardware

Component     | Specification
Device        | MacBook Pro (2020)
Chip          | Intel Core i5
RAM           | 16GB LPDDR4X
GPU           | Intel Iris Plus
Cloud Compute | None
Cost          | $0.00

Training Configuration

Parameter           | Value
Base Model          | Qwen2.5-3B
Method              | Supervised Fine-Tuning
Training Examples   | ~5,000
Optimizer           | AdamW
Learning Rate       | 2e-5
Max Sequence Length | 2,048 tokens

Yuuki NxG was produced through supervised fine-tuning on a curated conversational dataset. The training objective was to produce a model with consistent personality, high factual honesty, and broad general-knowledge retention from the Qwen2.5 base.

Training without GPU-accelerated cloud infrastructure imposes constraints on batch size and total training duration relative to commercially produced models. The resulting benchmark profile reflects these constraints: strong performance in domains well-represented in the training data, with expected degradation in areas requiring dense technical knowledge such as formal mathematics and physics.
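
The project's training script is not published. As a rough illustration, the configuration above maps onto a TRL SFTTrainer setup along these lines; the dataset path, epoch count, and batch settings are assumptions, not the project's actual values:

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical JSONL file holding the ~5,000 chat-format training examples.
dataset = load_dataset("json", data_files="conversations.jsonl", split="train")

config = SFTConfig(
    output_dir="yuuki-nxg-sft",
    max_seq_length=2048,              # stated max sequence length
    learning_rate=2e-5,               # stated learning rate
    optim="adamw_torch",              # AdamW, per the training configuration
    num_train_epochs=3,               # assumption
    per_device_train_batch_size=1,    # assumption: small batch for 16GB RAM
    gradient_accumulation_steps=8,    # assumption
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-3B",   # base model named in the configuration
    args=config,
    train_dataset=dataset,
)
trainer.train()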




Features


Personality Alignment

Fine-tuned for consistent, context-aware conversation. The model maintains a coherent identity across extended dialogues, with particular strength in emotional support and casual Q&A.


Factual Honesty

Achieves the highest TruthfulQA score (50.87%) among all compared 3B-scale models, including its own base model. Fine-tuning improved factual calibration rather than degrading it.


Multilingual

Functional in both English and Spanish. Primary evaluation in English; Spanish capability inherited from Qwen2.5 pretraining.

Zero-Budget Training

Trained entirely on owned hardware with no cloud compute expenditure. Demonstrates that meaningful alignment fine-tuning is accessible without data center infrastructure.


Community Adoption

Independently quantized and distributed by mradermacher before any formal announcement, a sign of organic community interest in the model's capabilities.


Open Source

Apache 2.0. Use commercially, modify, distribute. Full transparency on training methodology and evaluation protocol.




Limitations


  • Mathematical reasoning performance is below the Qwen2.5-3B base. Users requiring quantitative precision should use tool augmentation or a specialized model.
  • HellaSwag degradation reflects the standard tradeoff of personality fine-tuning on sentence-completion benchmarks.
  • Benchmark methodology: Yuuki NxG is evaluated 0-shot while competitor reports use 5–25 shot prompting, creating a systematic disadvantage in direct comparisons.
  • Safety alignment has not been formally evaluated. Not recommended for adversarial or high-stakes deployment without additional safety filtering.
  • Training scale: 5,000 examples on consumer hardware impose generalization limits relative to commercially scaled models.



Intended Use


Intended For

  • General-purpose conversational assistance
  • Emotional support and companionship applications
  • Educational Q&A in humanities and social sciences
  • Research into small-scale fine-tuning and personality alignment
  • Local deployment on consumer hardware

Not Intended For

  • Medical, legal, or financial advice
  • Tasks requiring high-precision mathematical reasoning
  • Applications requiring certified safety alignment
  • Production systems without additional safety review



Philosophy


"Meaningful AI development does not require a data center. It requires patience, clarity of purpose, and time."

Yuuki NxG was built to demonstrate that a fine-tuned 3B model trained by one person on owned hardware can compete with base models from large organizations on key benchmarks, and surpass them on factual honesty.




Related Projects


Project        | Description
Yuuki-NxG-Nano | 81M lightweight variant
Yuuki-3.7      | Earlier code generation checkpoint
Yuuki-best     | Best checkpoint of the v0.1 series
yuy            | CLI for managing and running Yuuki models
yuy-chat       | TUI chat interface
Yuuki-chat     | Web-based chat interface
Yuuki Space    | Interactive demo



Links


Model Weights   Live Demo   GGUF


YUY CLI   Sponsor   Discord




Community


  • Discord Server — Development discussion and user community
  • Twitter — Updates and announcements
  • GitHub — Source code and training scripts
  • GitHub Sponsors — Support the project
  • Ollama — Run locally with Ollama



Citation


@misc{awa_omg_2026,
    author       = { awa_omg },
    title        = { Yuuki-NxG (Revision 9a924f0) },
    year         = 2026,
    url          = { https://huggingface.co/OpceanAI/Yuuki-NxG },
    doi          = { 10.57967/hf/7915 },
    publisher    = { Hugging Face }
}



License


Apache License 2.0

Copyright (c) 2026 OpceanAI

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Use commercially, modify, distribute. Attribution required.




Updates


Date       | Milestone
2026-02-27 | Benchmark evaluation completed (Kaggle P100)
2026-02-27 | TruthfulQA: 50.87%, best among all compared 3B models
2026-02-27 | Community GGUF quantization by mradermacher
2026-02-27 | Yuuki NxG released on HuggingFace

Last updated: 2026-02-27




Built on a MacBook Pro. Trained on 5,000 examples. Competitive with models from teams of hundreds.


OpceanAI


The NxG family. More releases coming.
