# LumaAI-160M-v3

Model Name: LumaAI-160M-v3
Author: Natalie Parker (Phoenix Cameron)
License: Apache-2.0
Status: Production-ready checkpoint
Model Size: 160 Million parameters
Format: safetensors


# 🧬 Overview

LumaAI-160M-v3 is a fully independent, original language model created, trained, and fine-tuned from scratch by Natalie Parker.

It is not based on, not derived from, and not affiliated with any corporate model (OpenAI, Meta, Google, Mistral, Anthropic, etc.).

The training process consisted of:

1. **Base Training (“Leg 2”):** trained on a large, diverse foundational dataset.

2. **LoRA Fine-Tuning (“Leg 3”):** fine-tuned on a custom hybrid dataset for conversational flow, emotional depth, and creativity.

3. **Unified Weight Merge:** base and LoRA weights combined into the single cohesive checkpoint released here.

The result is a compact but expressive 160M-parameter model demonstrating emotional nuance, contextual reasoning, and stable personality behavior.
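The merge in step 3 follows the standard LoRA weight-folding arithmetic. Below is a toy NumPy sketch of that arithmetic only, not the actual training or merge code; the dimensions, rank, and scaling factor are illustrative assumptions:

```python
import numpy as np

# Toy dimensions; the real model's layers are far larger.
d_out, d_in, r, alpha = 8, 8, 2, 4

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))   # frozen base weight
A = rng.normal(size=(r, d_in))       # LoRA down-projection
B = np.zeros((d_out, r))             # LoRA up-projection (zero-initialized)
B[:, 0] = 1.0                        # pretend training moved it off zero

# Merging folds the low-rank update into the base weight:
#   W_merged = W + (alpha / r) * B @ A
W_merged = W + (alpha / r) * (B @ A)

# After the merge, the adapter path is no longer needed at inference:
x = rng.normal(size=(d_in,))
y_adapter = W @ x + (alpha / r) * (B @ (A @ x))
y_merged = W_merged @ x
assert np.allclose(y_adapter, y_merged)
```

The assertion shows why a merged checkpoint behaves identically to base-plus-adapter while shipping as one set of weights.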


# 🧠 Key Features

## ⭐ 1. Original Architecture

This model is completely original.
No corporate weights, tokenizers, or architectures were reused.

## ⭐ 2. Efficient 160M Size

Optimized for low-resource environments:

- Mobile phones (via WebLLM)
- CPUs
- 4–6 GB VRAM GPUs
- Edge devices
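A quick back-of-envelope calculation shows why 160M parameters fits those targets (weights only; activation and KV-cache memory are not included):

```python
params = 160_000_000

bytes_f32 = params * 4  # full precision, as shipped (F32 safetensors)
bytes_f16 = params * 2  # half precision, typical for GPU inference

print(f"F32 weights: {bytes_f32 / 1024**3:.2f} GiB")  # ~0.60 GiB
print(f"F16 weights: {bytes_f16 / 1024**3:.2f} GiB")  # ~0.30 GiB
```

Even in full F32 precision the weights occupy well under 1 GiB, which is why the model runs comfortably on CPUs and 4–6 GB GPUs.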

## ⭐ 3. Personality & Creative Training

Specialized fine-tuning to enhance:

- Emotional intelligence
- Human-like conversational flow
- Character consistency
- Creative writing
- Psychological depth
- Roleplay stability

πŸ“ Files Included

| Filename | Description |
|---|---|
| `config.json` | Model configuration blueprint |
| `generation_config.json` | Default text generation settings |
| `model.safetensors` | Full model weights (the Brain) |
| `special_tokens_map.json` | Defines control tokens (`<bos>`, `<eos>`) |
| `tokenizer_config.json` | Tokenizer configuration |
| `tokenizer.json` | Vocabulary mapping (the Dictionary) |

πŸ”§ Usage

## Python Example

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "natalieparker/LumaAI-160M-v3"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto"
)

prompt = "Hello Luma, how are you feeling today?"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=120,
    do_sample=True,  # required for temperature/top_p to take effect
    temperature=0.9,
    top_p=0.9
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
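The `temperature` and `top_p` arguments only matter when sampling is enabled. To make their effect concrete, here is a standalone sketch of temperature scaling plus nucleus (top-p) filtering on a toy next-token distribution; this mimics the idea, not the `transformers` internals:

```python
import math

def sample_filter(logits, temperature=0.9, top_p=0.9):
    """Apply temperature, then keep the smallest set of tokens whose
    cumulative probability reaches top_p (nucleus filtering)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Sort tokens by probability, descending, and keep the nucleus.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break

    # Renormalize over the kept tokens; all other tokens get probability 0.
    norm = sum(probs[i] for i in kept)
    return {i: probs[i] / norm for i in kept}

dist = sample_filter([2.0, 1.0, 0.5, -1.0])
print(dist)  # the low-probability tail token (index 3) is dropped
```

Lower `temperature` sharpens the distribution; lower `top_p` trims more of the unlikely tail, which together trade diversity against coherence.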

# ⚠️ Safety & Limitations

This model is:

- An experimental research model  
- Developed independently by a single creator  
- Not RLHF-aligned like large corporate models  
- Intended for users who will apply their own safety layers in production  

It should **not** be used as a substitute for:

- Professional medical advice  
- Financial guidance  
- Legal consultation  


---

# ❀️ Credits

Created with passion, experimentation, and continuous improvement by:

Phoenix Cameron / Natalie Parker


---

# πŸ“¦ Cite

```bibtex
@misc{lumaai160mv3,
  author       = {Natalie Parker},
  title        = {LumaAI-160M-v3: Original lightweight model},
  year         = 2025,
  howpublished = {HuggingFace Model Repository},
  url          = {https://huggingface.co/natalieparker/LumaAI-160M-v3}
}
```