# LumaAI-160M-v3

- **Model Name:** LumaAI-160M-v3
- **Author:** Natalie Parker (Phoenix Cameron)
- **License:** Apache-2.0
- **Status:** Production-ready checkpoint
- **Model Size:** 160 million parameters
- **Format:** safetensors
# 🧬 Overview
LumaAI-160M-v3 is a fully independent, original language model created, trained, and fine-tuned from scratch by Natalie Parker.
It is not based on, derived from, or affiliated with any corporate model (OpenAI, Meta, Google, Mistral, Anthropic, etc.).
The training process consisted of three stages:

1. **Base Training ("Leg 2"):** trained on a large and diverse foundational dataset.
2. **LoRA Fine-Tuning ("Leg 3"):** fine-tuned on a custom hybrid dataset for conversational flow, emotional depth, and creativity.
3. **Unified Weight Merge:** the base weights and LoRA adapter were combined into a single cohesive checkpoint (this version).
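For intuition, the merge stage above can be sketched as plain matrix arithmetic. This is an illustrative toy (not the actual training code): a LoRA adapter stores two small matrices `A` (r × in) and `B` (out × r), and merging folds the scaled low-rank update into the base weight, `W_merged = W_base + (alpha / r) · B·A`.

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def merge_lora(W_base, A, B, alpha, r):
    """Fold a low-rank LoRA update into the base weight matrix."""
    delta = matmul(B, A)  # (out x r) @ (r x in) -> (out x in)
    scale = alpha / r
    return [
        [w + scale * d for w, d in zip(w_row, d_row)]
        for w_row, d_row in zip(W_base, delta)
    ]

# Toy example: 2x2 base weight, rank-1 adapter.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]            # r = 1, in = 2
B = [[0.5], [0.25]]         # out = 2, r = 1
print(merge_lora(W, A, B, alpha=1.0, r=1))
```

In practice this is what adapter-merging utilities (e.g. PEFT's merge helpers) do per weight matrix, after which the adapter files are no longer needed.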
The result is a compact but expressive 160M-parameter model demonstrating emotional nuance, contextual reasoning, and stable personality behavior.
# 🧠 Key Features
## ✅ 1. Original Architecture
This model is completely original.
No corporate weights, tokenizers, or architectures were reused.
## ✅ 2. Efficient 160M Size
Optimized for low-resource environments:
- Mobile phones (via WebLLM)
- CPUs
- GPUs with 4–6 GB of VRAM
- Edge devices
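The fit on those targets follows from simple arithmetic: the weights alone take parameter count × bytes per parameter, with activations and KV cache adding overhead on top. A quick back-of-the-envelope check:

```python
# Rough memory needed just for the weights of a 160M-parameter model
# at common precisions (runtime overhead comes on top of this).
PARAMS = 160_000_000

def weight_bytes(params, bytes_per_param):
    """Raw size of the weight tensors at a given precision."""
    return params * bytes_per_param

for name, bpp in [("fp32", 4), ("fp16/bf16", 2), ("int8", 1)]:
    gib = weight_bytes(PARAMS, bpp) / (1024 ** 3)
    print(f"{name}: ~{gib:.2f} GiB")
```

Even at full fp32 precision the weights stay well under 1 GiB, which is why a 4–6 GB GPU or a CPU handles this model comfortably.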
## ✅ 3. Personality & Creative Training
Specialized fine-tuning to enhance:
- Emotional intelligence
- Human-like conversational flow
- Character consistency
- Creative writing
- Psychological depth
- Roleplay stability
# 📂 Files Included
| Filename | Description |
|---|---|
| `config.json` | Model configuration blueprint |
| `generation_config.json` | Default text generation settings |
| `model.safetensors` | Full model weights (the "brain") |
| `special_tokens_map.json` | Defines control tokens (`<bos>`, `<eos>`) |
| `tokenizer_config.json` | Tokenizer configuration |
| `tokenizer.json` | Vocabulary mapping (the "dictionary") |
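As a rough orientation, a Hugging Face-style `special_tokens_map.json` typically looks something like the sketch below. This is illustrative only; the `<unk>` and `<pad>` entries are assumptions here, and the file shipped with the checkpoint is authoritative.

```json
{
  "bos_token": "<bos>",
  "eos_token": "<eos>",
  "unk_token": "<unk>",
  "pad_token": "<pad>"
}
```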
# 🔧 Usage
## Python Example

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "natalieparker/LumaAI-160M-v3"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
)

prompt = "Hello Luma, how are you feeling today?"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=120,
    do_sample=True,  # required for temperature/top_p to take effect
    temperature=0.9,
    top_p=0.9,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
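To clarify what `temperature` and `top_p` actually do during sampling, here is a minimal, self-contained sketch of one decoding step (`generate()` performs the equivalent internally when sampling is enabled; this is not the Transformers implementation itself):

```python
import math
import random

def sample_next_token(logits, temperature=0.9, top_p=0.9, rng=random):
    """Draw one token id using temperature scaling plus nucleus (top-p) filtering."""
    # Temperature: divide logits before softmax; <1 sharpens, >1 flattens.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Nucleus: keep the smallest set of top tokens whose mass reaches top_p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break

    # Renormalize over the nucleus and draw one token id.
    z = sum(probs[i] for i in kept)
    r = rng.random() * z
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]

print(sample_next_token([2.0, 1.0, 0.1, -1.0]))
```

Lowering `top_p` shrinks the nucleus toward the single most likely token (greedy-like output); raising `temperature` flattens the distribution and makes replies more varied.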
# ⚠️ Safety & Limitations
This model is:
- An experimental research model
- Developed independently by a single creator
- Not RLHF-aligned like large corporate models
- Intended for users who will apply their own safety layers in production
It should **not** be used as a substitute for:
- Professional medical advice
- Financial guidance
- Legal consultation
---
# ❤️ Credits
Created with passion, experimentation, and continuous improvement by:
Phoenix Cameron / Natalie Parker
---
# 📦 Cite
```bibtex
@misc{lumaai160mv3,
  author       = {Natalie Parker},
  title        = {LumaAI-160M-v3: Original Lightweight Language Model},
  year         = {2025},
  howpublished = {Hugging Face model repository},
  url          = {https://huggingface.co/natalieparker/LumaAI-160M-v3}
}
```