---
license: apache-2.0
language:
- en
tags:
- renpy
- visual-novel
- storytelling
- creative-writing
- llama-3
- qlora
- finetuned
- text-generation
- natural-language-to-script
- instruction-free
inference: false
widget:
- text: "A detective wakes up in a town where no one remembers him."
- text: "Every time she falls asleep, she wakes up in another version of her life."
- text: "A cursed mirror swaps your life with your reflection."
datasets:
- custom
model-index:
- name: Secunda-0.1-RAW
results: []
---
```ascii
▄▄▄▄▄ ▄███▄ ▄█▄ ▄ ▄ ██▄ ██ ████▄ ▄█
█ ▀▄ █▀ ▀ █▀ ▀▄ █ █ █ █ █ █ █ █ ██
▄ ▀▀▀▀▄ ██▄▄ █ ▀ █ █ ██ █ █ █ █▄▄█ █ █ ██
▀▀▄▄▄▄▀ █▄ ▄▀ █▄ ▄▀ █ █ █ █ █ █ █ █ █ █ █ ▐█
▀███▀ ▀███▀ █▄ ▄█ █ █ █ ███▀ █ ▀████ ▐█ ▐
▀▀▀ █ ██ █
▀
⋆⋆୨୧˚ THE PRIMÉTOILE ENGINE ˚୨୧⋆。˚⋆
— Visual Novel generation under starlight —
```
| Version | Type | Strengths | Weaknesses | Recommended Use |
|---------|------|-----------|------------|-----------------|
| [Secunda-0.1-GGUF](https://huggingface.co/Yaroster/Secunda-0.1-GGUF) / [RAW](https://huggingface.co/Yaroster/Secunda-0.1-RAW) | Instruction | Most precise<br>Coherent code<br>Perfected Modelfile | Smaller context / limited flexibility | **Production / Baseline** |
| [Secunda-0.3-F16-QA](https://huggingface.co/Yaroster/Secunda-0.3-F16-QA) | QA-based input | Acceptable for question-based generation | Less accurate than 0.1<br>Not as coherent | Prototyping (QA mode) |
| [Secunda-0.3-F16-TEXT](https://huggingface.co/Yaroster/Secunda-0.3-F16-TEXT) | Text-to-text | Flexible for freeform tasks | Slightly off<br>Modelfile-dependent | Experimental / text rewrite |
| [Secunda-0.3-GGUF](https://huggingface.co/Yaroster/Secunda-0.3-GGUF) | GGUF build | Portable GGUF of 0.3 | Inherits 0.3 weaknesses | Lightweight local testing |
| [Secunda-0.5-RAW](https://huggingface.co/Yaroster/Secunda-0.5-RAW) | QA Natural | Best QA understanding<br>Long-form generation potential | Inconsistent output length<br>Some instability | Research / testing LoRA |
| [Secunda-0.5-GGUF](https://huggingface.co/Yaroster/Secunda-0.5-GGUF) | GGUF build | Portable, inference-ready version of 0.5 | Shares the issues of 0.5 | Offline experimentation |
| [Secunda-0.1-RAW](https://huggingface.co/Yaroster/Secunda-0.1-RAW) | Instruction | Same base as 0.1-GGUF | Same as 0.1 | Production backup |
---
## 🌙 Overview
**Secunda-0.1-RAW** is the original release of the Secunda fine-tuned model family, trained to produce polished **Ren'Py `.rpy` scripts** from structured instructions!
The model outputs:
* `define` blocks for named characters (with colors!)
* `image` declarations for scenes & sprites
* A clear `label start:` structure
* Emotional dialogue, branching `menu`s, `jump`s, and proper `return`
This version is *the most stable so far* — often more reliable than 0.3!
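For illustration, here is a minimal sketch of the `.rpy` shape described above. This is a hand-written example of the target format, not actual model output, and all names, colors, and filenames are hypothetical:

```renpy
define m = Character("Mira", color="#c8a2c8")

image bg town = "bg_town.png"
image mira neutral = "mira_neutral.png"

label start:
    scene bg town
    show mira neutral
    m "Do I... know you from somewhere?"
    menu:
        "Tell her the truth":
            jump truth_route
        "Stay silent":
            jump silent_route

label truth_route:
    m "Then everything I remember is wrong."
    return

label silent_route:
    m "Maybe some things are better forgotten."
    return
```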
---
> ⚠️ **No human-made data was used to train this AI!** Secunda takes much pride in ensuring the training data is fully scripted!

If you like visual novels, please visit [itch.io](https://itch.io) and support independent creators!
## ✨ Moonlight Specs
* **Base model**: `meta-llama/Meta-Llama-3.1-8B`
* **Fine-tuning**: QLoRA (r=64, alpha=16, dropout=0.1)
* **Precision**: Float16 (FP16)
* **Max tokens**: 4096
* **Hardware used**: RTX 4070, 64GB RAM
---
## 🪄 Inference in the Starlight
### Installation
```bash
pip install transformers accelerate peft bitsandbytes
```
### Inference Script Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

BASE_MODEL = "meta-llama/Meta-Llama-3.1-8B"
LORA_PATH = "path/to/Secunda-0.1-RAW"

# Load the base model in FP16 and attach the Secunda LoRA adapter
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, LORA_PATH)
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)

def build_prompt(idea):
    return f"""You are an expert writer of visual novels in Ren'Py.
Generate a complete and polished Ren'Py script based on the following concept:
\"\"\"{idea}\"\"\"
Your output should include:
- `define` blocks for all characters (with names and color codes)
- `image` blocks for key backgrounds and character sprites
- `label start:` with a clear beginning
- Proper `scene`, `show`, `menu`, `play music/sound`, and `jump` statements
- Emotional dialogue and natural pacing
- A proper ending (`return`) or narrative closure
Structure the script as a `.rpy` file — do not include explanations, comments, or placeholder text."""

prompt = build_prompt("A young girl finds a photo album that shows moments that haven't happened yet.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=2048,
        do_sample=True,  # required for temperature/top_p to take effect
        temperature=0.85,
        top_p=0.95,
    )
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
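Note that `generate` returns the prompt followed by the completion, so the echoed prompt usually needs to be stripped before saving the script. A minimal helper sketch (the filename and sample strings are just examples, not part of the model's tooling):

```python
def extract_script(decoded: str, prompt: str) -> str:
    """Drop the echoed prompt, keeping only the generated .rpy text."""
    if decoded.startswith(prompt):
        decoded = decoded[len(prompt):]
    return decoded.strip()

# Example with placeholder strings; in real usage pass the decoded output and prompt
sample_prompt = "You are an expert writer of visual novels in Ren'Py."
sample_output = sample_prompt + "\nlabel start:\n    return"
script = extract_script(sample_output, sample_prompt)
with open("generated_scene.rpy", "w", encoding="utf-8") as f:
    f.write(script)
```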
---
## 🌌 Evaluation
This model has:
* Generated 1000+ `.rpy` files
* Passed human review for structure, creativity & syntax
* Produced over 90% valid output with minimal manual tweaks
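The structural review above can be approximated with a quick sanity check on generated scripts. A hedged sketch — the required markers come from the output format listed in the Overview, and this is not part of any official validation tooling:

```python
import re

def looks_valid(script: str) -> bool:
    """Cheap structural check: character define, start label, and an ending."""
    has_define = re.search(r"^define\s+\w+\s*=", script, re.MULTILINE) is not None
    has_start = "label start:" in script
    has_ending = "return" in script
    return has_define and has_start and has_ending

sample = 'define m = Character("Mira")\nlabel start:\n    m "Hello."\n    return\n'
print(looks_valid(sample))  # → True
```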
---
## ☁️ Talking to the Moon
If you use **Secunda-0.1-RAW**, please star and cite:
```bibtex
@misc{secunda2025,
  title={Secunda-0.1-RAW},
  author={Yaroster},
  year={2025},
  note={https://huggingface.co/Yaroster/Secunda-0.1-RAW}
}
```
---
## 🪐 From the Cosmos
* [Secunda-0.3-F16-QA](https://huggingface.co/Yaroster/Secunda-0.3-F16-QA) — experimental question-answer variant
* [Secunda-0.3-F16-TEXT](https://huggingface.co/Yaroster/Secunda-0.3-F16-TEXT) — for less structured generation
* [Primétoile](https://yaroster.com) — full VN pipeline
---
⋆°.☾ Secunda-0.1-RAW ☽.°⋆
> ✧ Because every visual novel deserves to begin with a spark of magic ✧
⚠️ This repo contains **only the LoRA adapter weights**. To use the model, download the base `LLaMA 3.1` from Meta (terms apply): [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)