TinyLlama-1.1B-HolyC Layer 1


Layer 1 is the explanatory adapter in this stack. It was tuned to make TinyLlama more fluent at reading HolyC, describing what TempleOS routines are doing, and staying on-topic when the source looks wonderfully unhinged to everyone except Terry Davis.

What It Is Good At

  • explaining HolyC functions and subsystems in plain language
  • staying grounded in TempleOS-flavored code and naming conventions
  • acting as the interpretive layer before a more generation-heavy second pass

Training Snapshot

This adapter was fine-tuned from TinyLlama/TinyLlama-1.1B-Chat-v1.0 for a HolyC explanation task. The preserved training run shows a rapid early drop in loss and a stable learning curve thereafter.

Training metrics

Summary metrics from the preserved run logs:

Metric                              Value
Initial loss                        1.4824
Final logged loss                   0.6263
Best logged loss                    0.4445
Initial mean token accuracy         0.6967
Final logged mean token accuracy    0.8483
Best logged mean token accuracy     0.8850

Training Data

Layer 1 is associated with the explanatory side of the release:

  • Aptlantis/holyC-tinyllama-two-layer: umbrella dataset bundle
  • explanation dataset: datasets/explanations/holyC_finetune.jsonl inside that bundle, 3448 records
  • codebase dataset: datasets/codebase/holyC_codebase.jsonl inside that bundle, 3448 records

The explanatory dataset pairs HolyC code with “explain what this function does” style supervision. The codebase corpus provides the raw substrate those samples were drawn from.
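Both corpora are plain JSONL, one record per line. A minimal sketch of loading them for inspection (the field names in the commented example are assumptions, since the record schema is not documented here):

```python
import json
from pathlib import Path

def load_jsonl(path):
    """Read a JSONL file into a list of dicts, one per non-empty line."""
    records = []
    with Path(path).open(encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records

# Example, using the path inside the release bundle:
# records = load_jsonl("datasets/explanations/holyC_finetune.jsonl")
# print(len(records))  # the bundle documents 3448 records
```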

How To Use

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_path = "./layer1"

# Load the tokenizer saved alongside the adapter, then attach the
# LoRA weights to the frozen TinyLlama base model.
tokenizer = AutoTokenizer.from_pretrained(adapter_path)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_path)

For local use inside this release bundle, point adapter_path at the layer1 directory. For Hugging Face use, replace it with the uploaded repo ID.
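When prompting, requests should follow the base model's chat format; in practice `tokenizer.apply_chat_template` handles this automatically. As a minimal illustrative sketch, the layout can also be built by hand (the Zephyr-style template below is an assumption based on the TinyLlama chat base; the authoritative template lives in the tokenizer config):

```python
def build_prompt(code: str) -> str:
    """Format an explanation request in the Zephyr-style chat layout
    assumed for TinyLlama-1.1B-Chat-v1.0."""
    user_msg = f"Explain what this HolyC function does:\n\n{code}"
    return f"<|user|>\n{user_msg}</s>\n<|assistant|>\n"

prompt = build_prompt("U0 Beep() { Snd(62); Sleep(100); Snd(0); }")
# With the model and tokenizer loaded as above:
# inputs = tokenizer(prompt, return_tensors="pt")
# output = model.generate(**inputs, max_new_tokens=256)
```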

Intended Use

Use layer 1 when you want:

  • HolyC-aware explanations
  • function walkthroughs
  • a first-stage adapter that helps the model read TempleOS code before a second generation pass

Limitations

  • It is a LoRA adapter, not a merged standalone model.
  • It inherits the strengths and limits of the TinyLlama 1.1B chat base.
  • The preserved training metrics are partial run artifacts rather than a full benchmark suite.
  • HolyC fluency does not imply broader compiler correctness, systems safety, or formal verification.

Notes

This is the first-layer adapter from the original project. In the full two-layer release it serves as the interpretive stage: the part that says, “yes, that function is wild, but here is what it is trying to do.”
