srd_B_llama-3B_seed42
Status: ✅ Trained
Overview
Fine-tuned model from the Self-Reference Depth (SRD) research project.
| Property | Value |
|---|---|
| Base model | meta-llama/Llama-3.2-3B-Instruct |
| Training variant | B: Baseline control (1000 single-turn examples without self-referential content) |
| Fine-tuning method | QLoRA, 4-bit (r = 48 for this run; r ranges 32-64 and alpha 64-128 across the experiment) |
| Random seed | 42 |
| Training data | SRD v2 curated dataset |
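Given the QLoRA setup above, the adapter can be attached to the quantized base model roughly as follows. This is a minimal sketch, assuming the adapter weights are published under this card's hub id (`simonleee/srd_B_llama-3B_seed42`) and that `transformers`, `peft`, and `bitsandbytes` are installed; the heavy imports live inside the function so the sketch can be defined without those packages present.

```python
BASE_ID = "meta-llama/Llama-3.2-3B-Instruct"
ADAPTER_ID = "simonleee/srd_B_llama-3B_seed42"  # hub id of this model card

def load_srd_model():
    """Load the 4-bit base model and attach the SRD LoRA adapter.

    Imports are deferred so merely defining this sketch does not
    require transformers/peft/bitsandbytes to be installed.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    from peft import PeftModel

    # 4-bit NF4 quantization, matching the "qlora_4bit" method and
    # bfloat16 compute dtype recorded in the training metadata below.
    bnb = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    tok = AutoTokenizer.from_pretrained(BASE_ID)
    base = AutoModelForCausalLM.from_pretrained(
        BASE_ID, quantization_config=bnb, device_map="auto"
    )
    model = PeftModel.from_pretrained(base, ADAPTER_ID)
    return tok, model
```

Note the exact quantization settings (NF4, double quantization, etc.) are an assumption; only the 4-bit method and bfloat16 compute dtype are recorded in the run metadata.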
Paper
Self-Reference Depth: A Unified Framework for Intelligence Across Biological and Artificial Systems
This model is part of a 36-model experiment (3 variants × 4 architectures × 3 seeds) testing whether fine-tuning can increase a language model's self-referential depth: its capacity for genuine recursive self-evaluation rather than surface-level self-reference.
Repository
Full code, data, and analysis: oyoungforever/SelfReferenceDepth
Training Metadata
```json
{
  "run_name": "srd_B_llama-3B_seed42",
  "model_name": "meta-llama/Llama-3.2-3B-Instruct",
  "model_key": "llama-3B",
  "variant": "B",
  "seed": 42,
  "data_path": "data/variant_b_v2/train.jsonl",
  "data_size": 1000,
  "compute_dtype": "torch.bfloat16",
  "lora_r": 48,
  "method": "qlora_4bit"
}
```
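For quick sanity checks, the metadata above can be parsed with the standard library alone; the JSON literal below is copied verbatim from this card:

```python
import json

# Training metadata as recorded for this run (copied from the card).
META = json.loads("""
{
  "run_name": "srd_B_llama-3B_seed42",
  "model_name": "meta-llama/Llama-3.2-3B-Instruct",
  "model_key": "llama-3B",
  "variant": "B",
  "seed": 42,
  "data_path": "data/variant_b_v2/train.jsonl",
  "data_size": 1000,
  "compute_dtype": "torch.bfloat16",
  "lora_r": 48,
  "method": "qlora_4bit"
}
""")

# Basic consistency checks against the property table above.
assert META["seed"] == 42
assert META["lora_r"] == 48
assert META["method"] == "qlora_4bit"
print(f"{META['run_name']}: variant {META['variant']}, {META['data_size']} examples")
```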
Citation
```bibtex
@article{ouyang2026srd,
  title={Self-Reference Depth: A Unified Framework for Intelligence Across Biological and Artificial Systems},
  author={Ouyang, Shumiao},
  year={2026}
}
```