---
language: en
license: apache-2.0
tags:
  - self-reference-depth
  - srd
  - metacognition
  - fine-tuning
  - qlora
  - research
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
---

# srd_B_llama-8B_seed42

**Status:** ✅ Trained

## Overview

A fine-tuned model from the Self-Reference Depth (SRD) research project.

| Property | Value |
|---|---|
| Base model | `meta-llama/Meta-Llama-3.1-8B-Instruct` |
| Training variant | B: baseline control (1000 single-turn examples without self-referential content) |
| Fine-tuning method | QLoRA (r=32-64, alpha=64-128) |
| Random seed | 42 |
| Training data | SRD v2 curated dataset |
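For a rough sense of what the LoRA rank in the table means in practice, the sketch below counts the extra trainable parameters a rank-r adapter pair adds to a single linear projection. The hidden size of 4096 for Llama-3.1-8B is an assumption used only for illustration; the exact projections QLoRA targets in this run are not stated here.

```python
def lora_param_count(r: int, d_in: int, d_out: int) -> int:
    """Extra trainable parameters from one LoRA adapter pair:
    A has shape (d_in, r) and B has shape (r, d_out)."""
    return r * (d_in + d_out)

# Assuming a 4096x4096 projection (Llama-3.1-8B hidden size),
# the rank-64 adapter recorded in this run's metadata adds:
print(lora_param_count(r=64, d_in=4096, d_out=4096))  # 524288
```

Only these low-rank matrices are trained; the 4-bit-quantized base weights stay frozen, which is what keeps QLoRA's memory footprint small.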

## Paper

*Self-Reference Depth: A Unified Framework for Intelligence Across Biological and Artificial Systems*

This model is part of a 36-model experiment (3 variants × 4 architectures × 3 seeds) testing whether fine-tuning can increase a language model's self-referential depth — its capacity for genuine recursive self-evaluation rather than surface-level self-reference.

## Repository

Full code, data, and analysis: `oyoungforever/SelfReferenceDepth`

## Training Metadata

```json
{
  "run_name": "srd_B_llama-8B_seed42",
  "model_name": "meta-llama/Llama-3.1-8B-Instruct",
  "model_key": "llama-8B",
  "variant": "B",
  "seed": 42,
  "data_path": "data/variant_b_v2/train.jsonl",
  "data_size": 1000,
  "compute_dtype": "torch.float16",
  "lora_r": 64,
  "method": "qlora_4bit"
}
```
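Since the metadata is plain JSON, it can be loaded and sanity-checked programmatically. The sketch below assumes only that the run name encodes the variant, model key, and seed in the pattern visible above; it is an illustrative check, not part of the project's tooling.

```python
import json

# The run metadata exactly as recorded for this model.
RUN_METADATA = """{
  "run_name": "srd_B_llama-8B_seed42",
  "model_name": "meta-llama/Llama-3.1-8B-Instruct",
  "model_key": "llama-8B",
  "variant": "B",
  "seed": 42,
  "data_path": "data/variant_b_v2/train.jsonl",
  "data_size": 1000,
  "compute_dtype": "torch.float16",
  "lora_r": 64,
  "method": "qlora_4bit"
}"""

meta = json.loads(RUN_METADATA)

# Check that the run name is consistent with its component fields
# (assumed naming pattern: srd_<variant>_<model_key>_seed<seed>).
expected = f"srd_{meta['variant']}_{meta['model_key']}_seed{meta['seed']}"
assert meta["run_name"] == expected
print(expected)  # srd_B_llama-8B_seed42
```

A check like this is handy when iterating over all 36 runs in the experiment grid, since a mismatched name usually signals a mislabeled artifact.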

## Citation

```bibtex
@article{ouyang2026srd,
  title={Self-Reference Depth: A Unified Framework for Intelligence Across Biological and Artificial Systems},
  author={Ouyang, Shumiao},
  year={2026}
}
```