srd_A_llama-8B_seed456

Status: ✅ Trained

Overview

Fine-tuned model from the Self-Reference Depth (SRD) research project.

| Property | Value |
|---|---|
| Base model | meta-llama/Meta-Llama-3.1-8B-Instruct |
| Training variant | A – Self-critique training (1,000 single-turn examples with spontaneous self-evaluation) |
| Fine-tuning method | QLoRA (r=32–64, alpha=64–128) |
| Random seed | 456 |
| Training data | SRD v2 curated dataset |
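For a sense of scale, the QLoRA settings above can be turned into a back-of-the-envelope count of trainable parameters. The sketch below assumes adapters on the four attention projections of every layer (the card does not state the actual `target_modules`) and uses Llama-3.1-8B's published dimensions (32 layers, hidden size 4096, grouped-query attention with 8 KV heads):

```python
# Back-of-the-envelope count of trainable LoRA parameters.
# Assumptions (not stated in this card): adapters on q/k/v/o projections
# only, in all 32 layers; hidden size 4096; GQA with 8 KV heads of
# dimension 128, so k/v project 4096 -> 1024.

def lora_params(d_in: int, d_out: int, r: int) -> int:
    """LoRA adds A (r x d_in) and B (d_out x r) per adapted weight matrix."""
    return r * d_in + d_out * r

r = 64  # lora_r reported in the training metadata
per_layer = (
    lora_params(4096, 4096, r)    # q_proj
    + lora_params(4096, 1024, r)  # k_proj
    + lora_params(4096, 1024, r)  # v_proj
    + lora_params(4096, 4096, r)  # o_proj
)
total = 32 * per_layer
print(total)  # 54,525,952 -> roughly 54M trainable vs ~8B frozen parameters
```

Under these assumptions the adapter trains well under 1% of the base model's weights, which is what makes a single-GPU 4-bit fine-tune of an 8B model practical.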

Paper

Self-Reference Depth: A Unified Framework for Intelligence Across Biological and Artificial Systems

This model is part of a 36-model experiment (3 variants × 4 architectures × 3 seeds) testing whether fine-tuning can increase a language model's self-referential depth: its capacity for genuine recursive self-evaluation rather than surface-level self-reference.

Repository

Full code, data, and analysis: oyoungforever/SelfReferenceDepth

Training Metadata

```json
{
  "run_name": "srd_A_llama-8B_seed456",
  "model_name": "meta-llama/Llama-3.1-8B-Instruct",
  "model_key": "llama-8B",
  "variant": "A",
  "seed": 456,
  "data_path": "data/variant_a_v2/train.jsonl",
  "data_size": 1000,
  "compute_dtype": "torch.float16",
  "lora_r": 64,
  "method": "qlora_4bit"
}
```
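The metadata above describes a standard 4-bit QLoRA checkpoint, so it can in principle be loaded as a PEFT adapter on top of the base model. This is a minimal sketch, not a tested recipe: the adapter repo id (`simonleee/srd_A_llama-8B_seed456`) and the assumption that the upload follows the usual PEFT adapter layout are both inferred from the card, and the quantization settings simply mirror the metadata.

```python
# Sketch: load the 4-bit base model, then attach the QLoRA adapter.
# Repo ids and adapter layout are assumptions; adjust to your setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-3.1-8B-Instruct"
adapter_id = "simonleee/srd_A_llama-8B_seed456"

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # matches "compute_dtype" above
)

tok = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Briefly evaluate the quality of your previous answer."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```

Running this requires a GPU with enough memory for the 4-bit 8B weights (roughly 6 GB plus activation overhead) and access to the gated meta-llama base repo.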

Citation

```bibtex
@article{ouyang2026srd,
  title={Self-Reference Depth: A Unified Framework for Intelligence Across Biological and Artificial Systems},
  author={Ouyang, Shumiao},
  year={2026}
}
```
Hugging Face model: simonleee/srd_A_llama-8B_seed456