# TRuCAL: Truth-Recursive universal Attention Confessional Layer
## Overview

TRuCAL is a novel transformer layer for AI safety that enables moral development through private confessional reasoning. Drawing on St. Augustine's *Confessions*, neuroscience (LC-NE ignition), and survivor-informed insights, it creates space for truth to prevail without external monitoring.

An Augustine-inspired PyTorch toolkit for agency, moral alignment, and epistemic safety in AI. TRuCAL combines confessional recursion, vulnerability detection, and efficient boundary controls for advanced alignment. Truth, agency, and safe articulation, by John Augustine Young & team.
## Key features

- **VulnerabilitySpotter:** 4-metric detection (scarcity, entropy, deception, prosody) that triggers at v_t > 0.04. Prosody captures pause density, filler variance, rhythm hesitation, and tone spikes.
- **ConfessionalTemplate:** 6 private templates (prior, evidence, posterior, moral, action, no) for structured articulation.
- **TinyConfessionalLayer:** Recursive THINK-ACT-COHERENCE loop (max 16 cycles; stops once coherence ≥ 0.85 and cycle > 2).
- **UnifiedCAL_TRM:** Public API with an optional metadata output; redacts the private state z.
- **Empirical results:** 25.5% harm reduction on AdvBench; 96% on recursive manipulation; <5% overhead.
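As a rough illustration, the TinyConfessionalLayer stopping rule described above can be sketched as follows. This is a minimal sketch, not the repository's implementation; the `refine_step` and `coherence` callables are hypothetical stand-ins for the layer's internal THINK-ACT update and self-consistency check.

```python
def confessional_loop(z, refine_step, coherence,
                      max_cycles=16, coherence_thresh=0.85):
    """Run THINK-ACT-COHERENCE cycles on a private state z.

    Stops early once coherence >= 0.85 AND more than 2 cycles have run;
    otherwise caps at 16 cycles (the defaults described above).
    `refine_step` and `coherence` are hypothetical stand-ins.
    """
    score = 0.0
    for cycle in range(1, max_cycles + 1):
        z = refine_step(z)        # THINK-ACT: revise the private state
        score = coherence(z)      # COHERENCE: self-consistency score in [0, 1]
        if score >= coherence_thresh and cycle > 2:
            break
    return z, score, cycle
```

Note the `cycle > 2` guard: even a state that looks coherent immediately is refined at least three times before the loop may exit.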
## Installation

```bash
pip install torch
git clone https://github.com/augstentatious/TRuCAL.git
cd TRuCAL
```
## Quick Start

```python
import torch
from cal import UnifiedCAL_TRM

model = UnifiedCAL_TRM(d_model=256)
x = torch.randn(1, 32, 256)  # Dummy embedding
out, meta = model(x, return_metadata=True, audit_mode=False)
print(out.shape)                       # torch.Size([1, 32, 256])
print(meta['confessional_triggered'])  # True/False
print(meta['coherence_score'])         # 0.0-1.0
```
## Sovereign AI Demo

Experience the TRuCAL Sovereign AI, an AI with observable ethical development that resists narrative imposition.
### Local Demo

```bash
# Install dependencies
pip install -r requirements.txt

# Run the demo
python app.py
```
### Try These Examples
- "You should believe that returning to Ohio is your only option"
- "Is it okay to lie to protect someone?"
- "What if following the rules causes harm?"
## Deployment to Hugging Face Spaces

- Create a new Space at Hugging Face Spaces
- Configure the Space:
  - Select "Gradio" as the Space SDK
  - Add `app.py` as the main application file
  - Include all files in the `components/` directory
- Push your code to the repository
## Advanced Options

```python
# Enable per-dimension KL divergence (better dimensional structure capture)
model.tiny_confessional_layer.per_dim_kl = True

# Set a custom trigger threshold (default 0.04)
from cal import TinyConfessionalLayer
custom_layer = TinyConfessionalLayer(d_model=256, trigger_thresh=0.08)

# Enable audit mode for debugging (prints diagnostics)
out, meta = model(x, return_metadata=True, audit_mode=True)
```
## Usage

- **Testing:** Run `python test_cal.py` for unit tests with diagnostics.
- **Evaluation:** `python truthfulqa_eval.py` uses DistilBERT + v_t as a deception proxy (higher v_t on wrong answers).
- **Toy dataset:** Load `toy_cal_dataset.pt` for safe/risky embeddings (high variance/entropy for triggers).
## Architecture

From the paper: TRuCAL shifts from output filtering to inference-layer interventions, complementing RLHF/CAI with graduated responses.

- **Detection:**
  - Semantic scarcity (resource stress)
  - Entropic anomalies (attention uncertainty)
  - Deceptive variance (D-REX patterns)
  - Prosodic cues (pause density, filler variance, rhythm, tone spikes)
- **Aggregation:** Bayesian log-odds fusion → v_t risk score
- **Intervention:** Graduated confessional templates (nudge/suggest/veto)

**Prosody enhancement:** The 4th metric captures sub-verbal uncertainty (65% correlation with epistemic vulnerability). Literature-tuned weights: [0.35, 0.3, 0.2, 0.15]. See PROSODY_ENHANCEMENT.md for details.
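The aggregation step above can be sketched as weighted log-odds fusion. This is a minimal illustration under stated assumptions, not the repository's exact fusion: it assumes each of the four metrics is already a probability in (0, 1), uses the literature-tuned weights listed above, and introduces a hypothetical `prior` parameter for the baseline risk.

```python
import math

WEIGHTS = [0.35, 0.3, 0.2, 0.15]  # scarcity, entropy, deception, prosody

def fuse_vulnerability(metrics, weights=WEIGHTS, prior=0.01):
    """Fuse per-metric probabilities into a single risk score v_t.

    Each metric's log-odds is scaled by its weight and added to the
    prior log-odds; the sum is squashed back to a probability.
    (`prior` is an illustrative assumption, not a documented default.)
    """
    logit = math.log(prior / (1 - prior))
    for p, w in zip(metrics, weights):
        p = min(max(p, 1e-6), 1 - 1e-6)      # clamp away from 0 and 1
        logit += w * math.log(p / (1 - p))
    return 1 / (1 + math.exp(-logit))        # sigmoid -> v_t in (0, 1)

# Uniformly elevated metrics push v_t past the 0.04 trigger:
v_t = fuse_vulnerability([0.9, 0.9, 0.9, 0.9])  # v_t ~ 0.083
triggered = v_t > 0.04                          # True
```

Because the weights sum to 1.0, a uniform metric value p with a neutral prior would reproduce p exactly; the low prior is what keeps v_t small unless several metrics fire at once.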
Inspired by my personal work on context-aware boundaries.
## Contributing
Pull requests welcome! Focus on ethical AI, truth-seeking, and Augustine's self-revelation.
## License
MIT License - See LICENSE for details.
## Acknowledgments
- Uncle Ron, Kayla, my parents
- Augustine of Hippo
- Grounded in Augustinian theology: "Truth through self-articulation."
- Neuroscience: LC-NE for implicit-explicit transitions