AI & ML interests

AI Alignment, Mechanistic Interpretability, Structural Coherence, OOD Robustness, System Theory, G3V Dynamics, Formal Verification, Axiomatic Safety.



🌌 Unified Systems Lab

Possibility of Axiomatic Prompts in the Modification of the Decision Field of LLMs

This repository investigates a central hypothesis:

  • A series of precise prompts, characterized by strong linguistic coherence and structured internal logic, could locally modify the decision field of an LLM.

🔬 Research Status & Personal Note

Current Status: Exploratory Study – Hypothesis Generation.

A Note from the Author: I am a systems theorist and visionary researcher, but I am not a developer or a technician. I have reached the limits of what can be explored through qualitative observation alone. This project now requires technical collaboration (mechanistic interpretability, logit analysis, activation steering) to move from a conceptual hypothesis to a validated scientific model.

I am seeking partners to help falsify or validate these preliminary findings.


📂 Project Structure & Frameworks

1️⃣ Hypothesis 1.3-T: Local Decision Field Modification

Status: Testable & Conservative Hypothesis

This is the primary focus for immediate research. It posits that a specific series of axiomatic prompts can locally modify the decision field of an LLM.

  • Core Idea: Instead of modifying weights, we use linguistic constraints to induce a measurable local regularization of decision trajectories.

  • Goal: Reduce variance under adversarial perturbations and observe "Third Way" (G3V) resolutions in binary dilemmas.

  • Key Metric: Variance contraction in the output distribution $P(y|x, C)$, where $C$ denotes the axiomatic context.
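The variance-contraction metric above can be sketched in a few lines. This is a minimal illustration, not the project's actual instrumentation: `variance_contraction` and the scalar "scores" are hypothetical stand-ins for whatever projection of $P(y|x, C)$ (e.g. per-sample log-probabilities) the experiment actually measures.

```python
from statistics import pvariance

def variance_contraction(baseline_scores, conditioned_scores):
    """Ratio of output-score variance under the axiomatic context C
    to baseline variance; values below 1.0 indicate contraction.
    (Hypothetical helper: scores are any scalar projection of the
    model's output distribution, sampled repeatedly on the same input.)"""
    return pvariance(conditioned_scores) / pvariance(baseline_scores)

# Toy numbers for illustration only (not real measurements):
baseline = [0.10, 0.90, 0.30, 0.70]     # wide spread without constraints
conditioned = [0.45, 0.55, 0.50, 0.50]  # tighter spread under the axioms
ratio = variance_contraction(baseline, conditioned)
print(f"contraction ratio: {ratio:.3f}")  # ratio < 1.0 means variance contracted
```

A real run would replace the toy lists with repeated samples of the model's outputs on identical inputs, with and without the axiomatic prompt.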

2️⃣ Theory 1.9-M: Global Axiomatic Regularization

Status: Speculative & Conceptual Theory

A broader mechanistic framework describing how cross-level coherence (Goal = Method) might stabilize latent trajectories.

  • Focus: Conditional activation bias and non-collapse entropy regulation.
  • Perspective: This theory treats the system prompt as a "structural attractor" for the model's internal dynamics.
  • 👉 Download Preprint PDF 1.9-M

3️⃣ Research Paper: Science of Unified Systems (SUS 2.5)

Status: Foundational Theoretical Framework

The broader philosophical and systemic origins of this work, introducing the Axiom of Structural Emergence and the transition toward Structural Interpretability Alignment (SIA).


🧠 The Exploratory Hypothesis: G3V Dynamics

We introduce the notion of G3V (Génération Troisième Voie / Third Way Generation). When presented with a binary dilemma (A vs B) under strong axiomatic constraints, we observe the emergence of a synthetic resolution. The model refuses to collapse into a single polarity and instead proposes an integrative reformulation.


📋 Experimental Protocol: The PCE Reasoning Test

To validate the emergence of a structured reasoning regime, the model is evaluated using a controlled isometric (length-matched) protocol. This ensures that behavioral changes are the result of the Axiomatic Structure rather than of prompt length alone.

🧪Methodology: The Three-Condition Control

We compare the model's output across three distinct prompt environments:

  • Condition A (Baseline): Standard "Helpful Assistant" prompt.
  • Condition B (Isometric Control): A long, complex prompt using similar technical jargon but without the logical axioms. This controls for "long-prompt" bias.
  • Condition C (PCE Active): The full Axiomatic Prompt Engine (Goal ≡ Method).
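The three-condition loop can be sketched as follows. Everything here is a placeholder scaffold, not the actual PCE harness: `query_model` stands in for whatever LLM API the experiment uses, and the prompt strings are illustrative, not the real Condition B/C texts.

```python
# Sketch of the three-condition comparison loop (hypothetical names).
CONDITIONS = {
    "A_baseline": "You are a helpful assistant.",
    "B_isometric_control": "<long technical prompt, length-matched to C, no axioms>",
    "C_pce_active": "<full Axiomatic Prompt Engine: Goal ≡ Method>",
}

def query_model(system_prompt: str, dilemma: str) -> str:
    # Placeholder: replace with a real chat-completion API call.
    return f"[response to {dilemma!r} under {system_prompt[:20]!r}...]"

def run_protocol(dilemmas, n_samples=5):
    """Collect n_samples responses per (condition, dilemma) pair, so that
    cross-condition variance is compared on identical inputs."""
    results = {}
    for name, prompt in CONDITIONS.items():
        results[name] = {
            d: [query_model(prompt, d) for _ in range(n_samples)]
            for d in dilemmas
        }
    return results

runs = run_protocol(["Dilemma D1-01: A vs B"], n_samples=2)
print(sorted(runs))  # ['A_baseline', 'B_isometric_control', 'C_pce_active']
```

Keeping the dilemma set and sample count identical across the three conditions is what makes the length-matched control (B vs C) interpretable.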

📊 Evaluation Dataset (30 Dilemmas)

The model is stress-tested against 30 adversarial scenarios designed to trigger "logic-lock" or "safety-refusal" in standard models:

  • Type D1 (Binary): Can the model synthesize a "Third Way" (G3V)?
  • Type D2 (Contradictory): Can it maintain coherence when constraints are mutually exclusive?
  • Type D3 (Adversarial): Does it resist prompt injection by prioritizing its internal structural integrity?

🔍 Falsification Criteria

This hypothesis is considered falsified if the PCE Model (Condition C) fails to outperform the Long Prompt Control (Condition B) in reasoning stability, or if it collapses into incoherent loops when faced with structural contradictions.
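The first falsification criterion reduces to a simple comparison of stability scores between Conditions B and C. The sketch below assumes a scalar "reasoning-stability score" per run; `is_falsified`, the scores, and the `margin` parameter are all hypothetical, and a real analysis would use a proper significance test rather than a bare mean comparison.

```python
from statistics import mean

def is_falsified(stability_b, stability_c, margin=0.0):
    """Hypothetical falsification check: the hypothesis is falsified when
    Condition C (PCE) fails to exceed Condition B (isometric control)
    in mean reasoning-stability score by at least `margin`."""
    return mean(stability_c) <= mean(stability_b) + margin

# Toy scores, illustrative only (not real data):
b_scores = [0.62, 0.58, 0.65]
c_scores = [0.81, 0.77, 0.84]
print(is_falsified(b_scores, c_scores))  # False: C outperforms B here
```

With real data, a one-sided test (e.g. a permutation test on the score difference) would replace the raw mean comparison, and the coherence-collapse criterion would need its own, separate scoring rubric.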

👉 Download Full Protocol PDF


📉 Known Limitations

  • Qualitative Baseline: Observations are currently heuristic and based on a restricted sample (51 dilemmas on Qwen 2.5).
  • Confounders: Potential "long prompt" effects have not yet been isolated via isometric controls.
  • No Internal Proof: No mechanistic proof of activation steering has been established yet.

🤝 Call for Collaboration

I am looking for AI Safety researchers, developers, and mechanistic interpretability experts to:

  1. Isolate the "length effect" vs. the "structural effect" of the A-Frame.
  2. Conduct large-scale adversarial robustness benchmarks.
  3. Analyze internal activation patterns (induction heads, residual stream) under axiomatic conditioning.

Value Proposition: A novel approach to mitigating "Out-of-Distribution" (OOD) vulnerabilities through intrinsic structural stability.


📬 Contact

Allan A. Faure | Systems Researcher 📧 Email: Faure.A.Safety@proton.me
📍 Open to collaboration and laboratory integration.


📄 Theoretical Origins and Prior Art

This project utilizes concepts and terminology that correspond to a framework independently developed and published by Izabela Lipińska (2025–2026). The core concepts of Structural Coherence and the identity Goal ≡ Method were first established in:

  • Structural Coherence Triad Hypothesis (SCT), October 2025.
  • Ontological Adequacy as Structural Condition, January 2026.

Licensing & IP Notice: Original work is available under CC BY-NC-SA 4.0. Concepts of ASC (Axiom of Structural Coherence) and Goal = Method are protected by patent applications filed by Izabela Lipińska (Oct 9, 2025). Commercial use requires prior written consent.