MYRA: SR-TRBM with GPT-Guided Refinement and Analysis

Hybrid energy-based RBM with GPT-guided structural correction


Model Description

MYRA (Model-Yielded Reasoning Architecture) is a hybrid generative system that combines a Restricted Boltzmann Machine (RBM) with GPT-guided refinement and analysis.

The system generates samples using RBM dynamics and improves them through structured edits proposed by a GPT model. These edits are accepted or rejected based on an energy-based criterion, forming a learned refinement loop.

The goal of MYRA is to explore how artificial intelligence can learn structured representations of the world, starting from simple datasets like MNIST and scaling toward more complex data.


Key Features

  • Energy-based RBM sampling
  • GPT-guided structural refinement
  • Embedding-based similarity matching
  • Adaptive blending for correction
  • Stochastic acceptance (MCMC-style)
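The embedding-matching and blending features can be sketched as follows. This is a minimal illustration, not the project's actual API: `nearest_by_cosine` and `adaptive_blend` are hypothetical names, and the similarity-scaled blend weight is an assumption about how "adaptive" blending might work.

```python
import numpy as np

def nearest_by_cosine(candidate, references):
    """Return index and cosine similarity of the reference embedding
    closest to `candidate`."""
    refs = references / np.linalg.norm(references, axis=1, keepdims=True)
    c = candidate / np.linalg.norm(candidate)
    sims = refs @ c
    idx = int(np.argmax(sims))
    return idx, float(sims[idx])

def adaptive_blend(candidate, match, similarity, max_weight=0.5):
    """Blend the candidate toward its matched reference; a closer match
    (higher similarity) applies a stronger correction."""
    w = max_weight * similarity
    return (1.0 - w) * candidate + w * match
```

The key design point is that correction strength is not fixed: a weak match barely perturbs the candidate, so the refinement cannot overwrite samples it has no good reference for.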

⚙️ Installation

Recommended environment

  • Ubuntu 22.04 LTS
  • CUDA 12.x
  • PyTorch 2.x

Install dependencies

pip install -r requirements.txt

⚙️ System Overview of MYRA

MYRA
└── SR-TRBM (Energy-Based Generator)
    └── Refinement (Structural + Embedding)
        └── LLM (GPT)
            └── Interpretation & Analysis
                └── Final Output   ← this model

🧠 Architecture

MYRA combines three main components:

  • SR-TRBM → energy-based generative model
  • MYRA complex refinement → structural correction via embedding matching
  • LLM layer → interpretation and convergence analysis

📦 Project Structure

🧠 Core Engine
└─ srtrbm_project_core.py
  ↳ Energy-based generation (SR-TRBM)
  ↳ Gibbs sampling & thermodynamic dynamics

🤖 LLM Integration
└─ openaiF/
  ├─ client.py → Robust GPT client (retry, fallback)
  └─ gateway.py → Interpretation & reasoning layer

🧩 Refinement System
├─ supplement/cluster.py → Embedding-based matching
└─ correction/ → Energy-aware & spatial refinement

⚙️ Configuration
└─ yaml/ → LLM policies & guidance rules

📊 Analysis & Metrics
└─ analysis/
  ↳ Energy tracking, LPIPS, convergence

📈 Visualization
└─ graphs/
  ↳ Training curves & energy landscapes

📦 Assets
├─ zeta_mnist_hybrid.pt → Pretrained model
└─ stan.dgts → Dataset

🧪 Outputs
└─ artifacts/
  ↳ Generated samples & logs


How It Works

  1. RBM generates initial samples
  2. GPT proposes structural edits (pixel-level)
  3. Edits are evaluated using energy difference (ΔE)
  4. Accepted edits refine the sample

This can be interpreted as:

A learned MCMC proposal distribution guided by a language model.
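The four steps above amount to a Metropolis-style accept/reject on the free-energy difference ΔE. The sketch below is a minimal illustration using the standard Bernoulli-RBM free energy; the function names and the temperature parameter are assumptions, not the repository's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def free_energy(v, W, b_vis, b_hid):
    """Standard Bernoulli-RBM free energy:
    F(v) = -b_vis·v - sum_j softplus(b_hid_j + (v W)_j)."""
    x = b_hid + v @ W
    return -(v @ b_vis) - np.sum(np.logaddexp(0.0, x))

def accept_edit(v, v_edit, W, b_vis, b_hid, temperature=1.0):
    """Metropolis rule: always accept energy-lowering edits (dE < 0),
    otherwise accept with probability exp(-dE / T)."""
    dE = free_energy(v_edit, W, b_vis, b_hid) - free_energy(v, W, b_vis, b_hid)
    if dE < 0 or rng.random() < np.exp(-dE / temperature):
        return v_edit, True
    return v, False
```

Because edits that raise the energy are still occasionally accepted, the loop behaves like an MCMC chain whose proposals come from the GPT model rather than from random perturbations.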


Results

  • Reconstruction Accuracy: ~0.98
  • LPIPS: ~0.15
  • Stable energy dynamics
  • Low collapse risk

Uses

Direct Use

  • Generating structured digit samples
  • Studying hybrid energy-based + LLM systems

Research Use

  • Learned proposal distributions
  • Energy-guided refinement
  • Hybrid generative modeling

Limitations

  • Reduced sample diversity under strong refinement
  • Sensitive to acceptance scaling
  • Depends on GPT consistency

Training Details

Training Data

  • Fashion-MNIST (784-dimensional)

Training Procedure

  • RBM trained via contrastive divergence
  • GPT-guided refinement applied post-generation
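The contrastive-divergence step can be illustrated with a textbook CD-1 update for a Bernoulli RBM. This is a generic sketch under standard RBM assumptions, not the implementation in srtrbm_project_core.py.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, b_hid, lr=0.01):
    """One contrastive-divergence (CD-1) update on a batch of visible vectors."""
    # Positive phase: hidden activations driven by the data.
    h0 = sigmoid(v0 @ W + b_hid)
    # One Gibbs step: sample hidden units, reconstruct visibles, recompute hiddens.
    h_sample = (rng.random(h0.shape) < h0).astype(float)
    v1 = sigmoid(h_sample @ W.T + b_vis)
    h1 = sigmoid(v1 @ W + b_hid)
    # Gradient approximation: data correlations minus model correlations.
    n = v0.shape[0]
    W = W + lr * (v0.T @ h0 - v1.T @ h1) / n
    b_vis = b_vis + lr * (v0 - v1).mean(axis=0)
    b_hid = b_hid + lr * (h0 - h1).mean(axis=0)
    return W, b_vis, b_hid
```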

Evaluation

Metrics

  • Reconstruction MSE
  • LPIPS (perceptual similarity)
  • Energy gap
  • Sample diversity
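Two of these metrics are simple to compute directly; the sketch below shows reconstruction MSE and a pairwise-distance proxy for sample diversity (function names are illustrative; LPIPS and the energy gap require the model and are omitted).

```python
import numpy as np

def reconstruction_mse(x, x_hat):
    """Mean squared error between originals and reconstructions."""
    return float(np.mean((x - x_hat) ** 2))

def sample_diversity(samples):
    """Mean pairwise Euclidean distance across samples;
    values near zero signal mode collapse."""
    n = len(samples)
    dists = [np.linalg.norm(samples[i] - samples[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))
```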

Technical Insight

The system bridges:

  • Energy-based modeling (RBM)
  • Semantic correction (GPT)

The result:

A memory-augmented, energy-aware refinement system.


Files

  • artifacts/ → generated samples and logs
  • srtrbm_project_core.py → main implementation

Citation

cff-version: 1.2.0
title: "MYRA: SR-TRBM with GPT-Guided Refinement"
version: "v1.0.1"
date-released: 2026-03-25

authors:
  - given-names: "Görkem Can"
    family-names: "Süleymanoğlu"

identifiers:
  - type: doi
    value: "10.5281/zenodo.19211121"

links:
  - type: repository
    url: "https://github.com/cagasolu/srtrbm-llm-hybrid"
  - type: model
    url: "https://huggingface.co/cagasoluh/MYRA"

keywords:
  - energy-based-models
  - rbm
  - gpt
  - hybrid-ai
  - generative-model

Contact

Maintained by: Görkem Can Süleymanoğlu
