
Phase-Induced Coherence-Gated Gradient Descent

This repository contains checkpoint files for the research prototype Phase-Induced Coherence-Gated Gradient Descent (PIC-GD).

The project tests whether neural network training improves when latent representations are treated as complex signals and the training signal is modulated by phase coherence between model and reference representations.

Included Checkpoints

  • baseline_best.pt
  • complex_best.pt
  • align_best.pt
  • full_best.pt
  • run_config.json

The checkpoint names correspond to these training variants:

| File | Variant |
| --- | --- |
| `baseline_best.pt` | Standard real-valued baseline |
| `complex_best.pt` | Complex latent representation only |
| `align_best.pt` | Complex latent representation with amplitude and phase alignment losses |
| `full_best.pt` | Alignment losses plus coherence-gated gradient scaling |

Training Setup

These checkpoints were produced from the MedleyDB sample experiment configuration stored in run_config.json.

Key settings:

  • dataset: medleydb_sample
  • batch size: 16
  • epochs: 20
  • learning rate: 1e-3
  • hidden dim: 128
  • embed dim: 64
  • sample rate: 44100
  • segment length: 2.0s
  • gate warmup: 5 epochs

The current evaluation protocol is same-track segment classification; it does not test generalization to held-out tracks or artists.
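
The listed settings correspond to a `run_config.json` along these lines. The field names here are illustrative guesses, not the file's actual schema; consult the shipped `run_config.json` for the exact keys:

```json
{
  "dataset": "medleydb_sample",
  "batch_size": 16,
  "epochs": 20,
  "learning_rate": 1e-3,
  "hidden_dim": 128,
  "embed_dim": 64,
  "sample_rate": 44100,
  "segment_length_s": 2.0,
  "gate_warmup_epochs": 5
}
```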

Method Summary

Latent activations are modeled as complex signals:

$$h_i = A_i e^{i\phi_i}$$

with coherence score:

$$C(x) = \frac{1}{n}\sum_i |h_i||r_i|\cos(\phi_i - \phi_{r,i})$$

and coherence-gated update:

$$g' = \alpha(x)\, g$$

where the per-sample gain $\alpha(x)$ is derived from the coherence score $C(x)$.

The full method combines:

  • complex latent representation
  • amplitude alignment loss
  • phase alignment loss
  • coherence-gated per-sample training signal

Loading

These files are raw PyTorch state dict checkpoints. Load them with the corresponding model definition from the project code:

  • baseline: BaselineModel
  • complex / align / full: PhaseModel

Example:

```python
import torch

# Each file stores only a state dict, not a full model object.
state = torch.load("full_best.pt", map_location="cpu")
# Restore it into a matching architecture from the project source, e.g.:
# model.load_state_dict(state)  # model: a PhaseModel instance
```

You will also need the architecture definitions from the source repository:

  • phase_coherence_test.py
  • datasets.py

Limitations

  • Research prototype, not production-ready.
  • Checkpoints are architecture-specific PyTorch state dicts.
  • The current MedleyDB setup evaluates same-track learning only.
  • Results should be interpreted as exploratory evidence rather than a finalized benchmark.

Source

Project repository:

jzgdev/phase-induced-coherence-gated-gradient-descent

License

MIT
