arxiv:2404.09256

GPT2MEG: Quantizing MEG for Autoregressive Generation

Published on Apr 14, 2024
Authors:

Abstract

A GPT-2-style Transformer, adapted to continuous neural data through signal quantization and channel, subject, and task-condition embeddings, generates realistic multichannel MEG time series with better statistical fidelity than WaveNet variants and linear autoregressive baselines.

AI-generated summary

Foundation models trained with self-supervised objectives are increasingly applied to brain recordings, but autoregressive generation of realistic multichannel neural time series remains comparatively underexplored, particularly for Magnetoencephalography (MEG). We study (i) modified multichannel WaveNet variants and (ii) a GPT-2-style Transformer, autoregressively trained by next-step prediction on unlabelled MEG. For the Transformer, we propose a simple quantization/tokenization and embedding scheme (channel, subject, and task-condition embeddings) that repurposes a language-model architecture for continuous, high-rate multichannel time series and enables conditional simulation of task-evoked activity. Across forecasting, long-horizon generation, and downstream decoding, GPT2MEG more faithfully reproduces temporal, spectral, and task-evoked statistics of real MEG than WaveNet variants and linear autoregressive baselines, and scales to multiple subjects via subject embeddings. Code available at https://github.com/ricsinaruto/MEG-transfer-decoding.
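
To make the quantization/tokenization and embedding scheme concrete, below is a minimal PyTorch sketch. It is not the authors' implementation (see the linked repository for that): the mu-law quantizer, the 256-bin vocabulary, the 306-sensor channel count, and the subject and condition counts are all illustrative assumptions, with mu-law chosen because it is the standard companding step in WaveNet-style models.

import math
import torch
import torch.nn as nn

# All sizes below are assumptions for illustration, not the paper's settings.
N_BINS = 256          # assumed vocabulary size for the quantized signal
N_CHANNELS = 306      # typical MEG sensor count; an assumption here
N_SUBJECTS = 10       # hypothetical number of training subjects
N_CONDITIONS = 4      # hypothetical number of task conditions
D_MODEL = 768         # GPT-2 small hidden size

def mu_law_quantize(x: torch.Tensor, n_bins: int = N_BINS) -> torch.Tensor:
    """Map a signal normalized to [-1, 1] onto integer tokens in [0, n_bins)."""
    mu = n_bins - 1
    y = torch.sign(x) * torch.log1p(mu * x.abs()) / math.log1p(mu)
    return ((y + 1) / 2 * mu).round().long()

class MEGEmbedding(nn.Module):
    """Sum token, channel, subject, and condition embeddings per time step."""
    def __init__(self):
        super().__init__()
        self.token = nn.Embedding(N_BINS, D_MODEL)
        self.channel = nn.Embedding(N_CHANNELS, D_MODEL)
        self.subject = nn.Embedding(N_SUBJECTS, D_MODEL)
        self.condition = nn.Embedding(N_CONDITIONS, D_MODEL)

    def forward(self, tokens, channel_ids, subject_id, condition_id):
        # tokens:      (batch, seq) integer-quantized MEG samples
        # channel_ids: (batch, seq) sensor index of each token
        h = self.token(tokens) + self.channel(channel_ids)
        h = h + self.subject(subject_id).unsqueeze(1)     # broadcast over time
        h = h + self.condition(condition_id).unsqueeze(1)
        return h  # feed into a standard GPT-2 decoder stack

# Example: quantize a fake normalized MEG trace and embed it.
x = torch.tanh(torch.randn(2, 128))  # stand-in signal scaled to [-1, 1]
tokens = mu_law_quantize(x)
emb = MEGEmbedding()
h = emb(tokens,
        channel_ids=torch.zeros(2, 128, dtype=torch.long),
        subject_id=torch.tensor([0, 1]),
        condition_id=torch.tensor([2, 0]))
print(h.shape)  # torch.Size([2, 128, 768])

In a setup like this, the decoder never sees raw floats: each continuous sample becomes a discrete token, and the channel, subject, and condition embeddings are summed into the token embedding, which is what lets a single language-model architecture be conditioned across sensors, subjects, and task conditions.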
