arxiv:2512.20929

Decoding Predictive Inference in Visual Language Processing via Spatiotemporal Neural Coherence

Published on Dec 24, 2025
Authors:

Abstract

Human language processing relies on the brain's capacity for predictive inference. We present a machine learning framework for decoding neural (EEG) responses to dynamic visual language stimuli in Deaf signers. Using coherence between neural signals and optical flow-derived motion features, we construct spatiotemporal representations of predictive neural dynamics. Through entropy-based feature selection, we identify frequency-specific neural signatures that differentiate interpretable linguistic input from linguistically disrupted (time-reversed) stimuli. Our results reveal distributed left-hemispheric and frontal low-frequency coherence as key features in language comprehension, with experience-dependent neural signatures correlating with age. This work demonstrates a novel multimodal approach for probing experience-driven generative models of perception in the brain.

AI-generated summary

A machine learning framework decodes neural responses to visual language stimuli in Deaf signers, identifying frequency-specific neural signatures that distinguish linguistic from non-linguistic input through spatiotemporal analysis and entropy-based feature selection.
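
The abstract outlines a two-stage pipeline: spectral coherence between EEG channels and an optical-flow-derived motion signal, followed by entropy-based selection of the coherence features most informative about the stimulus condition. The sketch below is an illustrative reconstruction of that idea, not the authors' implementation; the sampling rate, frequency bands, channel count, the helper names band_coherence and rank_features, and the use of scipy.signal.coherence and scikit-learn's mutual_info_classif are all assumptions made for the example.

# Minimal sketch (not the paper's code) of coherence features plus
# entropy-based (mutual-information) feature ranking. Shapes, bands,
# and names below are illustrative assumptions.
import numpy as np
from scipy.signal import coherence
from sklearn.feature_selection import mutual_info_classif

FS = 256          # assumed EEG sampling rate (Hz)
N_CHANNELS = 64   # assumed montage size
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13)}  # assumed low-frequency bands


def band_coherence(eeg, motion, fs=FS, bands=BANDS):
    """Coherence between each EEG channel and the stimulus motion time series,
    averaged within each frequency band.
    eeg: (n_channels, n_samples); motion: (n_samples,) optical-flow motion energy."""
    feats = []
    for ch in eeg:
        f, cxy = coherence(ch, motion, fs=fs, nperseg=fs * 2)
        for lo, hi in bands.values():
            mask = (f >= lo) & (f < hi)
            feats.append(cxy[mask].mean())
    return np.asarray(feats)  # (n_channels * n_bands,)


def rank_features(X, y, top_k=20):
    """Entropy-based selection: rank coherence features by mutual information
    with the condition label (intact vs. time-reversed signing)."""
    mi = mutual_info_classif(X, y, random_state=0)
    return np.argsort(mi)[::-1][:top_k], mi


# Toy usage with synthetic data standing in for real trials.
rng = np.random.default_rng(0)
n_trials, n_samples = 40, FS * 10
X = np.vstack([
    band_coherence(rng.standard_normal((N_CHANNELS, n_samples)),
                   rng.standard_normal(n_samples))
    for _ in range(n_trials)
])
y = rng.integers(0, 2, n_trials)  # 1 = intact signing, 0 = time-reversed
top_idx, mi = rank_features(X, y)
print("Top coherence features by mutual information:", top_idx[:5])

In this reading, each trial yields one coherence value per channel-band pair, and mutual information with the intact-versus-reversed label stands in for the entropy-based feature selection described in the abstract.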

Get this paper in your agent:

hf papers read 2512.20929
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper: 0
Datasets citing this paper: 1
Spaces citing this paper: 0
Collections including this paper: 0