---
title: Aphasia fMRI VAE Analysis
emoji: 🧠
colorFrom: blue
colorTo: pink
sdk: gradio
sdk_version: 5.20.1
app_file: app.py
pinned: false
---
# Aphasia fMRI to FC Analysis using VAE
This demo performs functional connectivity analysis on fMRI data using a Variational Autoencoder (VAE) approach. It's designed to work with aphasia patient data, analyzing brain connectivity patterns and their relationship to demographic variables.
## About the Model
This application implements a VAE model that:
1. Takes functional connectivity (FC) matrices derived from fMRI data
2. Learns a lower-dimensional latent representation of brain connectivity
3. Conditions the generation process on demographic variables (age, sex, time post-stroke, WAB scores)
4. Allows analysis of relationships between brain connectivity patterns and demographic variables
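The conditioning scheme described above can be sketched as a minimal PyTorch model. This is an illustrative sketch, not the app's actual architecture: layer sizes are assumed, FC matrices are taken to be vectorized (upper triangle) before encoding, and the demographic covariates are concatenated to both the encoder input and the latent code.

```python
import torch
import torch.nn as nn

class ConditionalVAE(nn.Module):
    """Sketch of a conditional VAE over vectorized FC matrices.

    `fc_dim` is the length of the flattened upper triangle of an FC
    matrix (264 * 263 / 2 = 34716 for the Power 264 atlas); `cond_dim`
    is the number of demographic covariates (e.g. age, sex, mpo,
    wab_aq). Layer widths here are illustrative.
    """
    def __init__(self, fc_dim, cond_dim, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(fc_dim + cond_dim, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, fc_dim))

    def forward(self, fc, cond):
        h = self.encoder(torch.cat([fc, cond], dim=1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = self.decoder(torch.cat([z, cond], dim=1))
        return recon, mu, logvar

def vae_loss(recon, fc, mu, logvar):
    """Standard VAE objective: reconstruction error plus KL term."""
    recon_loss = nn.functional.mse_loss(recon, fc, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl
```

Conditioning both the encoder and decoder on the covariates is what lets the trained decoder later generate FC matrices for chosen demographic values.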
## Dataset
This demo uses the [SreekarB/OSFData](https://huggingface.co/datasets/SreekarB/OSFData) dataset from HuggingFace, which contains:
- Resting-state fMRI data as NIfTI files (named like `P01_rs.nii`)
- Demographic information directly in the dataset:
  - ID: Subject identifier
  - wab_aq: Aphasia quotient score (severity measure)
  - age: Subject age
  - mpo: Months post onset
  - education: Years of education
  - gender: Subject gender
  - handedness: Subject handedness (ignored in this analysis)
The application processes the NIfTI files using the Power 264 atlas to create functional connectivity matrices that are then analyzed by the VAE model.
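As a rough sketch of the FC construction step, assuming ROI time series have already been extracted at the Power 264 coordinates (the extraction itself is not shown), the correlation and vectorization could look like this; the Fisher z-transform is a common choice here but is an assumption, not something the app is confirmed to do:

```python
import numpy as np

def fc_from_timeseries(ts, eps=1e-7):
    """Compute a functional connectivity matrix from ROI time series.

    `ts` is (n_timepoints, n_rois), e.g. mean signals at the Power 264
    coordinates. Returns the Fisher z-transformed Pearson correlation
    matrix with zeros on the diagonal.
    """
    corr = np.corrcoef(ts.T)                 # (n_rois, n_rois) Pearson r
    np.fill_diagonal(corr, 0.0)              # drop self-correlations
    corr = np.clip(corr, -1 + eps, 1 - eps)  # keep arctanh finite
    return np.arctanh(corr)                  # Fisher z-transform

def vectorize_upper(fc):
    """Flatten the upper triangle (excluding the diagonal) for model input."""
    iu = np.triu_indices_from(fc, k=1)
    return fc[iu]
```

For 264 ROIs the vectorized upper triangle has 34,716 entries, which is what the VAE would consume as its input vector.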
## How to Use
1. **Configure Parameters**:
- **Data Source**: By default, it uses the SreekarB/OSFData HuggingFace dataset
- **Latent Dimensions**: Controls the size of the latent space (default: 32)
- **Number of Epochs**: Training iterations (default: 200 for demo)
- **Batch Size**: Training batch size (default: 16)
2. **Start Training**:
- Click the "Start Training" button to begin the analysis
- The training progress will be displayed in the Status area
3. **View Results**:
- The VAE will learn latent representations of brain connectivity
- Results will show correlations between demographic variables and latent brain patterns
- The visualization shows original FC, reconstructed FC, and a new FC matrix generated from specific demographic values
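The "generated FC" step above amounts to sampling from the latent prior and decoding it alongside chosen demographic values. A minimal sketch, assuming a trained decoder that maps the concatenation `[z, cond]` to a flattened FC matrix (a hypothetical interface, not the app's exact one):

```python
import torch

@torch.no_grad()
def generate_fc(decoder, latent_dim, cond, n_samples=1):
    """Sample new FC vectors for fixed demographic values.

    `decoder` maps [z, cond] to a flattened FC matrix; `cond` is a
    1-D tensor of covariates such as age, sex, mpo, and wab_aq.
    """
    z = torch.randn(n_samples, latent_dim)       # sample the prior
    c = cond.unsqueeze(0).expand(n_samples, -1)  # repeat covariates per sample
    return decoder(torch.cat([z, c], dim=1))
```

Varying `cond` while holding `z` fixed is one way to visualize how a demographic variable alone shifts the generated connectivity pattern.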
## Outputs
The application produces visualizations showing:
- Original FC matrix
- Reconstructed FC matrix
- Generated FC matrix (based on specific demographic inputs)
- Correlation plots between latent variables and demographic features
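The correlation plots boil down to a Pearson correlation between each latent dimension and each demographic covariate. A minimal sketch of that computation (assuming the encoder means are used as each subject's latent representation):

```python
import numpy as np

def latent_demographic_correlations(latents, demos):
    """Pearson r between every latent dimension and every covariate.

    `latents` is (n_subjects, latent_dim) of encoder means; `demos` is
    (n_subjects, n_covariates). Returns an (latent_dim, n_covariates)
    matrix of correlation coefficients.
    """
    # Z-score each column, then the normalized dot product is Pearson r.
    lz = (latents - latents.mean(0)) / latents.std(0)
    dz = (demos - demos.mean(0)) / demos.std(0)
    return lz.T @ dz / len(latents)
```

Each cell of the returned matrix corresponds to one point in the correlation plots, e.g. how strongly latent dimension 5 tracks wab_aq across subjects.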
## Technical Details
- Framework: PyTorch
- Interface: Gradio
- Dataset: HuggingFace Datasets API
- Analysis: Custom implementation of a conditional VAE with demographic conditioning