---
tags:
- self-supervised-learning
- neuroimaging
- fMRI
- pytorch
---

# NeuroCLR

**NeuroCLR** is a self-supervised learning (SSL) framework for learning **robust, disorder-agnostic neural representations** from raw, unlabeled resting-state fMRI (rs-fMRI) regional time series. NeuroCLR is designed for **multi-site generalization** and **transfer** to downstream disorder classification with limited labeled data.

[[GitHub Repo](https://github.com/pcdslab/NeuroCLR)] | [[Cite](#citation)]

---

## Abstract

Self-supervised learning (SSL) is a powerful technique in computer vision for drastically reducing dependence on large amounts of labeled training data. The availability of large-scale, unannotated rs-fMRI data provides opportunities to develop superior machine-learning models for disorder classification across heterogeneous sites and diverse subjects. In this paper, we propose NeuroCLR, a novel self-supervised learning (SSL) framework. NeuroCLR extracts robust and rich invariant neural representations, consistent across diverse experimental subjects and disorders, using contrastive principles, spatially constrained learning, and augmented views of unlabeled raw fMRI time-series data. We pre-trained NeuroCLR on a combination of heterogeneous disorders spanning more than 3,600 participants across 44 different sites and 720,000 region-specific fMRI time series. The resulting disorder-agnostic pre-trained model is fine-tuned for downstream disorder-specific classification tasks on limited labeled data. We evaluate NeuroCLR on diverse disorder classification tasks and find that it outperforms both deep-learning and SSL models trained on a single disorder. Experiments also confirm robust generalizability, with NeuroCLR consistently outperforming baselines across neuroimaging sites. This study is the first to present a robust and reproducible self-supervised methodology with an anatomically consistent contrastive objective that operates on raw, unlabeled fMRI data and is capable of reliable transfer across diagnostic categories. This will cultivate stronger participation by computational and clinical researchers, setting the stage for the development of sophisticated diagnostic models for various neurodegenerative and neurodevelopmental disorders, leveraging NeuroCLR.

---

## Model Structure

This repository provides **two loadable model artifacts**:

- **Root model (default)**  
  Self-supervised **pretraining encoder + projector** (contrastive SSL)

- **`classification/` subfolder**  
  Encoder + **ResNet1D classification head** for downstream tasks

All models rely on **custom architectures**, so `trust_remote_code=True` is required.

---

## Model Details

### 1) Pretraining Model (Default, Loaded from Repo Root)

- **Input**: region-wise rs-fMRI time series  
  Shape: **`[B, 1, L]`**, where `L = 128` time points
- **Output**:
  - `h`: pooled representation, shape **`[B, 128]`**
  - `z`: projected representation, shape **`[B, projector_out_dim]`**

This model is intended for:
- representation learning
- feature extraction
- transfer learning
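The pooled representation `h` is a per-region feature vector, which suggests one natural bridge to the classification model: encode each region's time series and stack the results into an ROI-by-feature tensor. The sketch below illustrates only the tensor bookkeeping, using a random stand-in encoder rather than the real NeuroCLR weights (which would be loaded via `AutoModel.from_pretrained` as shown in the Usage section):

```python
import torch
import torch.nn as nn

# Stand-in encoder: maps [B, 1, L] time series to a pooled [B, 128] embedding,
# mimicking the shape contract of the NeuroCLR pretraining encoder's `h` output.
encoder = nn.Sequential(
    nn.Conv1d(1, 128, kernel_size=7, padding=3),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
)

n_rois, n_timepoints = 200, 128
subject = torch.randn(n_rois, 1, n_timepoints)  # one subject's regional series

with torch.no_grad():
    h = encoder(subject)        # [n_rois, 128] per-region embeddings

features = h.unsqueeze(0)       # [1, n_rois, 128], the classifier's input shape
print(features.shape)           # torch.Size([1, 200, 128])
```

Batched subjects would simply stack along the first dimension to give `[B, n_rois, 128]`.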

---

### 2) Classification Model (`classification/`)

- **Input**: ROI-by-time representation  
  Shape: **`[B, N_ROIs, 128]`** (e.g., `N_ROIs = 200`)
- **Output**:
  - `logits`: shape **`[B, num_labels]`**
  - `loss`: returned when labels are provided

> **Note**  
> The encoder is bundled with the classification model and may be frozen by default (recommended).  
> See the GitHub repository for training and fine-tuning scripts.
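If you want to control freezing yourself, the usual PyTorch pattern is to disable gradients on the backbone while leaving the head trainable. The sketch below uses a toy module with an `encoder`/`head` split; the actual attribute names in the released model may differ, so check the model's `named_parameters()` first:

```python
import torch.nn as nn

# Toy stand-in with the same encoder/head split as the classification model.
class ToyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(128, 64)  # pretrained backbone (to be frozen)
        self.head = nn.Linear(64, 2)       # task head (stays trainable)

model = ToyClassifier()

# Freeze the encoder: its parameters receive no gradient updates.
for p in model.encoder.parameters():
    p.requires_grad = False

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)  # only head parameters remain trainable
```

Passing only the trainable parameters to the optimizer (e.g. `filter(lambda p: p.requires_grad, model.parameters())`) then completes the fine-tuning setup.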

---

## Usage (PyTorch)
```python
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "SaeedLab/NeuroCLR",
    trust_remote_code=True
)

model.eval()

x = torch.randn(4, 1, 128)  # [batch, 1, time_points]

with torch.no_grad():
    outputs = model(x)

print(outputs["h"].shape)  # [4, 128]
print(outputs["z"].shape)
```


### Load the Downstream Classification Model
```python
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "SaeedLab/NeuroCLR",
    subfolder="classification",
    trust_remote_code=True
)

model.eval()

x = torch.randn(4, 200, 128)  # [batch, n_rois, embedding_dim]
labels = torch.tensor([0, 1, 0, 1])

with torch.no_grad():
    outputs = model(x, labels=labels)

print(outputs["logits"].shape)  # [4, 2]
print(outputs["loss"])
```

## Citation

The paper is under review. As soon as it is accepted, we will update this section.


## Contact

For any additional questions or comments, contact Fahad Saeed (fsaeed@fiu.edu).