SID2000 committed on
Commit 3522a4a · verified · parent: 9635a89

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +49 -54
README.md CHANGED
@@ -1,9 +1,29 @@
  # CortexLab
 
  Enhanced multimodal fMRI brain encoding toolkit built on [Meta's TRIBE v2](https://github.com/facebookresearch/tribev2).
 
  CortexLab extends TRIBE v2 with streaming inference, interpretability tools, cross-subject adaptation, brain-alignment benchmarking, and cognitive load scoring.
 
  ## Features
 
  | Feature | Description |
@@ -17,22 +37,18 @@ CortexLab extends TRIBE v2 with streaming inference, interpretability tools, cro
 
  ## Prerequisites
 
- The pretrained TRIBE v2 model uses **LLaMA 3.2-3B** as its text encoder. You must accept Meta's LLaMA license before using it:
 
- 1. Visit [llama.meta.com](https://llama.meta.com/) and accept the license
  2. Request access on [HuggingFace](https://huggingface.co/meta-llama/Llama-3.2-3B)
  3. Authenticate: `huggingface-cli login`
 
  ## Installation
 
  ```bash
- pip install -e "."
-
- # With optional dependencies
- pip install -e ".[plotting]"  # Brain visualization
- pip install -e ".[training]"  # PyTorch Lightning training
- pip install -e ".[analysis]"  # RSA/CKA benchmarking (scipy)
- pip install -e ".[dev]"       # Testing and linting
  ```
 
  ## Quick Start
@@ -55,7 +71,6 @@ from cortexlab.analysis import BrainAlignmentBenchmark
  bench = BrainAlignmentBenchmark(brain_predictions, roi_indices=roi_indices)
  result = bench.score_model(clip_features, method="rsa")
  print(f"Alignment: {result.aggregate_score:.3f}")
- print(f"V1 alignment: {result.roi_scores['V1']:.3f}")
  ```
 
  ### Cognitive Load Scoring
@@ -66,40 +81,18 @@ from cortexlab.analysis import CognitiveLoadScorer
  scorer = CognitiveLoadScorer(roi_indices)
  result = scorer.score_predictions(predictions)
  print(f"Overall load: {result.overall_load:.2f}")
- print(f"Visual complexity: {result.visual_complexity:.2f}")
- print(f"Language processing: {result.language_processing:.2f}")
- ```
-
- ### Streaming Inference
-
- ```python
- from cortexlab.inference import StreamingPredictor
-
- sp = StreamingPredictor(model._model, window_trs=40, step_trs=1, device="cuda")
- for features in live_feature_stream():
-     pred = sp.push_frame(features)
-     if pred is not None:
-         visualize(pred)  # (n_vertices,)
- ```
-
- ### Modality Attribution
-
- ```python
- from cortexlab.inference import ModalityAttributor
-
- attributor = ModalityAttributor(model._model, roi_indices=roi_indices)
- scores = attributor.attribute(batch)
- # scores["text"], scores["audio"], scores["video"] -> (n_vertices,)
  ```
 
- ### Cross-Subject Adaptation
 
- ```python
- from cortexlab.core.subject import SubjectAdapter
-
- adapter = SubjectAdapter.from_ridge(model._model, calibration_loader, regularization=1e-3)
- new_subject_id = adapter.inject_into_model(model._model)
- ```
 
  ## Architecture
 
@@ -113,24 +106,26 @@ src/cortexlab/
    viz/      Brain surface visualization (nilearn, pyvista)
  ```
 
- ## Development
-
- ```bash
- pip install -e ".[dev]"
- pytest tests/ -v
- ruff check src/ tests/
- ```
-
  ## License
 
- CC BY-NC 4.0 (inherited from TRIBE v2). See [LICENSE](LICENSE) and [NOTICE](NOTICE).
 
- This project is for **non-commercial use only**. The pretrained weights are hosted by Meta at [facebook/tribev2](https://huggingface.co/facebook/tribev2) and are not redistributed by this project.
 
- ## Acknowledgements
 
- Built on [TRIBE v2](https://github.com/facebookresearch/tribev2) by Meta FAIR.
 
- > d'Ascoli et al., "A foundation model of vision, audition, and language for in-silico neuroscience", 2026.
 
- See [NOTICE](NOTICE) for full attribution and third-party licenses.
+ ---
+ license: cc-by-nc-4.0
+ library_name: cortexlab
+ tags:
+ - neuroscience
+ - fmri
+ - brain-encoding
+ - multimodal
+ - tribe-v2
+ - brain-alignment
+ - cognitive-load
+ language:
+ - en
+ pipeline_tag: other
+ ---
+
  # CortexLab
 
  Enhanced multimodal fMRI brain encoding toolkit built on [Meta's TRIBE v2](https://github.com/facebookresearch/tribev2).
 
  CortexLab extends TRIBE v2 with streaming inference, interpretability tools, cross-subject adaptation, brain-alignment benchmarking, and cognitive load scoring.
 
+ ## What This Repo Contains
+
+ This is a **code-only** repository. It does not contain pretrained weights. The pretrained TRIBE v2 model is hosted by Meta at [`facebook/tribev2`](https://huggingface.co/facebook/tribev2).
+
  ## Features
 
  | Feature | Description |
 
 
  ## Prerequisites
 
+ The pretrained TRIBE v2 model uses **LLaMA 3.2-3B** as its text encoder. You must:
 
+ 1. Accept Meta's LLaMA license at [llama.meta.com](https://llama.meta.com/)
  2. Request access on [HuggingFace](https://huggingface.co/meta-llama/Llama-3.2-3B)
  3. Authenticate: `huggingface-cli login`
 
  ## Installation
 
  ```bash
+ git clone https://github.com/siddhant-rajhans/cortexlab.git
+ cd cortexlab
+ pip install -e ".[analysis]"
  ```
 
  ## Quick Start
 
  bench = BrainAlignmentBenchmark(brain_predictions, roi_indices=roi_indices)
  result = bench.score_model(clip_features, method="rsa")
  print(f"Alignment: {result.aggregate_score:.3f}")
  ```
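For intuition, `method="rsa"` suggests the benchmark compares representational geometries. Below is a minimal standalone sketch of RSA scoring with NumPy/SciPy; the function name, shapes, and choice of distance metric are illustrative assumptions, not CortexLab's internal implementation.

```python
# Hedged sketch of RSA (representational similarity analysis) scoring.
# Not CortexLab's actual code; names and shapes are assumptions.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_score(brain_responses: np.ndarray, model_features: np.ndarray) -> float:
    """Compare the representational geometry of two (n_stimuli, n_dims) arrays."""
    # Representational dissimilarity matrices: pairwise correlation distance
    # between stimuli, as condensed upper-triangle vectors.
    brain_rdm = pdist(brain_responses, metric="correlation")
    model_rdm = pdist(model_features, metric="correlation")
    # Spearman rank correlation between the two RDMs is the alignment score.
    rho, _ = spearmanr(brain_rdm, model_rdm)
    return float(rho)

rng = np.random.default_rng(0)
features = rng.standard_normal((20, 64))
# A noisy linear readout of the same features should align well with them.
brain = features @ rng.standard_normal((64, 512)) + 0.1 * rng.standard_normal((20, 512))
print(f"RSA alignment: {rsa_score(brain, features):.3f}")
```

Because RSA operates on pairwise distances rather than raw activations, the two inputs can have different dimensionalities, which is why a 512-vertex prediction can be scored against 64-dimensional model features.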
 
  ### Cognitive Load Scoring
 
  scorer = CognitiveLoadScorer(roi_indices)
  result = scorer.score_predictions(predictions)
  print(f"Overall load: {result.overall_load:.2f}")
  ```
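The README does not specify how the load metric is computed. As a hedged illustration, one simple ROI-level summary is the mean absolute predicted activation per ROI; everything below (function name, signature, the metric itself) is hypothetical, not CortexLab's actual scorer.

```python
# Illustrative sketch only: one plausible ROI-level "load" summary.
# The real CognitiveLoadScorer may compute something different.
import numpy as np

def roi_load_scores(predictions: np.ndarray,
                    roi_indices: dict[str, np.ndarray]) -> dict[str, float]:
    """predictions: (n_timepoints, n_vertices) predicted fMRI signal."""
    # Load per ROI = mean absolute predicted response over that ROI's
    # vertices, averaged across time; larger means stronger engagement.
    return {
        roi: float(np.abs(predictions[:, idx]).mean())
        for roi, idx in roi_indices.items()
    }

rng = np.random.default_rng(1)
preds = rng.standard_normal((100, 1000))
preds[:, :50] *= 3.0  # make the first ROI's vertices respond more strongly
rois = {"V1": np.arange(50), "A1": np.arange(50, 100)}
scores = roi_load_scores(preds, rois)
print(scores)  # V1 scores higher than A1 because its vertices respond more
```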
 
+ ## Compute Requirements
+
+ | Component | VRAM | Notes |
+ |---|---|---|
+ | TRIBE v2 encoder | ~1 GB | Small (1.15M params) |
+ | LLaMA 3.2-3B (text) | ~8 GB | Features cached after first run |
+ | V-JEPA2 (video) | ~6 GB | Features cached after first run |
+ | Wav2Vec-BERT (audio) | ~3 GB | Features cached after first run |
+
+ Minimum: a GPU with 16 GB of VRAM for full inference. CPU inference works but is slow. The analysis tools (benchmarking, cognitive load scoring) need no GPU when run on precomputed predictions.
 
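The "features cached after first run" pattern in the table above can be sketched as a small on-disk cache: expensive encoder features are computed once and reloaded thereafter. The path layout and function names below are illustrative assumptions, not CortexLab's API.

```python
# Minimal sketch of disk-caching expensive encoder features.
# Names and layout are illustrative, not CortexLab's actual API.
from pathlib import Path
import tempfile
import numpy as np

def cached_features(cache_dir: Path, key: str, compute_fn) -> np.ndarray:
    """Return cached features for `key`, computing and saving them on a miss."""
    path = cache_dir / f"{key}.npy"
    if path.exists():
        return np.load(path)   # cache hit: skip the expensive encoder
    feats = compute_fn()       # cache miss: run the encoder once
    cache_dir.mkdir(parents=True, exist_ok=True)
    np.save(path, feats)
    return feats

cache = Path(tempfile.mkdtemp())
calls = []
def fake_encoder():            # stand-in for an expensive model forward pass
    calls.append(1)
    return np.ones((4, 8))

a = cached_features(cache, "llama_text", fake_encoder)
b = cached_features(cache, "llama_text", fake_encoder)
print(len(calls))  # 1 — the encoder ran only on the first call
```

This is why the VRAM figures in the table only apply to the first pass over a stimulus set; subsequent runs read cached arrays from disk.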
  ## Architecture
 
    viz/      Brain surface visualization (nilearn, pyvista)
  ```
 
  ## License
 
+ CC BY-NC 4.0 (non-commercial use only), inherited from TRIBE v2.
 
+ This project does not redistribute pretrained weights. Users must download weights directly from [`facebook/tribev2`](https://huggingface.co/facebook/tribev2).
 
+ ## Citation
 
+ If you use CortexLab in your research, please cite the original TRIBE v2 paper:
+
+ ```bibtex
+ @article{dascoli2026tribe,
+   title={A foundation model of vision, audition, and language for in-silico neuroscience},
+   author={d'Ascoli, St{\'e}phane and others},
+   year={2026}
+ }
+ ```
 
+ ## Links
 
+ - **GitHub**: [siddhant-rajhans/cortexlab](https://github.com/siddhant-rajhans/cortexlab)
+ - **TRIBE v2**: [facebookresearch/tribev2](https://github.com/facebookresearch/tribev2)
+ - **Pretrained weights**: [facebook/tribev2](https://huggingface.co/facebook/tribev2)