saann committed
Commit dd9ce7e · 1 Parent(s): 1bb190e

fix: add HF space config to README

Files changed (1)
  1. README.md +8 -203
README.md CHANGED
@@ -1,206 +1,11 @@
- # NeuroScreen 🧠
-
- > Vision-based early screening tool for Autism Spectrum Disorder (ASD) and ADHD symptoms.
- > Built for the Octopi.Health assignment – Sanjay S, April 2026.
-
- ---
-
- ## What it does
-
- NeuroScreen analyzes images, videos, or live webcam captures and produces a clinical-style screening report covering:
-
- - **ASD risk score** (0–100) based on eye contact, facial expression, social smile, and detected behavioral patterns
- - **ADHD risk score** (0–100) based on head movement, gaze span, blink rate, body fidgeting, and optical flow
- - **Grad-CAM heatmap** showing which facial regions influenced the CNN prediction
- - **Signal breakdown** comparing 8 extracted behavioral signals against clinical reference ranges
- - **Plain-language clinical report** with findings and a recommendation
-
- > ⚕️ This is a **screening tool only**, not a medical diagnosis. Always consult a licensed specialist.
-
- ---
-
- ## Architecture
-
- ```
- User input (image / video / webcam)
-          ↓
- FastAPI (main.py)
-          ↓
- Orchestrator (analyzer.py)
-   ├── Layer 1: MediaPipe signals (mediapipe_signals.py)
-   │     eye contact, blink rate, expression range,
-   │     head movement, gaze span, fidget score,
-   │     smile score, optical flow
-   ├── Layer 2: CNN classifier (cnn_classifier.py)
-   │     EfficientNetB0 → 9-class softmax → Grad-CAM
-   └── Layer 3: Clinical scoring (scoring.py)
-         ASD score + ADHD score + risk level + report
-          ↓
- JSON report → frontend (index.html)
- ```
-
- ---
-
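The three-layer flow above can be sketched in a few lines. This is an illustrative stub of how the orchestrator could compose the layers: the function names mirror the module layout in the diagram, but every body and constant here is a placeholder, not the project's actual implementation.

```python
# Illustrative sketch of the analyzer orchestration flow.
# All return values below are hard-coded placeholders, not real model output.

def extract_signals(frames):
    """Layer 1 stub: MediaPipe behavioral signals (mediapipe_signals.py)."""
    return {"eye_contact": 0.4, "blink_rate": 18.0, "fidget_score": 0.2}

def classify(frames):
    """Layer 2 stub: EfficientNetB0 9-class softmax (cnn_classifier.py)."""
    return {"top_class": "avoid_eye_contact", "confidence": 0.71}

def score(signals, prediction):
    """Layer 3 stub: combine both layers into 0-100 risk scores (scoring.py)."""
    asd = 100 * max(prediction["confidence"], 1 - signals["eye_contact"])
    adhd = 100 * max(signals["fidget_score"], signals["blink_rate"] / 60)
    return {"asd_score": round(asd), "adhd_score": round(adhd)}

def analyze(frames):
    """Orchestrator: run all three layers and assemble the JSON report."""
    signals = extract_signals(frames)
    prediction = classify(frames)
    return {"signals": signals, "prediction": prediction,
            **score(signals, prediction)}
```

The point is the shape of the pipeline: signals and the CNN prediction are computed independently, then fused by the scoring layer into the report the frontend renders.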
- ## Project structure
-
- ```
- neuroscreen/
- ├── main.py                          # FastAPI app — all endpoints
- ├── app/
- │   ├── __init__.py
- │   ├── analyzer.py                  # Orchestration layer
- │   ├── mediapipe_signals.py         # 8 behavioral signal extraction
- │   ├── cnn_classifier.py            # EfficientNetB0 + Grad-CAM
- │   └── scoring.py                   # Clinical risk scoring
- ├── model/
- │   ├── train.py                     # Training script
- │   ├── class_names.json             # 9-class labels
- │   ├── best_model.weights.h5        # Best checkpoint
- │   └── neuroscreen_model.weights.h5 # Final weights
- ├── dataset/
- │   ├── train/                       # COCO-format annotations
- │   ├── valid/
- │   └── test/
- ├── templates/
- │   └── index.html                   # Frontend UI
- ├── static/                          # Static assets
- ├── uploads/                         # Temp upload dir (auto-cleaned)
- ├── requirements.txt
- ├── Dockerfile
- └── README.md
- ```
-
  ---
-
- ## Setup & run locally
-
- ### 1. Clone and create a virtual environment
-
- ```bash
- git clone <your-repo-url>
- cd neuroscreen
- python3 -m venv venv
- source venv/bin/activate
- ```
-
- ### 2. Install dependencies
-
- ```bash
- pip install -r requirements.txt
- ```
-
- ### 3. Train the model (or use pre-trained weights)
-
- ```bash
- python model/train.py
- ```
-
- Training uses a 70/15/15 stratified split. Expect roughly 85% validation accuracy after both training phases.
-
- ### 4. Run the server
-
- ```bash
- uvicorn main:app --host 0.0.0.0 --port 8000 --reload
- ```
-
- Visit: [http://localhost:8000](http://localhost:8000)
-
  ---
 
- ## API endpoints
-
- | Method | Path | Description |
- |--------|------|-------------|
- | GET  | `/` | Frontend UI |
- | POST | `/analyze/image` | 1 or more images (multipart `files`) |
- | POST | `/analyze/video` | Single video file (multipart `file`) |
- | POST | `/analyze/webcam` | JSON `{ "frames": ["base64...", ...] }` |
- | POST | `/analyze/combined` | Images + video together |
- | GET  | `/health` | Health check |
-
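For `/analyze/webcam`, the table says the client posts JSON with base64-encoded frames. A minimal stdlib-only payload builder, as one way a client might construct that body (the endpoint path comes from the table; the function name and dummy bytes are illustrative):

```python
import base64
import json

def build_webcam_payload(jpeg_frames):
    """Encode raw JPEG frame bytes into the {"frames": [...]} JSON body."""
    return json.dumps(
        {"frames": [base64.b64encode(f).decode("ascii") for f in jpeg_frames]}
    )

# Dummy bytes standing in for real JPEG data:
payload = build_webcam_payload([b"\xff\xd8fake-jpeg\xff\xd9"])
```

POST the resulting string to `http://localhost:8000/analyze/webcam` with `Content-Type: application/json` (e.g. via `fetch` from the frontend or any HTTP client).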
- ---
-
- ## Model details
-
- | Property | Value |
- |----------|-------|
- | Architecture | EfficientNetB0 + custom head |
- | Input size | 224 × 224 × 3 |
- | Classes | 9 (6 ASD + 1 ADHD + 1 social + 1 normal) |
- | Total params | 4.4M (1.39M trainable) |
- | Training | 2-phase: frozen backbone → fine-tune last 50 layers |
- | Augmentation | Flip, rotation, zoom, brightness, contrast, hue, saturation |
- | Loss | Sparse categorical cross-entropy with label smoothing (0.1) |
- | Regularization | L2 (1e-4) + Dropout (0.5 / 0.4) + class weighting |
-
- ### 9 symptom classes
-
- | Class | Condition |
- |-------|-----------|
- | `avoid_eye_contact` | ASD |
- | `hand_flapping` | ASD |
- | `finger_flapping` | ASD |
- | `spinning` | ASD |
- | `stimming` | ASD |
- | `rocking` | ASD |
- | `continuous_moving` | ADHD |
- | `lack_social_skill` | ASD / Social |
- | `normal` | Control |
-
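The label-smoothing entry in the table means the loss is computed against a softened target rather than a hard one-hot vector: `target = one_hot * (1 - eps) + eps / num_classes`, the standard formulation (as used by Keras's `label_smoothing` argument). A worked pure-Python example over the 9 classes; this illustrates the formula, not the project's training code:

```python
import math

def smoothed_cross_entropy(probs, true_idx, eps=0.1):
    """Cross-entropy against a label-smoothed target distribution:
    target_i = (1 - eps) * one_hot_i + eps / num_classes."""
    k = len(probs)
    loss = 0.0
    for i, p in enumerate(probs):
        target = (1.0 - eps) * (1.0 if i == true_idx else 0.0) + eps / k
        loss -= target * math.log(p)
    return loss

# A 9-class prediction that is confident in the correct class:
probs = [0.92] + [0.01] * 8
plain = smoothed_cross_entropy(probs, 0, eps=0.0)   # equals -log(0.92)
smooth = smoothed_cross_entropy(probs, 0, eps=0.1)  # larger: smoothing
                                                    # penalizes over-confidence
```

With smoothing, a small slice of target mass sits on every wrong class, so an over-confident model keeps paying a penalty; that is the regularizing effect the table's `0.1` setting buys.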
- ---
-
- ## Behavioral signals (MediaPipe layer)
-
- | Signal | Clinical relevance |
- |--------|--------------------|
- | Eye contact duration | Core ASD indicator |
- | Blink rate | ADHD attention marker |
- | Expression range | ASD social communication |
- | Head movement frequency | ADHD hyperactivity |
- | Attention gaze span | ADHD inattention |
- | Body fidgeting | ADHD hyperactivity |
- | Social smile score | ASD emotional reciprocity |
- | Optical flow | ADHD overall movement |
-
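The table does not spell out the formulas behind each signal. For blink rate, a common choice over eye landmarks is the eye aspect ratio (EAR), which collapses toward zero when the lids close. A sketch under that assumption; the landmark pairing and the 0.2 threshold are conventional defaults, not values taken from this repo:

```python
import math

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|) over six eye landmarks:
    p1/p4 are the horizontal eye corners, (p2, p6) and (p3, p5) are the
    upper/lower lid pairs."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def is_blink(ear, threshold=0.2):
    """A frame counts toward a blink when EAR drops below the threshold."""
    return ear < threshold

# Synthetic landmark sets: a wide-open eye vs. a nearly closed one.
open_eye   = eye_aspect_ratio((0, 0), (1, 1),   (3, 1),   (4, 0), (3, -1),   (1, -1))
closed_eye = eye_aspect_ratio((0, 0), (1, 0.1), (3, 0.1), (4, 0), (3, -0.1), (1, -0.1))
```

Counting below-threshold dips per minute of video yields a blink rate that can be compared against a clinical reference range, as the signal breakdown does.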
- ---
-
- ## Docker
-
- ```bash
- docker build -t neuroscreen .
- docker run -p 8000:8000 neuroscreen
- ```
-
- ---
-
- ## Requirements
-
- ```
- fastapi
- uvicorn[standard]
- tensorflow>=2.13
- mediapipe
- opencv-python-headless
- numpy
- Pillow
- scikit-learn
- matplotlib
- python-multipart
- jinja2
- ```
-
- ---
-
- ## Known limitations & future work
-
- - Webcam analysis requires a stable internet connection (frames are sent to the backend as base64)
- - Model accuracy improves significantly with more diverse training data
- - Currently CPU-only; add GPU support for faster video analysis
- - Future: real-time streaming analysis via WebSockets
- - Future: multi-face detection for classroom scenarios
-
- ---
-
- ## Disclaimer
-
- NeuroScreen is a **research and screening tool**. It is not a certified medical device and should never be used as a substitute for professional clinical assessment by a licensed developmental pediatrician or child psychologist.
  ---
+ title: NeuroScreen
+ emoji: 🧠
+ colorFrom: blue
+ colorTo: purple
+ sdk: docker
+ pinned: false
  ---

+ # NeuroScreen
+ CNN-based neurodevelopmental screening tool using MediaPipe and TensorFlow.