# πŸ”¬ LWM Spectrogram Research Framework

**Core Research Focus**: A comprehensive research framework for validating the effectiveness of Large Wireless Model (LWM) representation learning on spectrogram data.

This project systematically compares and analyzes **performance differences between using and not using LWM**.

## πŸ—οΈ Research Framework Structure

### πŸ“‚ **3-Stage Research Pipeline**
```
pretraining/        →  downstream/          →  baselines/
     ↓                      ↓                      ↓
LWM Training           Transfer Learning      Direct Learning
(Representation)       (Embedding Usage)      (Raw Data)
```

### 🎯 **Core Research Questions**
- **LWM Representation Learning Effectiveness**: How much does a pre-trained LWM help with spectrogram understanding?
- **Transfer Learning Efficiency**: How much do LWM embeddings improve downstream task performance?
- **Cost-Benefit Trade-off**: Does the performance gain from adding LWM justify the extra computational cost?

### πŸš€ **Key Features**
- **Comprehensive Comparative Analysis**: LWM usage vs non-usage performance comparison
- **Multiple Model Support**: ResNet, MobileNet, EfficientNet, SqueezeNet, etc.
- **Real-time Monitoring**: tqdm-based progress tracking and performance metrics
- **Automatic Visualization**: Training curves and performance metric graphs
- **Organized Result Storage**: Timestamp-based folder structure

## πŸ“‹ Requirements

```
Python 3.10+
PyTorch 2.0.0+
torchvision 0.15.0+
torchaudio 2.0.0+
CUDA 12.1+ (recommended for GPU training)
```

**Core Dependencies:**
- torch>=2.0.0, torchvision>=0.15.0, torchaudio>=2.0.0 (PyTorch ecosystem)
- numpy>=1.21.0,<2.0, scipy>=1.15.3, scikit-learn>=1.6.1 (scientific computing)
- matplotlib>=3.10.3, seaborn>=0.11.0 (visualization)
- tqdm>=4.67.1 (progress bars)
- DeepMIMO>=4.0.0b9 (wireless channel modeling)
- umap-learn>=0.5.7 (dimensionality reduction)

## πŸ› οΈ Installation

### Option 1: Recommended (pyproject.toml) - Standard Installation
```bash
git clone https://github.com/yourusername/lwm-spectro.git
cd lwm-spectro

# Install in editable mode (recommended for development)
pip install -e .

# Install DeepMIMO separately (important for compatibility)
pip install --no-deps "DeepMIMO>=4.0.0b9"

# Or install with development dependencies
pip install -e ".[dev]"

# For GPU support on CUDA systems
pip install -e ".[gpu]"
```

### Option 1.5: Lambda/AWS Optimized Installation
```bash
git clone https://github.com/yourusername/lwm-spectro.git
cd lwm-spectro

# Install with Lambda-compatible versions
pip install -e ".[lambda]"

# Install DeepMIMO separately (important for Lambda compatibility)
pip install --no-deps "DeepMIMO>=4.0.0b9"

# Verify installation
python -c "
import torch
import numpy as np
import matplotlib
import deepmimo as dm
print('βœ… Lambda installation successful!')
print(f'PyTorch: {torch.__version__}')
print(f'NumPy: {np.__version__}')
print(f'matplotlib: {matplotlib.__version__}')
print(f'DeepMIMO: {dm.__version__}')
"
```

### Option 2: Legacy (requirements.txt)
```bash
git clone https://github.com/yourusername/lwm-spectro.git
cd lwm-spectro

# Install from requirements.txt
pip install -r requirements.txt
```

### Option 3: Conda Environment
```bash
# If you have conda environment 'lwm' set up:
conda activate lwm
pip install -e .
```

### Server Environment Setup (NumPy 2.3 Compatible)
For remote servers running NumPy 2.3+:
```bash
# Update system and install basic dependencies
sudo apt update && sudo apt install -y git python3.12 python3.12-venv

# Clone and install
git clone https://github.com/yourusername/lwm-spectro.git
cd lwm-spectro

# Create virtual environment with Python 3.12
python3.12 -m venv venv
source venv/bin/activate

# Install with server-compatible versions (NumPy 2.3+ support)
pip install -e .

# Or install from requirements.txt (updated for server compatibility)
pip install -r requirements.txt

# For CUDA support (if GPU available)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

# Verify NumPy version compatibility
python -c "
import numpy as np
import numba
print(f'βœ… NumPy version: {np.__version__}')
print(f'βœ… Numba version: {numba.__version__}')
print('βœ… Server environment ready!')
"
```

### Conda Environment Setup (Recommended for Server)
```bash
# Create conda environment from provided environment.yml
conda env create -f environment.yml
conda activate lwm-spectro-server

# Verify installation
python -c "
import torch
import numpy as np
import numba
import matplotlib
print('βœ… All dependencies installed successfully!')
print(f'PyTorch: {torch.__version__}')
print(f'NumPy: {np.__version__}')
print(f'Numba: {numba.__version__}')
print(f'Matplotlib: {matplotlib.__version__}')
"
```

### Troubleshooting NumPy Compatibility Issues
If you encounter NumPy/numba compatibility issues:
```bash
# Option 1: Update numba to latest version (supports NumPy 2.3)
pip install --upgrade numba

# Option 2: Downgrade NumPy if needed (not recommended for server)
pip install 'numpy>=1.21.0,<2.3.0'

# Option 3: Use conda environment (recommended)
conda env create -f environment.yml
conda activate lwm-spectro-server

# Option 4: Force reinstall with compatible versions
pip uninstall numpy numba -y
pip install 'numpy>=1.21.0,<2.4.0' 'numba>=0.59.0'
```

## 🎯 Usage

### Basic Training

Train with all available spectrogram sizes:
```bash
python mcs_classification.py
```

### Specify Spectrogram Size

Train with 32x32 spectrograms only:
```bash
python mcs_classification.py 32x32
```

Train with 128x128 spectrograms only:
```bash
python mcs_classification.py 128x128
```

### Choose Model Architecture

Use different model architectures:
```bash
# Custom lightweight CNN (default)
python mcs_classification.py 32x32 --model custom

# ResNet18 (balanced performance)
python mcs_classification.py 32x32 --model resnet18

# MobileNetV2 (lightweight mobile-optimized)
python mcs_classification.py 32x32 --model mobilenet_v2

# EfficientNet-B0 (efficient architecture)
python mcs_classification.py 32x32 --model efficientnet_b0

# SqueezeNet (minimal memory usage)
python mcs_classification.py 32x32 --model squeezenet
```

## πŸ“Š Data Structure

The project uses spectrogram data stored in pickle files:

- **32x32 Spectrograms**: Generated with a 128-point FFT (smaller inputs, faster training)
- **128x128 Spectrograms**: Generated with a 512-point FFT (larger inputs, more spectral detail)

Each pickle file contains:
```python
{
    'spectrograms': numpy.ndarray,  # Shape: (1000, H, W)
    'configuration': dict          # Metadata
}
```
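The schema above can be loaded and sanity-checked with a short script. The file name below is illustrative; actual pickle files live under the `spectrograms/` directory, and paths depend on your dataset layout:

```python
import pickle

import numpy as np

# Illustrative path; substitute the actual pickle file for your dataset.
PICKLE_PATH = "spectrograms_32x32.pkl"


def load_spectrogram_pickle(path):
    """Load a spectrogram pickle and validate the expected schema."""
    with open(path, "rb") as f:
        data = pickle.load(f)
    assert set(data) >= {"spectrograms", "configuration"}
    specs = data["spectrograms"]
    assert specs.ndim == 3  # expected shape: (N, H, W)
    return specs, data["configuration"]


if __name__ == "__main__":
    specs, config = load_spectrogram_pickle(PICKLE_PATH)
    print(f"{specs.shape[0]} spectrograms of size {specs.shape[1]}x{specs.shape[2]}")
```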

## 🏷️ MCS Classes (7 Classes)

| MCS Level | Modulation | Code Rate | Description |
|-----------|------------|-----------|-------------|
| 0 | QPSK | 1/2 | Basic QPSK with 1/2 code rate |
| 1 | QPSK | 3/4 | QPSK with 3/4 code rate |
| 2 | QAM16 | 1/2 | 16-QAM with 1/2 code rate |
| 3 | QAM16 | 3/4 | 16-QAM with 3/4 code rate |
| 4 | QAM64 | 1/2 | 64-QAM with 1/2 code rate |
| 5 | QAM64 | 2/3 | 64-QAM with 2/3 code rate |
| 6 | QAM64 | 3/4 | 64-QAM with 3/4 code rate |
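The table corresponds to a simple lookup. The bits-per-symbol values and the derived information rate below are standard modulation facts, not taken from the project code:

```python
from fractions import Fraction

# MCS index -> (modulation, bits per symbol, code rate), mirroring the table above.
MCS_TABLE = {
    0: ("QPSK", 2, Fraction(1, 2)),
    1: ("QPSK", 2, Fraction(3, 4)),
    2: ("QAM16", 4, Fraction(1, 2)),
    3: ("QAM16", 4, Fraction(3, 4)),
    4: ("QAM64", 6, Fraction(1, 2)),
    5: ("QAM64", 6, Fraction(2, 3)),
    6: ("QAM64", 6, Fraction(3, 4)),
}


def info_bits_per_symbol(mcs):
    """Information bits carried per modulated symbol = bits/symbol x code rate."""
    _, bits, rate = MCS_TABLE[mcs]
    return float(bits * rate)
```

For example, MCS 6 (64-QAM, rate 3/4) carries `info_bits_per_symbol(6)` = 4.5 information bits per symbol, versus 1.0 for MCS 0.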

## πŸ“ Output Files

Training automatically creates organized output in `models/[model_type]_[size]/`:

### Model Files
- `best.pth` - Best performing model (highest F1 score)
- `latest.pth` - Most recent model checkpoint

### Configuration & Logs
- `config.json` - Complete training configuration and dataset info
- `training_history.json` - Epoch-by-epoch metrics (loss, F1 scores, accuracy)
- `model_summary.txt` - Model architecture and parameter count
- `performance_metrics.json` - Final evaluation results

### Visualizations
- `training_curves.png` - Combined loss, F1, and accuracy curves
- `loss_curves.png` - Loss curves only
- `f1_progression.png` - F1 score progression over epochs
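The exact schema of `training_history.json` is defined by the training script; assuming it stores a list of per-epoch records, the best checkpoint's epoch can be located like this (the field names are illustrative — check an actual file from your run):

```python
import json


def best_epoch(history_path, metric="val_f1_macro"):
    """Return (epoch, metric value) for the best epoch in a training history file.

    Assumes a list of per-epoch dicts, e.g.
    [{"epoch": 1, "train_loss": 1.2, "val_loss": 0.9, "val_f1_macro": 0.65}, ...]
    Field names are illustrative, not guaranteed by the project.
    """
    with open(history_path) as f:
        history = json.load(f)
    best = max(history, key=lambda rec: rec[metric])
    return best["epoch"], best[metric]
```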

## πŸ”§ Model Architectures

### Custom CNN
- Lightweight custom convolutional neural network
- Optimized for speed and minimal resource usage
- Perfect for quick experiments and Apple Silicon

### ResNet (18/34)
- Residual Network architectures
- Excellent balance of performance and computational efficiency
- Pre-trained weights available

### MobileNet (V2/V3)
- Mobile-optimized architectures
- Extremely efficient for resource-constrained environments
- V3 includes neural architecture search optimizations

### EfficientNet-B0
- Compound scaling for optimal efficiency
- Balances network depth, width, and resolution
- State-of-the-art efficiency

### SqueezeNet
- Extreme compression with minimal parameter count
- Maintains accuracy with dramatically reduced model size
- Ideal for deployment on edge devices

## πŸŽ›οΈ Configuration Options

### Training Parameters
- `batch_size`: Batch size for training (default: 32)
- `learning_rate`: Initial learning rate (default: 0.001)
- `num_epochs`: Maximum training epochs (default: 50)
- `patience_f1`: Epochs to wait for F1 improvement (default: 10)
- `patience_loss`: Epochs to wait for loss decrease (default: 5)

### Early Stopping Criteria
- F1 score improvement threshold (default: 0.001)
- Validation loss stagnation detection
- Configurable patience parameters

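A minimal sketch of the dual-criterion early stopping described above, using the threshold and patience defaults from this section. The class is illustrative, not the project's implementation; in particular, it stops only when *both* criteria trigger, which the training script may combine differently:

```python
class EarlyStopper:
    """Stop when F1 stops improving AND validation loss stops decreasing."""

    def __init__(self, patience_f1=10, patience_loss=5, f1_threshold=0.001):
        self.patience_f1, self.patience_loss = patience_f1, patience_loss
        self.f1_threshold = f1_threshold
        self.best_f1 = float("-inf")
        self.best_loss = float("inf")
        self.f1_wait = 0
        self.loss_wait = 0

    def step(self, val_f1, val_loss):
        """Record one epoch's metrics; return True if training should stop."""
        if val_f1 > self.best_f1 + self.f1_threshold:
            self.best_f1, self.f1_wait = val_f1, 0
        else:
            self.f1_wait += 1
        if val_loss < self.best_loss:
            self.best_loss, self.loss_wait = val_loss, 0
        else:
            self.loss_wait += 1
        return self.f1_wait >= self.patience_f1 and self.loss_wait >= self.patience_loss
```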

## πŸ“ˆ Real-time Monitoring

The training process provides live updates:

```
πŸš€ MCS Classification Training (Size: 32x32, Model: custom)
πŸ“Š Dataset Info:
   Total samples: 147,000
   Training samples: 117,600
   Validation samples: 29,400
   Classes: 7 (QPSK 1/2, QPSK 3/4, QAM16 1/2, QAM16 3/4, QAM64 1/2, QAM64 2/3, QAM64 3/4)

πŸ—οΈ Model Info:
   Type: custom
   Total parameters: 619,015
   Trainable parameters: 619,015

πŸƒ Training with Smart Early Stopping...
Epoch 1/50 - Train Loss: 1.234, Val Loss: 0.987, F1: 0.654, F1 Improvement: 0.654
Epoch 2/50 - Train Loss: 0.876, Val Loss: 0.765, F1: 0.712, F1 Improvement: 0.058
...
```

## 🎯 Evaluation Metrics

The system evaluates using comprehensive F1 metrics:

- **F1 Macro**: Unweighted mean of per-class F1 scores
- **F1 Micro**: Global F1 from pooled per-sample counts (equals accuracy in this single-label setting, so it is dominated by majority classes)
- **F1 Weighted**: Weighted mean by class support
- **Per-class F1**: Individual F1 scores for each MCS class
- **Accuracy**: Overall classification accuracy
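For clarity on how the averages relate, here is a pure-Python illustration (the project presumably relies on scikit-learn's `f1_score` rather than code like this):

```python
from collections import Counter


def f1_scores(y_true, y_pred, num_classes=7):
    """Per-class, macro, and micro F1 from label lists (illustrative implementation)."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    per_class = {}
    for c in range(num_classes):
        denom = 2 * tp[c] + fp[c] + fn[c]
        per_class[c] = 2 * tp[c] / denom if denom else 0.0
    macro = sum(per_class.values()) / num_classes  # unweighted mean over classes
    # Micro F1 pools counts over all classes; for single-label multiclass
    # problems every error is one FP and one FN, so micro F1 equals accuracy.
    micro = sum(tp.values()) / len(y_true)
    return per_class, macro, micro
```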

## πŸ” Project Structure

```
lwm-spectro/
β”œβ”€β”€ __init__.py              # Package initialization
β”œβ”€β”€ pyproject.toml           # Modern Python packaging (recommended)
β”œβ”€β”€ requirements.txt         # Legacy dependency management
β”œβ”€β”€ README.md                # This documentation
β”œβ”€β”€ .gitignore              # Git ignore rules
β”œβ”€β”€ lwm_spectrogram_adapter.py  # Main LWM adapter script
β”œβ”€β”€ mcs_classification.py   # MCS classification training
β”œβ”€β”€ mcs_classifier_lwm.py   # LWM-based MCS classifier
β”œβ”€β”€ train_32x32.py         # 32x32 model training
β”œβ”€β”€ train_128x128.py       # 128x128 model training
β”œβ”€β”€ utils.py               # Utility functions
β”œβ”€β”€ pretrained_model.py    # Pretrained model definitions
β”œβ”€β”€ train_heads_config.py  # Training configuration
β”œβ”€β”€ check_cuda.py         # CUDA availability checker
β”œβ”€β”€ models/               # Auto-generated model outputs
β”‚   β”œβ”€β”€ custom_32x32/    # Model-specific folders
β”‚   β”œβ”€β”€ resnet18_32x32/
β”‚   └── ...
β”œβ”€β”€ spectrograms/         # Spectrogram data directory
β”‚   └── city_0_newyork/  # Dataset (not in repo)
β”‚       β”œβ”€β”€ LTE/
β”‚       β”œβ”€β”€ QPSK/
β”‚       β”œβ”€β”€ QAM16/
β”‚       └── QAM64/
```

## πŸš€ Research Workflow Execution Guide

### 1️⃣ **Step 1: LWM Pretraining** (Representation Learning)
```bash
# Pre-train LWM model with spectrogram data
cd pretraining
python lwm_spectrogram_adapter.py --max-samples-per-size 1000

# Results: LWM checkpoints saved in models/ folder
```

### 2️⃣ **Step 2: Downstream Tasks** (Transfer Learning)
```bash
# Classification using pre-trained LWM embeddings
cd downstream
python mcs_classification.py --model-type resnet18 --target-size 32x32

# Compare performance with different models
python mcs_classification.py --model-type efficientnet_b0 --target-size 128x128
```

### 3️⃣ **Step 3: Baselines** (Direct Learning Comparison)
```bash
# Raw spectrogram classification without LWM
cd baselines
python baseline_models_training.py --target-size 32x32
```

### πŸ”„ **Complete Comparative Workflow**
```bash
# 1. Measure baseline performance
cd baselines && python baseline_models_training.py

# 2. LWM pre-training
cd ../pretraining && python lwm_spectrogram_adapter.py

# 3. Measure LWM-enhanced performance
cd ../downstream && python mcs_classification.py --model-type resnet18

# 4. Compare and analyze results
```

## πŸ“Š Research Methodology

### **Evaluation Metrics**
- **F1 Score**: Macro, Micro, Weighted averages
- **Accuracy**: Overall classification accuracy
- **Training Efficiency**: Convergence speed, memory usage comparison

### **Experimental Design**
- **Controlled Comparison**: LWM vs. no-LWM evaluated on identical data splits
- **Model Diversity**: Validate LWM effectiveness across different architectures
- **Data Size Comparison**: Compare 32x32 vs 128x128 spectrograms

### **Key Features**
- **Apple Silicon Optimization**: MPS support for efficient training on Mac
- **Single-channel Input Optimization**: All models specialized for spectrograms
- **Automatic Result Storage**: Timestamp-based organized result management
- **Real-time Monitoring**: tqdm-based progress tracking
- **Automated Visualization**: Training curves and performance metric graphs

## 🌐 Hugging Face Model Hub

This project is available on Hugging Face for easy access and deployment!

### πŸ“¦ Model Card & Documentation

We've prepared comprehensive documentation for Hugging Face Hub:

- **MODEL_CARD.md**: Complete model description, architecture, usage examples, and results
- **example_inference.py**: Ready-to-use inference scripts with visualization
- **HUGGINGFACE_GUIDE.md**: Step-by-step upload guide

### πŸš€ Quick Upload to Hugging Face

```bash
# Method 1: Using Bash script (recommended)
./upload_to_huggingface.sh

# Method 2: Using Python script
python upload_to_huggingface.py --username YOUR_USERNAME --upload-type minimal
```

Upload types:
- `minimal`: README + MoE checkpoint only
- `models`: All trained models
- `full`: Entire codebase

### πŸ“š Documentation Files

| File | Description |
|------|-------------|
| `MODEL_CARD.md` | Main Hugging Face model card (README) |
| `HUGGINGFACE_GUIDE.md` | Detailed upload instructions |
| `HUGGINGFACE_SUMMARY.md` | Quick reference guide |
| `example_inference.py` | Inference code examples |
| `upload_to_huggingface.sh` | Automated upload script (Bash) |
| `upload_to_huggingface.py` | Automated upload script (Python) |
| `.gitattributes` | Git LFS configuration |

For more details, see **[HUGGINGFACE_SUMMARY.md](HUGGINGFACE_SUMMARY.md)**.

## 🀝 Contributing

Contributions are welcome! Please feel free to submit issues and pull requests.

## πŸ“„ License

This project is open source. Please check the license file for details.