# DeepCSIv2

This is the implementation of the INFOCOM 25 Workshop (DeepWireless 25) paper [DeepCSIv2: Radio Fingerprinting of Wi-Fi Devices Through MIMO Compressed Channel Feedback](https://ieeexplore.ieee.org/abstract/document/11152893).

<br/>

### We present DeepCSIv2, a data-driven radio fingerprinting (RFP) algorithm that characterizes Wi-Fi devices acting as stations (STAs) at the physical layer. DeepCSIv2 is based on a neural network architecture that automatically extracts the STA's radio fingerprint from the feedback captured over the air and identifies the device.
<br/>

<p align="center">
  <img src="Images/DeepCSI-overview.png"
       alt="DeepCSIv2 overview"
       style="float: center;" />
</p>

If you find the project useful and you use this code, please cite our paper:

```
@inproceedings{meneghello2025radio,
  title={Radio Fingerprinting of Wi-Fi Devices Through MIMO Compressed Channel Feedback},
  author={Meneghello, Francesca and Haque, Khandaker Foysal and Restuccia, Francesco},
  booktitle={IEEE INFOCOM 2025 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)},
  pages={1--6},
  year={2025},
  organization={IEEE}
}
```
<br/>
<br/>

### Please [download the dataset](https://huggingface.co/datasets/foysalhaque/DeepCSIv2/tree/main) and keep it in the ```DeepCSIv2/input_files``` directory. You can directly download the Vmatrices, or download the raw traces and extract the Vmatrices yourself. The extraction procedure is also provided.

### Extract the beamforming feedback with the [Wi-BFI tool](https://github.com/kfoysalhaque/Wi-BFI)
```bash
python main_extract_batch.py '<trace_dir>' '<standard>' '<MIMO>' '<Antenna-config>' '<BW>' '<packets to process>' '<vmatrices_dir>' '<bfa_dir>'
```
Example:
```bash
python main_extract_batch.py /data/pcap_traces AX MU 4x2 80 5000 /data/output/vmatrices /data/output/bfa
```
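When several capture campaigns need to be extracted, the command above can be scripted. A minimal sketch that only assembles the command lines; the directory names and default argument values here are illustrative assumptions, not part of the repository:

```python
import shlex

def build_extract_cmd(trace_dir, standard="AX", mimo="MU", antennas="4x2",
                      bw=80, n_packets=5000,
                      vmat_dir="/data/output/vmatrices", bfa_dir="/data/output/bfa"):
    """Assemble the Wi-BFI batch-extraction command for one trace directory."""
    args = [trace_dir, standard, mimo, antennas, str(bw), str(n_packets),
            vmat_dir, bfa_dir]
    return "python main_extract_batch.py " + " ".join(shlex.quote(a) for a in args)

# One command per capture campaign (paths are hypothetical).
cmds = [build_extract_cmd(d) for d in ["/data/pcap_day1", "/data/pcap_day2"]]
print(cmds[0])
```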

### Beamforming feedback matrices dataset creation
The following script creates the datasets of beamforming feedback matrices.
```bash
python create_dataset.py <'directory of input data'> <'number of different positions'> <'maximum number of samples to consider'> <'prefix to identify the data'> <'folder to save the dataset'> <'select random indices (`rand`) or subsample the data (`sampling`)'>
```
e.g.,
- python create_dataset.py ./input_files/ 3 6000 _ ./dataset/ sampling
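The last argument controls how the available samples are reduced to the requested maximum. A sketch of the two selection strategies, assuming they behave as their names suggest (the actual logic lives in `create_dataset.py`):

```python
import numpy as np

def select_indices(n_available, n_max, mode, seed=0):
    """Pick at most n_max of n_available sample indices."""
    n_keep = min(n_available, n_max)
    if mode == "rand":
        # Random subset of the available packets.
        rng = np.random.default_rng(seed)
        return np.sort(rng.choice(n_available, size=n_keep, replace=False))
    elif mode == "sampling":
        # Uniform subsampling: keep evenly spaced packets, preserving time order.
        return np.linspace(0, n_available - 1, n_keep).astype(int)
    raise ValueError(mode)

print(select_indices(10, 4, "sampling"))  # [0 3 6 9]
```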

### Train the learning algorithm for fingerprinting and test the performance
The following script trains and tests DeepCSIv2 on the scenarios identified in the reference paper, based on the arguments passed as input.
The fingerprinting is beamformee-specific: one model has to be trained on the feedback matrices collected from each of the beamformees.
```bash
python learning.py <'directory of the beamforming feedback matrices dataset'> <'number of different positions'> <'name for the model to be saved'> <'number of transmitter antennas'> <'number of receiver antennas'> <'indices of the transmitter antennas to consider, comma separated'> <'indices of the receiver antennas to consider, comma separated'> <'bandwidth'> <'model type in {`convolutional`, `attention`}'> <'prefix to identify the data'> <'scenario considered in {S1, S2, S3, S4}'>
```
e.g.,
- python learning.py ./dataset/ 3 finger_ 4 2 0,1,2,3 0 160 attention _ S1
- python learning.py ./dataset/ 3 finger_ 4 2 0,1,2,3 0 160 attention_hyper_selection-128,128,128,128-7,7,7,5 _ hyper
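The comma-separated antenna-index arguments select which spatial streams of the feedback matrices feed the network. Assuming the V-matrices are stored as `(samples, subcarriers, N_tx, N_rx)` arrays (this layout is illustrative, not taken from the repository), the selection amounts to:

```python
import numpy as np

# Parse the CLI arguments "0,1,2,3" (tx) and "0" (rx) into index lists.
tx_idx = [int(i) for i in "0,1,2,3".split(",")]
rx_idx = [int(i) for i in "0".split(",")]

# Dummy V-matrix batch: 100 samples, 500 subcarriers, 4x2 antenna config.
v = np.zeros((100, 500, 4, 2), dtype=np.complex64)
v_sel = v[:, :, tx_idx, :][:, :, :, rx_idx]
print(v_sel.shape)  # (100, 500, 4, 1)
```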

### Test the performance of an already trained algorithm
```bash
python learning_test.py <'directory of the beamforming feedback matrices dataset'> <'number of different positions'> <'name for the model to be saved'> <'number of transmitter antennas'> <'number of receiver antennas'> <'indices of the transmitter antennas to consider, comma separated'> <'indices of the receiver antennas to consider, comma separated'> <'bandwidth'> <'model type in {`convolutional`, `attention`}'> <'prefix to identify the data'> <'scenario considered in {S1, S2, S3, S4}'>
```
e.g.,
- python learning_test.py ./dataset/ 3 finger_rev_ 4 2 0,1,2,3 0 160 attention _ S1
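Identification performance of this kind of classifier is usually summarized with an accuracy score and a confusion matrix over the device identities. A self-contained sketch with made-up labels (the actual metrics are computed inside `learning_test.py`):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows: true device identity; columns: predicted identity."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

y_true = np.array([0, 0, 1, 1, 2, 2])  # ground-truth STA identities (dummy)
y_pred = np.array([0, 0, 1, 2, 2, 2])  # model predictions (dummy)
cm = confusion_matrix(y_true, y_pred, 3)
acc = np.trace(cm) / cm.sum()          # fraction of correctly identified devices
print(acc)
```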

#### For any question or query, please contact [Foysal Haque](https://kfoysalhaque.github.io/) at **haque.k@northeastern.edu**