<h1 style="text-align:center;"> The RFUAV DATASET </h1>

## Abstract

This is the official repository for our paper *"[RFUAV: A Benchmark Dataset for Unmanned Aerial Vehicle Detection and Identification](https://arxiv.org/abs/2503.09033)"*. RFUAV offers a comprehensive benchmark dataset for radio-frequency (RF)-based drone detection and identification.



In addition to the dataset, we provide the raw data used to generate the spectral information, which includes recordings from 35 different types of drones under high signal-to-noise ratio (SNR) conditions. This dataset is available to all researchers working with RF data for drone analysis. Researchers can apply the deep learning methods we provide, or use traditional signal-processing techniques such as decoding, demodulation, and FFT analysis.

Detailed information about the dataset, including file size (the total data volume for each drone), SNR (the highest SNR in each dataset), and the middle frequency (the center frequency used during data collection for each drone), is provided in the figure below.



We analyzed the properties of each drone in the dataset, including: Frequency Hopping Signal Bandwidth (FHSBW), Frequency Hopping Signal Duration Time (FHSDT), Video Transmitted Signal Bandwidth (VSBW), Frequency Hopping Signal Duty Cycle (FHSDC), and Frequency Hopping Signal Pattern Period (FHSPP). The distributions of these properties are plotted below. More detailed information can be found in our paper.



With RFUAV, you can perform drone signal detection and drone identification directly on the raw IQ data, like this:
<div style="text-align:center;">

</div>

## 1.Quick start

<details>
<summary>Installation</summary>

```shell
pip install -r requirements.txt
```

</details>

<details>
<summary>Classify the drone with utils.benchmark</summary>

```shell
python inference.py
```

</details>

<details>
<summary>Quick train using resnet50 on a small dataset</summary>

```shell
python train.py
```

</details>

## 2.Usage

### SDR Playback
Since our data was collected directly with USRP devices, it is fully compatible with USRP and GNU Radio for signal replay. You can use our raw data to broadcast signals through radio equipment to achieve your desired outcomes. For reference, we also provide the replay results observed on an oscilloscope during our experiments.
<div style="text-align:center;">

</div>

### 2.1 How to transform the raw frequency signal data into a spectrogram

#### Python Pipeline
We provide a signal-processing pipeline, implemented in both Python and a MATLAB toolbox, to convert the binary raw frequency signal data into spectrogram form.
You can easily use the provided code to visualize the spectrogram of a specific data pack. Note that the argument `oneside` controls whether to display the half-plane or full-plane spectrogram.

```python
from utils.benchmark import RawDataProcessor

datapack = 'Your datapack path'
test = RawDataProcessor()
test.ShowSpectrogram(data_path=datapack,
                     drone_name='DJ FPV COMBO',
                     sample_rate=100e6,
                     stft_point=2048,
                     duration_time=0.1,
                     oneside=False,
                     Middle_Frequency=2400e6)
```

You can use the following code to automatically convert the raw frequency signal data into spectrograms and save them as .png images.

```python
from utils.benchmark import RawDataProcessor

data_path = 'Your datapack path'
save_path = 'Your save path'
sample_rate = 100e6
test = RawDataProcessor()
test.DrawandSave(fig_save_path=save_path, file_path=data_path, fs=sample_rate,
                 stft_point=1024, duration_time=0.1)
```

You can use the `graphic.RawdataProcessor.save_as_video()` function to save the spectrogram as a video. The video format allows for a better observation of...

```python
from graphic.RawdataProcessor import save_as_video

datapack = 'Your datapack path'
save_path = 'Your save path'
save_as_video(datapack=datapack,
              save_path=save_path,
              fs=100e6,
              stft_point=1024,
              duration_time=0.1,
              fps=5)
```

You can use the `graphic.RawdataProcessor.waterfall_spectrogram()` function to convert the raw data into a spectrogram waterfall video, as shown above. The waterfall video visually displays how the signal evolves over time in the original data.

```python
from graphic.RawdataProcessor import waterfall_spectrogram

datapack = 'Your datapack path'
save_path = 'Your save path'
waterfall_spectrogram(datapack=datapack,
                      save_path=save_path,
                      fs=100e6,
                      stft_point=1024,
                      duration_time=0.1,
                      fps=5)
```

#### MATLAB Pipeline

You can simply use the `check.m` program to check the spectrogram of a specific data pack.
```MATLAB
data_path = 'Your datapack path';
nfft = 512;
fs = 100e6;
duration_time = 0.1;
datatype = 'float32';
check(data_path, nfft, fs, duration_time, datatype);
```
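
For reference, the underlying conversion can be sketched in plain NumPy. This is only a minimal illustration of the STFT-based approach used above, not the pipeline's actual implementation; the synthetic tone and the Hann window choice are assumptions for the demo.

```python
import numpy as np

def iq_to_spectrogram(iq, stft_point=1024):
    """Compute a simple STFT magnitude spectrogram (in dB) from complex IQ samples.

    Returns an array of shape (stft_point, n_frames): frequency bins x time frames,
    with DC shifted to the center (full-plane view).
    """
    n_frames = len(iq) // stft_point
    frames = iq[:n_frames * stft_point].reshape(n_frames, stft_point)
    window = np.hanning(stft_point)  # reduce spectral leakage
    spec = np.fft.fftshift(np.fft.fft(frames * window, axis=1), axes=1)
    return 20 * np.log10(np.abs(spec) + 1e-12).T

# Synthetic example: 1 ms of a 1 MHz tone sampled at 100 MHz
fs = 100e6
t = np.arange(int(fs * 1e-3)) / fs
iq = np.exp(2j * np.pi * 1e6 * t)
spec = iq_to_spectrogram(iq, stft_point=1024)
```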

### 2.2 How to estimate the SNR

We provide an SNR estimation method in the MATLAB toolbox to help you better analyze the binary raw frequency signal data. We also offer an SNR adjustment method that adds noise to the raw data to set it to different SNR levels.

```MATLAB
[idx1, idx2, idx3, idx4, f1, f2] = positionFind(dataIQ, fs, bw, NFFT);
snr_esti = snrEsti(dataIQ, fs, NFFT, f1, f2, idx1, idx2, idx3, idx4);
```

We provide an `awgn1` function to adjust the noise level of the raw signal data based on the SNR estimation results above. The signal-to-noise ratio can be adjusted between -20 dB and 20 dB, with a default step size of 2 dB. You can also define a custom scale if needed.
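
As a rough illustration of the idea behind SNR adjustment (not the actual `awgn1` or `snrEsti` implementations), the following NumPy sketch scales complex white Gaussian noise to hit a target SNR and then verifies it against the known clean signal; treating the input as noise-free is a simplifying assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def adjust_snr(iq, target_snr_db):
    """Add complex white Gaussian noise so the output sits near the target SNR,
    treating the input as clean signal (a simplifying assumption)."""
    sig_power = np.mean(np.abs(iq) ** 2)
    noise_power = sig_power / 10 ** (target_snr_db / 10)
    noise = np.sqrt(noise_power / 2) * (rng.standard_normal(len(iq))
                                        + 1j * rng.standard_normal(len(iq)))
    return iq + noise

def estimate_snr_db(noisy, clean):
    """Estimate SNR by comparing the noisy signal against the known clean signal."""
    noise = noisy - clean
    return 10 * np.log10(np.mean(np.abs(clean) ** 2) / np.mean(np.abs(noise) ** 2))

# Sweep the -20 dB to 20 dB range in 2 dB steps, as the toolbox does
t = np.arange(100000) / 100e6
clean = np.exp(2j * np.pi * 1e6 * t)
estimates = {snr: estimate_snr_db(adjust_snr(clean, snr), clean)
             for snr in range(-20, 21, 2)}
```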

### 2.3 How to train a custom drone classification model and use it

We provide custom training code for drone identification tasks based on the PyTorch framework. Currently supported models include [ViT](https://arxiv.org/abs/2010.11929), [ResNet](https://arxiv.org/abs/1512.03385), [MobileNet](https://arxiv.org/abs/1704.04861), and [Swin Transformer](https://arxiv.org/abs/2103.14030). You can also customize your own model using the code in `utils.model.base`.

To customize the training, you can create or modify a configuration file with the `.yaml` extension and specify its path in the training code. Additionally, you can adjust the arguments in `utils.trainer.CustomTrainer()` to achieve the desired training setup.

```python
from utils.trainer import CustomTrainer

CustomTrainer(model='resnet50',
              train_data_path='Your train data path',
              val_data_path='Your val data path',
              num_classes=23,
              save_path='Your save path',
              weights_path='Your weights path',
              device='cuda:0',
              batch_size=32,
              shuffle=True,
              image_size=224,
              lr=0.0001).train()
```
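
The exact schema of the `.yaml` file is defined by the repository; purely as a hypothetical sketch of what such a configuration might contain (field names mirror the `CustomTrainer` arguments above and are illustrative, not the actual schema):

```yaml
# Hypothetical training configuration sketch; check the repo for the real schema
model: resnet50
num_classes: 23
train_data_path: /path/to/train
val_data_path: /path/to/valid
save_path: /path/to/checkpoints
device: cuda:0
batch_size: 32
image_size: 224
lr: 0.0001
```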

We provide a pipeline for inference, allowing you to run inference on either spectrogram data or binary raw frequency data using the following code. When inferring on binary raw frequency data, the results are automatically packaged into a video with the identification results displayed on the spectrogram. Note that if you want to infer on a binary raw frequency data pack, you must specify a model weight trained on the spectrogram dataset produced by the Python pipeline.

```python
from utils.trainer import Classify_Model

test = Classify_Model(cfg='Your configuration file path',
                      weight_path='Your weights path')

test.inference(source='Your target data path',
               save_path='Your target save path')
```

### 2.4 How to train a custom drone detection model and use it

We provide a custom training method for drone detection tasks. Currently supported models include [YOLOv5](https://github.com/ultralytics/yolov5), [Faster R-CNN](https://arxiv.org/abs/1506.01497) (coming soon), and [DETR](https://arxiv.org/abs/2005.12872) (coming soon).
You can train the YOLOv5 model for drone detection using the following code.

```python
from utils.trainer import DetTrainer

model = DetTrainer(model_name='yolo')
model.train(save_dir='Your target save path')
```

We provide an inference pipeline that allows you to run your model on either spectrogram data or binary raw frequency data using the following code. When inferring on binary raw frequency data, the results are automatically packaged into a video with the detection results displayed on the spectrogram. Note that if you want to infer on a binary raw frequency data pack, you must specify a model weight trained on the spectrogram dataset produced by the Python pipeline.

```python
from utils.trainer import Detection_Model

test = Detection_Model(model_name='yolo', weight_path='Your weights path')
test.yolov5_detect(source='Your target data path',
                   save_dir='Your target save path')
```

### 2.5 Classify the drone based on detection result

We provide a simple method to classify drones based on detection results. This involves a two-stage model: the first model detects the drone, and the second model classifies the detected drone signal. You can use the following code to process a raw data pack directly, and the results will be saved as a video.

```python
from utils.trainer import TwoStagesDetector

cfg_path = '../example/two_stage/sample.json'
TwoStagesDetector(cfg=cfg_path)
```

Note that you should specify the configuration file in `.json` format. In the configuration file, you can customize the models used in both the detection and classification stages to achieve better performance.
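
The schema of `sample.json` is defined by the repository; as a purely hypothetical sketch, a two-stage configuration might pair a detection model with a classification model like this (all keys and paths are illustrative only):

```json
{
  "detection": {
    "model_name": "yolo",
    "weight_path": "/path/to/detection_weights.pt"
  },
  "classification": {
    "model": "resnet50",
    "weight_path": "/path/to/classification_weights.pth"
  },
  "source": "/path/to/raw_datapack",
  "save_path": "/path/to/output"
}
```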

### 2.6 Evaluate your model on the benchmark

You can evaluate your model on the benchmark using metrics such as mAP, Top-K Accuracy, F1 score, and the confusion matrix. The evaluation is performed separately on each dataset from -20 dB to 20 dB, and the final model performance is reported across the different signal-to-noise ratios.

```python
from utils.trainer import Classify_Model

test = Classify_Model(cfg='Your configuration file path',
                      weight_path='Your weights path')

test.benchmark()
```

### 2.7 Some useful tools to help you process the dataset

You can directly access our raw data for processing as needed. We provide a MATLAB tool (`tools.rawdata_crop.m`) for segmenting the raw data. You can split any segment of raw data into 2-second chunks. The segmented data packets are smaller and easier to process.
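
If you prefer Python over the MATLAB tool, the same 2-second segmentation can be sketched as follows. This is a minimal stand-in, not `rawdata_crop.m` itself; the complex64 sample format and output naming are assumptions.

```python
import numpy as np
from pathlib import Path

def crop_raw_data(src, dst_dir, fs=100e6, seconds=2, dtype=np.complex64):
    """Split a raw IQ recording into fixed-length chunks (default 2 s each)."""
    dst_dir = Path(dst_dir)
    dst_dir.mkdir(parents=True, exist_ok=True)
    chunk_samples = int(fs * seconds)
    data = np.fromfile(src, dtype=dtype)
    paths = []
    for i in range(0, len(data), chunk_samples):
        out = dst_dir / f"{Path(src).stem}_part{i // chunk_samples:03d}.iq"
        data[i:i + chunk_samples].tofile(out)  # last chunk may be shorter
        paths.append(out)
    return paths
```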

The benchmark includes drone images under various SNRs, while the training set only contains drone image data at its original SNR. Using this training set directly may result in poor model performance on the benchmark. To address this, we provide a data augmentation tool (`utils.preprocessor.data_augmentation`) to enhance the model's accuracy.

```python
from utils.preprocessor import data_augmentation

data_path = "Your dataset path"
output_path = "Your output path"
method = ['Aug_method1', 'Aug_method2', ...]

data_augmentation(dataset_path=data_path,
                  output_path=output_path,
                  methods=method)
```

## 3.Notice
### 3.1 Raw data parameter description
The publicly available dataset is currently only a subset, which includes raw data clips from 37 drones and the image data used for our experiments.
The parameters of the USRP configured during data acquisition for each drone type are documented in a corresponding `.xml` file.

In the `.xml` file:
- `DeviceType` represents the acquisition device type,
- `Drone` represents the drone type,
- `SerialNumber` represents the serial number of the drone data pack,
- `DataType` represents the data type of the raw data,
- `ReferenceSNRLevel` represents the signal-to-noise ratio of the drone data pack,
- `CenterFrequency` represents the center frequency of the drone data pack,
- `SampleRate` represents the sampling rate of the drone data pack,
- `IFBandwidth` represents the bandwidth of the drone data pack,
- `ScaleFactor` represents the hardware power amplification scale used when collecting signals, in dB.
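
Using the parameters recorded in the `.xml` file, the raw binary can be read directly in Python. A minimal sketch, assuming `float32` interleaved I/Q samples; verify the layout against the `DataType` field of your data pack before relying on it.

```python
import numpy as np

def load_iq(path, dtype=np.float32):
    """Read interleaved I/Q samples from a raw binary file into a complex array."""
    raw = np.fromfile(path, dtype=dtype)
    return raw[0::2] + 1j * raw[1::2]  # even indices = I, odd indices = Q
```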

### 3.2 Dataset file Structure
If you are using the provided dataloader, your dataset file structure should be organized as follows.

```
Dataset
├── train
│   ├── AVATA
│   │   └── imgs
│   └── MINI4
│       └── imgs
└── valid
    ├── AVATA
    │   └── imgs
    └── MINI4
        └── imgs
```

## 4.Dataset download
The dataset is now publicly available on [Kaggle]() and [Hugging Face](https://huggingface.co/datasets/kitofrank/RFUAV).

## Citation

```bibtex
@misc{shi2025rfuavbenchmarkdatasetunmanned,
      title={RFUAV: A Benchmark Dataset for Unmanned Aerial Vehicle Detection and Identification},
      author={Rui Shi and Xiaodong Yu and Shengming Wang and Yijia Zhang and Lu Xu and Peng Pan and Chunlai Ma},
      year={2025},
      eprint={2503.09033},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2503.09033},
}
```