Improve dataset card: Add robotics task category, abstract, and enhance formatting

#1 by nielsr (HF Staff) - opened

Files changed (1): README.md (+12 −4)
README.md CHANGED
@@ -1,11 +1,19 @@
-# LoFi
+---
+task_categories:
+- robotics
+---
 
-The description is generated by Grok3.
+# LoFi: Vision-Aided Label Generator for Wi-Fi Localization and Tracking
+
+Paper: [LoFi: Vision-Aided Label Generator for Wi-Fi Localization and Tracking](https://arxiv.org/abs/2412.05074)
+Code: [https://github.com/RS2002/LoFi](https://github.com/RS2002/LoFi)
+
+## Abstract
+
+Data-driven Wi-Fi localization and tracking have shown great promise due to their lower reliance on specialized hardware compared to model-based methods. However, most existing data collection techniques provide only coarse-grained ground truth or a limited number of labeled points, significantly hindering the advancement of data-driven approaches. While systems like lidar can deliver precise ground truth, their high costs make them inaccessible to many users. To address these challenges, we propose LoFi, a vision-aided label generator for Wi-Fi localization and tracking. LoFi can generate ground-truth position coordinates solely from 2D images, offering high precision, low cost, and ease of use. Using our method, we have compiled a Wi-Fi tracking and localization dataset with the ESP32-S3 and a webcam. The code and dataset are available at [https://github.com/RS2002/LoFi](https://github.com/RS2002/LoFi).
 
 ## Dataset Description
 
-- **Repository:** [RS2002/LoFi: Official Repository for The Paper, LoFi: Vision-Aided Label Generator for Wi-Fi Localization and Tracing](https://github.com/RS2002/LoFi)
-- **Paper:** [LoFi: Vision-Aided Label Generator for Wi-Fi Localization and Tracking](https://arxiv.org/abs/2412.05074), IEEE Globecom GenAI NGN Workshop 2025
 - **Contact:** [zzhaock@connect.ust.hk](mailto:zzhaock@connect.ust.hk)
 - **Collectors:** Zijian Zhao, Tingwei Chen
 - **Organization:** AI-RAN Lab (hosted by Prof. Guangxu Zhu) in SRIBD, CUHK(SZ)