yonghyunk1m committed 15617aa (verified · Parent: 5211d5f): Update README.md

Files changed (1): README.md (+97 -3)

README.md (updated):

---
license: cc-by-4.0
language:
- en
tags:
- audio
- video
- multimodal
- event-detection
- urban-sound
- pedestrian-detection
---

# ASPED: Audio-Based Pedestrian Detection Dataset Card

## Dataset Summary

The Audio Sensing for PEdestrian Detection (ASPED) v.b dataset is a comprehensive, 1,321-hour roadside collection of audio and video recordings designed for the task of pedestrian detection in the presence of vehicular noise. As urban sound sensing emerges as a cost-effective and privacy-preserving alternative to vision-based or GPS-based monitoring, this dataset addresses the key challenge of detecting pedestrians in realistic, noisy urban environments.

The dataset was collected from multiple camera and recorder setups at a single location ("Fifth Street") on the Georgia Institute of Technology campus and contains recordings from four sessions captured at different times. Each recording includes 16 kHz mono audio synchronized with frame-level pedestrian annotations and 1 fps video thumbnails.
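
Because the annotations are provided at 1 fps while the audio is sampled at 16 kHz, each label row corresponds to exactly one second of audio, i.e., 16,000 samples. A minimal alignment sketch in Python (the file names are placeholders, and the `soundfile` and `pandas` packages are assumed to be installed):

```python
import pandas as pd
import soundfile as sf

SAMPLE_RATE = 16_000  # ASPED audio is 16 kHz mono

# Placeholder file names; substitute real paths from the dataset.
audio, sr = sf.read("0001.flac")
labels = pd.read_csv("0001.csv")
assert sr == SAMPLE_RATE

# Labels are at 1 fps, so each row covers SAMPLE_RATE audio samples.
for _, row in labels.iterrows():
    start = int(row["frame"]) * SAMPLE_RATE
    clip = audio[start : start + SAMPLE_RATE]
    # `clip` is the one-second audio segment for this annotation row.
```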

This dataset is released alongside ASPED v.a, which was captured in a vehicle-free environment, to facilitate cross-dataset evaluation and research into model generalization for acoustic event detection. The official Hugging Face repository for the dataset can be found at: [https://huggingface.co/datasets/urbanaudiosensing/ASPEDvb](https://huggingface.co/datasets/urbanaudiosensing/ASPEDvb).

### Supported Tasks and Leaderboards

The dataset is primarily intended for **audio-based pedestrian detection**. It can also be used for related tasks, such as:

* Sound Event Detection in Noisy Environments
* Domain Adaptation for Acoustic Models
* Urban Soundscape Analysis

## Dataset Structure

The dataset is organized by session, then by the specific physical setup location along Fifth Street (e.g., FifthSt_A, FifthSt_B). Each setup location contains its own synchronized Audio, Labels, and Video data and may include audio from one or two recorders.

```text
ASPEDvb/
└── Session_07262023/
    └── FifthSt_A/
        ├── Audio/
        │   └── recorder1_DR-05X-01/
        │       ├── 0001.flac
        │       └── ...
        ├── Labels/
        │   ├── 0001.csv
        │   └── ...
        └── Video/
            ├── 0001.mp4
            └── ...
```
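
For reference, a short sketch that walks this layout and pairs each recorder's audio with the setup's label files (the root path is a placeholder, and the sketch assumes every session follows the structure above):

```python
from pathlib import Path

root = Path("ASPEDvb")  # placeholder; point this at the dataset root
for session in sorted(p for p in root.iterdir() if p.is_dir()):
    for setup in sorted(p for p in session.iterdir() if p.is_dir()):
        label_files = sorted((setup / "Labels").glob("*.csv"))
        # A setup may hold one or two recorder folders under Audio/.
        for recorder in sorted((setup / "Audio").iterdir()):
            audio_files = sorted(recorder.glob("*.flac"))
            print(f"{session.name}/{setup.name}/{recorder.name}: "
                  f"{len(audio_files)} audio files, {len(label_files)} label files")
```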

### Data Fields

The label files (`.csv`) provide detailed, frame-level annotations for the presence of pedestrians.

* `timestamp`: The exact date and time of the frame.
* `frame`: The sequential frame number.
* `recorder[N]_[X]m`: A set of binary flags (0/1) indicating whether at least one pedestrian is detected within a specific radius (e.g., 1 m, 3 m, 6 m, 9 m) of a given recorder.
* `view_recorder[N]_[X]m`: A set of binary flags (0/1) indicating whether a pedestrian is visible within a specific radius of a given recorder.
* `busFrame`: A binary flag indicating that the frame was visually obstructed by a bus. These frames were discarded during the modeling phase of the original study due to unreliable visual labels.

### Data Instances

A sample row from a label file:

```csv
timestamp,frame,recorder1_1m,recorder1_3m,recorder1_6m,recorder1_9m,view_recorder1_1m,view_recorder1_3m,view_recorder1_6m,view_recorder1_9m,busFrame
2023-07-26 16:20:00,0,0,0,0,0,0,1,1,1,0
```
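
As a concrete illustration, these rows can be turned into per-second training targets. A sketch assuming the columns shown above (the file name is a placeholder, and the 6 m radius is an arbitrary illustrative choice, not a recommendation from the authors):

```python
import pandas as pd

labels = pd.read_csv("0001.csv", parse_dates=["timestamp"])

# Drop bus-obstructed frames, mirroring the original study's handling
# of unreliable visual labels.
labels = labels[labels["busFrame"] == 0]

# Example binary target: at least one pedestrian within 6 m of recorder 1.
target = labels["recorder1_6m"].astype(int)
print(target.value_counts())
```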

## Dataset Creation

**Curation Rationale:** Pedestrian volume data provides critical insights for urban planning, safety improvements, and accessibility assessments. While vision-based systems are common, they suffer from limitations like visual occlusions and raise significant privacy concerns. Audio-based sensing offers a promising alternative, as microphones are affordable, energy-efficient, and less intrusive. This dataset was created to spur research in this area, specifically by providing data that captures the challenge of detecting pedestrian-related sounds in environments with significant *vehicular noise*.

**Data Source and Collection:** The data was collected by researchers at the Center for Urban Resilience and Analytics (CURA) and the Music Informatics Group at the Georgia Institute of Technology. The ASPED v.b dataset was recorded near a road with vehicular traffic on the Georgia Tech campus in Atlanta. Audio was recorded at 16 kHz and synchronized with video from six GoPro cameras recording at 1 fps. While data was collected simultaneously across devices, any recordings from devices that experienced technical issues were excluded from the final dataset to ensure quality.

### Citation Information

If you use this dataset in your research, please cite the following paper:

```bibtex
@inproceedings{kim2025audio,
  author    = "Kim, Yonghyun and Han, Chaeyeon and Sarode, Akash and Posner, Noah and Guhathakurta, Subhrajit and Lerch, Alexander",
  title     = "Audio-Based Pedestrian Detection in the Presence of Vehicular Noise",
  booktitle = "Proceedings of the Detection and Classification of Acoustic Scenes and Events 2025 Workshop (DCASE2025)",
  address   = "Barcelona, Spain",
  month     = "October",
  year      = "2025"
}
```

### Licensing Information

This dataset is released under the **Creative Commons Attribution 4.0 (CC BY 4.0)** license.