- Cries for help
- Normal indoor sounds
### Feature Extraction Process

1. Audio Collection:
   - Audio samples were sourced from public datasets such as AI Hub to ensure coverage of diverse scenarios.
   - [AI Hub 위급상황 음성/음향](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=170)
   - The samples include both emergency and non-emergency sounds, so the model learns to classify the two accurately.
2. MFCC Extraction:
   - The raw audio signals were processed to extract Mel-Frequency Cepstral Coefficients (MFCC).
   - MFCC features capture the frequency characteristics of audio effectively, making them well suited to sound classification tasks.
3. Output Format:
   - The extracted MFCC features are saved as `13 x n` NumPy arrays, where:
     - 13 is the number of MFCC coefficients (features), and
     - n is the number of frames in the audio segment.
4. Saved Dataset:
   - The processed `13 x n` MFCC arrays are stored as `.npy` files, which serve as the direct input to the model.
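The README does not include the extraction code itself; the sketch below is an illustrative, NumPy-only walk-through of the pipeline described above (framing, mel filterbank, log, DCT), using a synthetic tone in place of a real recording. A production pipeline would more likely use a library such as librosa, whose `librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)` yields the same `13 x n` layout.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr, n_mfcc=13, n_fft=512, hop=256, n_mels=26):
    """Minimal MFCC sketch: returns an (n_mfcc, n_frames) array."""
    # 1. Frame the signal and apply a Hann window
    n_frames = 1 + (len(signal) - n_fft) // hop
    idx = np.arange(n_fft)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hanning(n_fft)
    # 2. Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2   # (n_frames, n_fft//2 + 1)
    # 3. Triangular mel filterbank between 0 Hz and Nyquist
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)
    logmel = np.log(power @ fbank.T + 1e-10)          # (n_frames, n_mels)
    # 4. DCT-II to decorrelate; keep the first n_mfcc coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), (2 * n + 1) / (2 * n_mels)))
    return (logmel @ dct.T).T                          # (n_mfcc, n_frames), i.e. "13 x n"

# Demo: one second of a 440 Hz tone standing in for a real audio sample
sr = 16000
t = np.arange(sr) / sr
y = np.sin(2 * np.pi * 440.0 * t).astype(np.float32)
feats = mfcc(y, sr)
print(feats.shape)                 # (13, n_frames)
np.save("sample_mfcc.npy", feats)  # stored as .npy, the model's direct input
```

Here `n` depends on the frame/hop sizes, which are illustrative choices; the important invariant is that the first axis is always the 13 coefficients.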
### Model Description
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->