Commit e09d2f4 · 1 Parent(s): 6d8fd18
admin committed: Update README.md

Files changed (1): README.md (+24 −5)
The Classical and Ethnic Vocal Style Classification model distinguishes between classical and ethnic vocal styles, with all audio samples sung by professional vocalists. It is fine-tuned on an audio dataset of four categories that has been pre-processed into spectrograms. The backbone network, initially pretrained on computer vision (CV) tasks, is then fine-tuned specifically for vocal style classification: the CV pre-training gives the network a foundation of general features, which fine-tuning adapts to the subtle differences between classical and ethnic vocal styles. Because the dataset comprises samples from classical and various ethnic singing traditions, the model can capture the patterns characteristic of each style, and using spectrograms as input lets it analyze both the temporal and frequency components of the audio signals. The resulting model holds significant potential for the music industry and cultural preservation, as it accurately categorizes vocal performances into these two broad categories, and its foundation in CV pre-training demonstrates the versatility and adaptability of neural networks across domains.
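The spectrogram preprocessing described above can be sketched as follows. This is a minimal NumPy illustration using a plain STFT magnitude spectrogram; the repo's result folder name (`squeezenet1_1_cqt` in the image URLs below) suggests the actual pipeline uses a CQT transform, and the window, hop, and FFT sizes here are illustrative assumptions, not the model's real parameters:

```python
import numpy as np

def magnitude_spectrogram(signal, n_fft=512, hop=256):
    """Frame the signal, apply a Hann window, and take the FFT magnitude.

    Returns an array of shape (n_fft // 2 + 1, n_frames): a
    frequency-by-time image like the ones fed to the CV backbone.
    """
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack(
        [signal[i * hop : i * hop + n_fft] * window for i in range(n_frames)]
    )
    return np.abs(np.fft.rfft(frames, axis=1)).T

# A 1-second 440 Hz tone at 16 kHz as a stand-in for a vocal sample.
sr = 16000
t = np.arange(sr) / sr
spec = magnitude_spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (257, 61)
```

The resulting 2-D array is what gets rendered as an image, so a network pretrained on visual data can be fine-tuned on it directly.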

## Demo
<https://huggingface.co/spaces/ccmusic-database/bel-canto>

## Usage
```python
from modelscope import snapshot_download

# Download the model checkpoints to a local directory
model_dir = snapshot_download("ccmusic-database/bel_canto")
```
 
A demo result of SqueezeNet fine-tuning:
<table id="pianos">
<tr>
<th>Loss curve</th>
<td><img src="https://www.modelscope.cn/api/v1/models/ccmusic-database/bel_canto/repo?Revision=master&FilePath=.%2Fsqueezenet1_1_cqt%2Floss.jpg&View=true"></td>
</tr>
<tr>
<th>Training and validation accuracy</th>
<td><img src="https://www.modelscope.cn/api/v1/models/ccmusic-database/bel_canto/repo?Revision=master&FilePath=.%2Fsqueezenet1_1_cqt%2Facc.jpg&View=true"></td>
</tr>
<tr>
<th>Confusion matrix</th>
<td><img src="https://www.modelscope.cn/api/v1/models/ccmusic-database/bel_canto/repo?Revision=master&FilePath=.%2Fsqueezenet1_1_cqt%2Fmat.jpg&View=true"></td>
</tr>
</table>
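As a side note on how a confusion matrix like the one above is assembled, here is a minimal two-class sketch; the label mapping and counts are purely illustrative and are not taken from the model's actual results:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=2):
    """Count (true label, predicted label) pairs:
    rows are ground truth, columns are predictions."""
    mat = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        mat[t, p] += 1
    return mat

# 0 = classical, 1 = ethnic (hypothetical label mapping)
y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]
cm = confusion_matrix(y_true, y_pred)
print(cm)                      # [[2 1]
                               #  [1 2]]
print(cm.trace() / cm.sum())   # overall accuracy from the diagonal
```

The diagonal holds correct predictions, so a strong diagonal in the plotted matrix indicates the model separates the two styles well.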

## Dataset
<https://huggingface.co/datasets/ccmusic-database/bel_canto>

## Mirror
<https://www.modelscope.cn/models/ccmusic-database/bel_canto>
## Evaluation
<https://github.com/monetjoe/ccmusic_eval>

## Cite
```bibtex
@dataset{zhaorui_liu_2021_5676893,
  author    = {Monan Zhou and Shenyang Xu and Zhaorui Liu and Zhaowen Wang and Feng Yu and Wei Li and Baoqiang Han},
  title     = {CCMusic: an Open and Diverse Database for Chinese and General Music Information Retrieval Research},
  month     = {mar},
  year      = {2024},
  publisher = {HuggingFace},
  version   = {1.2},
  url       = {https://huggingface.co/ccmusic-database}
}
```