Commit a51d29e ("upd md") by admin, parent adca2dd: README.md changed
```python
for item in ds:
    print(item)
```

## Analysis

### Statistical values

| Feature | Min | Max | Range | Median | Mean |

| arousal - volume | +0.3800 | positive | 3.558e-45 | p<0.05 significant |

### Feature distribution

<style>
  td {
    vertical-align: middle !important;
  }
</style>

| Feature | Distribution chart |
| :-------: | :------------------------------------------------------------------------------------------: |
| key |  |
| mode |  |
| direction |  |

## Processed EMOPIA & VGMIDI

The processed EMOPIA and processed VGMIDI datasets are used to evaluate the error-free rate of music scores generated by fine-tuning the backbone on existing emotion-labeled datasets. It is therefore essential that the processed data be compatible with the input format required by the pre-trained backbone.

We found that the average number of measures in the dataset used to pre-train the backbone is approximately 20, and that the pre-trained backbone accepts at most 32 measures of input. Consequently, we converted the original EMOPIA and VGMIDI data into XML scores, filtered out erroneous items, and segmented the scores into chunks of 20 measures each. An ending marker was appended to each chunk to prevent the model from generating endlessly when a repetitive melody never presents a terminating mark. For the ending segment of each score, if it exceeded 10 measures it was split off as a further segment; otherwise it was merged with the previous segment. This ensures that no resulting slice exceeds 30 measures, keeping every slice within the backbone's maximum measure limit, with an average of approximately 20 measures.

Note that when converting MIDI to XML with current tools, repeat sections cannot be folded back. In fact, after converting the backbone's pre-training dataset to MIDI and expanding all repeat sections, the average number of measures was approximately 35. However, because of the maximum measure limit enforced during pre-training, repeat markers were not expanded at that stage, and since a repeat marker occupies only two characters, we could not use 35 measures as the slicing unit even for MIDI data.

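As a toy illustration of why expanding repeats inflates measure counts (roughly 20 to ~35 on average in the pre-training data), a naive expander for simple `|: ... :|` repeat sections might look like this. It is a sketch only, ignoring voltas, nested repeats, and barline bookkeeping:

```python
import re

def expand_repeats(abc_line):
    # Duplicate the body of each |: ... :| section; real ABC repeat
    # handling (voltas, nested repeats) is considerably more involved.
    return re.sub(r"\|:(.*?):\|", lambda m: m.group(1) * 2, abc_line)
```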
Subsequently, we converted the segmented XML slices into ABC notation, performed data augmentation by transposing each piece into 15 keys, and extracted the melodic lines and control codes to produce the final processed EMOPIA and processed VGMIDI datasets. Both datasets share the same three-column structure: the first column holds the control code, the second the ABC characters, and the third the 4Q emotion label inherited from the original dataset. The totals are 21,480 samples for processed EMOPIA and 9,315 for processed VGMIDI, each split into training and test sets at a 10:1 ratio. Since there is almost no correlation between emotion and key, transposing to 15 keys is unlikely to significantly affect the label distribution.

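The train/test counts reported in the Statistics section are consistent with taking the test set as ⌈total/10⌉. That rounding rule is an inference on our part; the text only states a 10:1 ratio:

```python
import math

def split_counts(total):
    # test split = ceil(total / 10); the remainder is training data
    test = math.ceil(total / 10)
    return total - test, test
```

For instance, `split_counts(9315)` reproduces the reported (8383, 932) for processed VGMIDI.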
## Data source of Rough4Q

The Rough4Q dataset is a large-scale dataset created by automatically annotating a substantial amount of well-structured sheet music according to the conclusions of the correlation statistics. Its data sources include scores in the XML family of formats (XML / MXL / MusicXML) as well as in ABC notation. Note that not all source datasets include chord markings; since this work focuses solely on melody generation, the absence of chord information is not a significant concern. After filtering out erroneous or duplicated scores and consolidating the remainder into a unified XML format, we used music21 to rapidly extract features. Given the high volume of data, we chose a few representative and computationally manageable features for approximate emotional annotation.

| Dataset | Size | Chord | Year | Paper |
| :--------------------------------------------------------------------------------------------------------------------------: | -----: | :---: | :---: | :------------------------------------------------------------------------------------------------------------- |
| [Midi-Wav Bi-directional Pop](https://ccmusic-database.github.io/database/cpop.html) | 111 | × | 2021 | [Music Data Sharing Platform for Academic Research (CCMusic)](https://zenodo.org/records/5654924) |
| [JSBach Chorales](https://dspace.mit.edu/bitstream/handle/1721.1/84963/Cuthbert_Ariza_ISMIR_2010.pdf?sequence=1&isAllowed=y) | 366 | √ | 2010 | [Chord-Conditioned Melody Harmonization With Controllable Harmonicity](https://arxiv.org/pdf/2202.08423) |
| [Nottingham](https://ifdo.ca/~seymour/nottingham/nottingham.html) | 1015 | √ | 2011 | Nottingham Database |
| [Wikifonia](http://www.synthzone.com/files/Wikifonia/) | 6394 | √ | 2018 | [Enhanced Wikifonia Leadsheet Dataset](https://zenodo.org/records/1476555) |
| [Essen](https://ifdo.ca/~seymour/runabc/esac/esacdatabase.html) | 10369 | × | 2013 | Essen Folk Song Database |
| [IrishMAN](https://huggingface.co/datasets/sander-wood/irishman) | 216281 | × | 2023 | [TunesFormer: Forming Irish Tunes with Control Codes by Bar Patching](https://ceur-ws.org/Vol-3528/paper1.pdf) |

According to the correlation statistics, valence is significantly positively correlated only with mode. Mode was therefore selected as the feature for the valence dimension, with minor mode classified as low valence and major mode as high valence. Arousal is significantly positively correlated with pitch range, pitch SD, and RMS. Since computing RMS requires audio rendering, which is impractical for large-scale automatic annotation, it was excluded. Between pitch range and pitch SD, arousal correlates more strongly with pitch SD; moreover, pitch SD not only partially reflects pitch range but also indicates the intensity of musical variation, carrying richer information. We therefore tentatively selected pitch SD as the benchmark for the arousal dimension, classifying scores below the median as low arousal and those above as high arousal. This yields a rough Russell 4Q label based on the V/A quadrant.

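The labeling rule above reduces to two booleans. A sketch, assuming the usual Russell/EMOPIA quadrant numbering (Q1 = high valence/high arousal, Q2 = low/high, Q3 = low/low, Q4 = high/low), which the text does not state explicitly:

```python
def rough4q(mode, pitch_sd, median_sd):
    # valence from mode; arousal from pitch SD vs. the corpus median
    high_valence = (mode == "major")
    high_arousal = pitch_sd > median_sd
    if high_valence:
        return "Q1" if high_arousal else "Q4"
    return "Q2" if high_arousal else "Q3"
```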
This rough, noisy labeling primarily serves to record the state of mode and pitch SD as emotion-related embeddings, keeping the format consistent with the two processed datasets, EMOPIA and VGMIDI. We then applied the same data processing as described for those two datasets, preserving labels while segmenting the scores. Notably, IrishMAN was also the dataset used for backbone pre-training, but it discards scores longer than 32 measures, losing a significant amount of data; our segmentation approach preserves these longer scores.

After processing, we found the data to be highly imbalanced, with the quantities of Q3 and Q4 labels differing by an order of magnitude from the other categories. To address this imbalance, we performed data augmentation by transposing only the Q3 and Q4 categories across 15 different keys. These steps yield the final Rough4Q dataset, which comprises approximately 521K samples in total, split into training and test sets at a 10:1 ratio.

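The class-conditional augmentation can be sketched as follows; representing the 15 target keys as semitone offsets from −7 to +7 (and pitches as MIDI numbers) is our own simplification for illustration:

```python
def augment_minority(samples, minority=("Q3", "Q4"), offsets=range(-7, 8)):
    # Transpose only minority-class items into 15 keys; majority classes
    # pass through unchanged. Each sample is (MIDI pitch list, 4Q label).
    out = []
    for pitches, label in samples:
        if label in minority:
            for off in offsets:
                out.append(([p + off for p in pitches], label))
        else:
            out.append((pitches, label))
    return out
```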
## Statistics

| Dataset | Pie chart | Total | Train | Test |
| :------: | :-----------------------------------------------------------------------------------------: | -----: | -----: | ----: |
| Analysis |  | 1278 | 1278 | - |
| VGMIDI |  | 9315 | 8383 | 932 |
| EMOPIA |  | 21480 | 19332 | 2148 |
| Rough4Q |  | 520673 | 468605 | 52068 |

## Mirror

The data processor is also included in <https://www.modelscope.cn/datasets/monetjoe/EMusicGen>