Modalities: Audio, Image
Formats: arrow
Libraries: Datasets
admin committed on
Commit c967a45 · 1 Parent(s): c647a6c
Files changed (2):
  1. .gitignore +0 -1
  2. README.md +9 -26
.gitignore CHANGED
@@ -1,3 +1,2 @@
-rename.sh
 test.*
 *__pycache__*
README.md CHANGED
@@ -122,7 +122,7 @@ This dataset is created and used by [[1]](https://arxiv.org/pdf/2303.13272) for
 ## Integration
 In the original dataset, the labels were stored in a separate CSV file. This posed usability challenges, as researchers had to perform time-consuming operations on CSV parsing and label-audio alignment. After our integration, the data structure has been streamlined and optimized. It now contains three columns: audio sampled at 44,100 Hz, pre-processed mel spectrograms, and a dictionary. This dictionary contains onset, offset, technique numeric labels, and pitch. The number of data entries after integration remains 99, with a cumulative duration amounting to 151.08 minutes. The average audio duration is 91.56 seconds.
 
-We performed data processing and constructed the [default subset](#default-subset) of the current integrated version of the dataset; the details of its data structure can be viewed through the [viewer](https://huggingface.co/datasets/ccmusic-database/Guzheng_Tech99/viewer). Since the dataset has been referenced and evaluated in a published article, we transcribe here the data processing used in that article's evaluation: each audio clip is a 3-second segment sampled at 44,100 Hz, which is then converted into a log Constant-Q Transform (CQT) spectrogram. A CQT accompanied by a label constitutes a single data entry, forming the first and second columns, respectively. The CQT is a 3-dimensional array with dimensions of 88x258x1, representing the frequency-time structure of the audio. The label is a 2-dimensional array with dimensions of 7x258, indicating the presence of seven distinct techniques across each time frame. Finally, given that the original dataset has already been divided into train, valid, and test sets, we have integrated the feature extraction method from that article's evaluation into the API, thereby constructing the [eval subset](#eval-subset), which is not covered in our paper.
+We performed data processing and constructed the [default subset](#usage) of the current integrated version of the dataset; the details of its data structure can be viewed through the [viewer](https://huggingface.co/datasets/ccmusic-database/Guzheng_Tech99/viewer). Since the dataset has been referenced and evaluated in a published article, we transcribe here the data processing used in that article's evaluation: each audio clip is a 3-second segment sampled at 44,100 Hz, which is then converted into a log Constant-Q Transform (CQT) spectrogram. A CQT accompanied by a label constitutes a single data entry, forming the first and second columns, respectively. The CQT is a 3-dimensional array with dimensions of 88x258x1, representing the frequency-time structure of the audio. The label is a 2-dimensional array with dimensions of 7x258, indicating the presence of seven distinct techniques across each time frame. Finally, given that the original dataset has already been divided into train, valid, and test sets, we have integrated the feature extraction method from that article's evaluation into the API, thereby constructing the [eval subset](#usage), which is not covered in our paper.
 
 ## Statistics
 In this part, we present statistics at the label level. The number of audio clips is equivalent to the count of either onset or offset occurrences. The duration of an audio clip is determined by calculating the offset time minus the onset time. At this level, the number of clips is 15,838, and the total duration is 162.69 minutes.
@@ -170,34 +170,17 @@ MIR, audio frame-level detection, Guzheng playing technique detection
 Chinese, English
 
 ## Usage
-### Default Subset
 ```python
 from datasets import load_dataset
 
-ds = load_dataset("ccmusic-database/Guzheng_Tech99", name="default")
-for item in ds["train"]:
-    print(item)
-
-for item in ds["validation"]:
-    print(item)
-
-for item in ds["test"]:
-    print(item)
-```
-
-### Eval Subset
-```python
-from datasets import load_dataset
-
-ds = load_dataset("ccmusic-database/Guzheng_Tech99", name="eval")
-for item in ds["train"]:
-    print(item)
-
-for item in ds["validation"]:
-    print(item)
-
-for item in ds["test"]:
-    print(item)
+ds = load_dataset(
+    "ccmusic-database/Guzheng_Tech99",
+    name="default",  # default / eval
+    split="train",  # train / validation / test
+    cache_dir="./__pycache__",
+)
+for i in ds:
+    print(i)
 ```
 
 ## Maintenance
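
The label-level statistics in the diff above (clip count from onset/offset occurrences, duration as offset minus onset) can be reproduced with a short sketch. The field names `onset_time` and `offset_time` are assumptions based on the Integration description, not confirmed column names of the released dataset:

```python
# Sketch: label-level statistics as described in the Statistics section.
# The "onset_time"/"offset_time" field names are assumed, not confirmed.

def label_level_stats(labels):
    """Count clips and sum durations from per-file onset/offset lists (seconds)."""
    n_clips = 0
    total_seconds = 0.0
    for label in labels:
        onsets = label["onset_time"]    # assumed field name
        offsets = label["offset_time"]  # assumed field name
        n_clips += len(onsets)
        total_seconds += sum(off - on for on, off in zip(onsets, offsets))
    return n_clips, total_seconds / 60  # (clip count, total minutes)

# Synthetic example: two files, three clips in total
demo = [
    {"onset_time": [0.0, 2.5], "offset_time": [1.5, 4.0]},
    {"onset_time": [10.0], "offset_time": [13.0]},
]
n, minutes = label_level_stats(demo)
print(n, minutes)  # 3 clips, 0.1 minutes
```

Applied to the real label column, the same loop should recover the reported 15,838 clips and 162.69 minutes.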
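
The eval-subset entries described in the Integration section have fixed shapes (CQT: 88x258x1, label: 7x258), so a quick sanity check on loaded entries is easy to write. This is a minimal sketch using dummy arrays in place of real dataset entries; the per-frame reduction is an illustration, not part of the dataset API:

```python
import numpy as np

# Sketch: shape check for eval-subset entries (CQT 88x258x1, label 7x258),
# exercised here with dummy arrays rather than a real download.

def check_eval_entry(cqt, label):
    """Validate shapes and return the number of frames with any active technique."""
    assert cqt.shape == (88, 258, 1), f"unexpected CQT shape {cqt.shape}"
    assert label.shape == (7, 258), f"unexpected label shape {label.shape}"
    # Collapse the 7 technique rows to a per-frame activity flag
    return int(label.any(axis=0).sum())

cqt = np.zeros((88, 258, 1), dtype=np.float32)
label = np.zeros((7, 258), dtype=np.int8)
label[2, :100] = 1  # pretend technique 2 is active for the first 100 frames
print(check_eval_entry(cqt, label))  # 100
```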