Annotation quality is very low, not usable for training
The title speaks for itself.
I disagree with that characterization. GTSinger has already been widely used in both academia and industry for singing and music model training, so it is simply not accurate to call it "not usable for training." At least for the Chinese and English portions, the lyric annotations, score annotations, and MFA alignments have generally been regarded as reasonably reliable. For other languages, only the Korean and Italian subsets do not have the same level of fine-grained annotation, and we have never tried to hide that limitation.
More importantly, this dataset was not assembled casually. We involved dozens of university students in a careful manual verification and correction process, and a substantial amount of human effort went into improving the annotation quality.
If you believe there are serious annotation problems, please point them out with concrete evidence and specific metrics: which subset, which type of annotation, what error rate, and what reproducible impact on training results. Simply saying that the annotation quality is "very low" and "not usable for training" is not a serious or professional assessment.
To be precise, the MIDI does not align well with the audio. The audio itself is pretty good, but as an amateur singer nobody cares about, I manually listened to around 1% of the audio WAVs alongside their accompanying MIDI, and it's just not right. Drag some of the audio and MIDI into any DAW and you will see what I mean.
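For anyone who wants to check without a DAW, something like this makes the mismatch visible (a minimal sketch, assuming `pretty_midi` and `librosa` are installed; the file paths are placeholders):

```python
# Minimal sketch: compare annotated MIDI notes against F0 extracted from the audio.
# "sample.wav" / "sample.mid" are placeholder paths, not real GTSinger files.
import numpy as np
import pretty_midi
import librosa

wav_path, midi_path = "sample.wav", "sample.mid"  # placeholders

audio, sr = librosa.load(wav_path, sr=None)
f0, voiced, _ = librosa.pyin(
    audio, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)
times = librosa.times_like(f0, sr=sr)

midi = pretty_midi.PrettyMIDI(midi_path)
for note in midi.instruments[0].notes:
    # Median extracted pitch over the annotated note span, in MIDI note numbers.
    mask = (times >= note.start) & (times < note.end) & voiced
    if not mask.any():
        print(f"{note.start:7.2f}s  MIDI {note.pitch}: no voiced F0 in note span")
        continue
    dev = np.median(librosa.hz_to_midi(f0[mask])) - note.pitch
    if abs(dev) > 0.5:  # off by more than a quarter tone on average
        print(f"{note.start:7.2f}s  MIDI {note.pitch}: sung pitch {dev:+.2f} semitones away")
```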
Perhaps I am expecting too much (ROFL).
Thanks for clarifying. I understand the issue better now.
You are right that the MIDI/MusicXML in GTSinger should not be interpreted as a perfectly time-aligned DAW arrangement. For singing voice, this is much harder than instrumental MIDI transcription: singers often do not strictly follow the written score, and expressive singing naturally includes timing deviations, pitch bends, vibrato, slides, and phoneme-level duration changes. So a MIDI track derived from a vocal performance will not always look like a cleanly arranged instrumental score when dragged into a DAW.
Our annotation strategy was therefore not to create frame-perfect MIDI that exactly follows every F0 fluctuation. Instead, we aimed to provide realistic music scores that are useful for SVS and singing-related tasks. In our pipeline, we first extracted F0 with RMVPE, then used ROSVOT to derive the initial MIDI/score representation. After that, music experts manually listened to the recordings and references, checked the generated scores, and adjusted tempo, clef, key, note pitch, note duration, and note type where needed.
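To give a feel for why a note-level score is inherently a lossy summary of the raw contour, here is a deliberately naive toy version of the F0-to-notes step (this is not ROSVOT, which is a learned model; the sketch just rounds each frame to the nearest semitone and merges runs, and it shows how vibrato or a slide across a semitone boundary fragments or shifts the resulting notes):

```python
import numpy as np

def naive_notes_from_f0(f0_hz: np.ndarray, hop_s: float, min_dur: float = 0.05):
    """Toy note segmentation: round each voiced frame to the nearest MIDI
    semitone, then merge consecutive frames with the same pitch into notes.
    A crude stand-in for a real transcriber such as ROSVOT."""
    midi = np.full(len(f0_hz), -1, dtype=int)           # -1 marks unvoiced frames
    voiced = f0_hz > 0
    midi[voiced] = np.round(69 + 12 * np.log2(f0_hz[voiced] / 440.0)).astype(int)

    notes, start = [], 0
    for i in range(1, len(midi) + 1):
        if i == len(midi) or midi[i] != midi[start]:
            dur = (i - start) * hop_s
            if midi[start] >= 0 and dur >= min_dur:      # drop unvoiced/too-short runs
                notes.append((start * hop_s, i * hop_s, midi[start]))
            start = i
    return notes  # list of (onset_s, offset_s, midi_pitch)
```

Real transcribers are far better than this, but the same structural gap between a continuous expressive contour and discrete notes remains.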
At the time, ROSVOT was a strong state-of-the-art system for singing voice transcription. In the ROSVOT paper, it achieved a COnPOff F1 of 77.4/77.0 on clean/noisy inputs, versus 65.8/62.1 for the previous method, and also reached a Pitch AOR of around 97.0 and a Melody RPA of around 87.6. We used it because pure-vocal MIDI annotation at this scale is genuinely difficult, and this was the best practical pipeline available to us.
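For context, both metric families can be reproduced with `mir_eval`: a COnPOff-style note F1 (onset, pitch, and offset all within tolerance) and frame-level RPA. Below is a minimal sketch with placeholder arrays; in practice you would load the reference notes from our MusicXML/MIDI annotations and the estimate from a transcriber:

```python
import numpy as np
import mir_eval

# Placeholder note lists: intervals are (onset_s, offset_s), pitches in Hz.
ref_intervals = np.array([[0.00, 0.50], [0.50, 1.00]])
ref_pitches = np.array([261.63, 293.66])                 # C4, D4
est_intervals = np.array([[0.02, 0.48], [0.53, 1.01]])
est_pitches = np.array([261.63, 293.66])

# COnPOff-style score: a note counts as correct only if onset, pitch,
# and offset all match within tolerance.
p, r, f1, _ = mir_eval.transcription.precision_recall_f1_overlap(
    ref_intervals, ref_pitches, est_intervals, est_pitches,
    onset_tolerance=0.05, pitch_tolerance=50.0, offset_ratio=0.2)
print(f"COnPOff-style F1: {f1:.3f}")

# Raw Pitch Accuracy on frame-level F0 (times in seconds, frequencies in Hz).
ref_time = est_time = np.arange(0, 1.0, 0.01)
ref_freq = np.where(ref_time < 0.5, 261.63, 293.66)
est_freq = ref_freq * 1.005                              # ~8.6 cents sharp everywhere
ref_v, ref_c, est_v, est_c = mir_eval.melody.to_cent_voicing(
    ref_time, ref_freq, est_time, est_freq)
rpa = mir_eval.melody.raw_pitch_accuracy(ref_v, ref_c, est_v, est_c)
print(f"RPA: {rpa:.3f}")
```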
That said, I am sorry that the released annotations did not meet your expectations in some cases. We did put a lot of effort into the annotation and manual checking, but for a dataset of this scale, spanning multiple languages, singers, and techniques, it is still possible that some files have imperfect note timing or score alignment.
If you can provide a few concrete examples, such as file paths, timestamps, and what kind of mismatch you observe, we would be happy to check them carefully and consider fixing or documenting those cases.