Problem with splitting the dataset for deepfake detection training

#2
by mkarapka - opened

Hi,
I would like to download this subset of the AUDETER dataset, but I'm not sure how to split it properly, since all of the audios are generated.
Would it make sense to download the mls section of the AUDETER dataset using load_dataset, take the dev and test parts from the original MLS corpus mentioned in the paper, mix them, manually label them as real/spoof, and split the result into train, dev, and test?
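For what it's worth, the mixing/labeling/splitting part of that workflow could be sketched roughly as below. This is only an illustration: the file names are placeholders, and the actual loading of the AUDETER subset (e.g. via datasets.load_dataset) and of the original MLS dev/test audio is omitted, since the exact repository names and configs would need to be checked first.

```python
# Sketch: label real (MLS) vs. spoof (AUDETER-generated) audio and split
# into train/dev/test. File names below are placeholders, not real paths.
import random

def label_and_split(real_items, spoof_items, seed=42,
                    train_frac=0.8, dev_frac=0.1):
    """Attach real/spoof labels, shuffle reproducibly, and split."""
    items = ([{"audio": x, "label": "real"} for x in real_items] +
             [{"audio": x, "label": "spoof"} for x in spoof_items])
    rng = random.Random(seed)   # fixed seed so the split is reproducible
    rng.shuffle(items)
    n = len(items)
    n_train = int(n * train_frac)
    n_dev = int(n * dev_frac)
    return {
        "train": items[:n_train],
        "dev": items[n_train:n_train + n_dev],
        "test": items[n_train + n_dev:],   # remainder goes to test
    }

# Placeholder file lists standing in for MLS (real) and AUDETER (spoof):
splits = label_and_split(
    [f"mls_{i}.wav" for i in range(80)],
    [f"audeter_{i}.wav" for i in range(80)],
)
print({k: len(v) for k, v in splits.items()})  # → {'train': 128, 'dev': 16, 'test': 16}
```

One caveat with this naive random split: if the same source speaker or utterance appears in both the real and spoof pools, a purely random shuffle can leak it across splits, so grouping by speaker/utterance before splitting may be safer for evaluation.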

It would be much more convenient if a properly pre-split version of the dataset were available, for example with balanced real vs. fake audio and three labeled splits (train, dev, test) ready to download.

mkarapka changed discussion status to closed
