
RAPNIC Dataset (example)

Dataset Description

This is a sample of the full dataset, which is yet to be published, containing 10 audio examples for each of 72 speakers.

RAPNIC (Reconeixement Automàtic de la Parla No Intel·ligible en Català) is a Catalan speech corpus collected from individuals with speech disorders, specifically cerebral palsy and Down syndrome.

This dataset was collected to develop and improve automatic speech recognition (ASR) systems that are accessible to people with speech disorders who speak Catalan.

Dataset Statistics

  • Speakers: 100
  • Recordings: 1000
  • Total Duration: 1.33 hours
  • Sampling Rate: 16 kHz
  • Audio Format: WAV
  • Language: Catalan (multiple dialects)

Disorder Distribution

  • Down syndrome (Síndrome de Down): 510 recordings
  • Cerebral palsy (Paràlisi cerebral): 340 recordings
  • No response (Sense resposta): 100 recordings
  • Other speech disorders (Altres trastorns de la parla): 40 recordings
  • NA: 10 recordings

Gender Distribution

  • Female (Dona): 550 recordings
  • Male (Home): 420 recordings
  • No response (Sense resposta): 30 recordings

Dialect Distribution

  • Central (Barcelona, Tarragona): 650 recordings
  • Girona: 150 recordings
  • Nord-Occidental (Lleida, Tortosa): 180 recordings
  • NA: 20 recordings

Data Fields

  • audio: Audio file (WAV format, 16 kHz)
  • audio_id: Identifier for each audio recording
  • speaker_id: Unique identifier for each speaker (anonymized)
  • filename: Original filename of the recording
  • task_id: Task/prompt identifier
  • transcription: Text that was read/spoken
  • clean_transcription: Text that was read/spoken, lowercased and with punctuation removed (question marks are kept)
  • original_duration: Duration in seconds before preprocessing
  • trimmed_duration: Duration in seconds after preprocessing (2 s trimmed from the end)
  • category: Recording category (clean, duplicate, over_threshold)
  • reason: Additional category information
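For illustration, a single record might look like the following Python dict. All values are invented; only the field names come from the list above:

```python
# Hypothetical example of one record (values are made up for illustration;
# only the field names are taken from the dataset card).
example_record = {
    "audio": "recordings/spk_042_task_007.wav",  # typically decoded to a waveform on load
    "audio_id": "spk_042_task_007",
    "speaker_id": "spk_042",                     # anonymized speaker identifier
    "filename": "original_recording_007.wav",
    "task_id": "task_007",
    "transcription": "Bon dia, com estàs?",
    "clean_transcription": "bon dia com estàs?",  # lowercased, punctuation stripped, '?' kept
    "original_duration": 6.8,                    # seconds, before preprocessing
    "trimmed_duration": 4.8,                     # seconds, after the 2 s end trim
    "category": "clean",                         # one of: clean, duplicate, over_threshold
    "reason": None,                              # extra detail for non-clean recordings
}
```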

Data Collection

The data was collected using a web-based recording platform adapted from Google's Project Euphonia. Participants recorded themselves reading prompts displayed on the screen.

Preprocessing

  • Each recording has 2 seconds trimmed from the end to remove trailing silence
  • Duplicate recordings (same speaker, same task) were identified and marked
  • Recordings over 10 seconds were flagged for review
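The preprocessing steps above can be sketched in plain Python. This is a minimal illustration, not the actual pipeline; the function and constant names are ours:

```python
SAMPLE_RATE = 16_000   # from the dataset card
TRIM_SECONDS = 2.0     # fixed end trim described above
MAX_SECONDS = 10.0     # recordings longer than this are flagged for review

def trim_end(samples, sr=SAMPLE_RATE, seconds=TRIM_SECONDS):
    """Drop a fixed number of samples (seconds * sr) from the end of a waveform."""
    n = int(seconds * sr)
    return samples[:-n] if n < len(samples) else samples[:0]

def categorize(records):
    """Mark each record as clean / duplicate / over_threshold, mirroring the steps above.

    A duplicate is a repeated (speaker_id, task_id) pair; over_threshold
    means the trimmed duration exceeds MAX_SECONDS.
    """
    seen = set()
    for rec in records:
        key = (rec["speaker_id"], rec["task_id"])
        if key in seen:
            rec["category"] = "duplicate"
        elif rec["trimmed_duration"] > MAX_SECONDS:
            rec["category"] = "over_threshold"
        else:
            rec["category"] = "clean"
        seen.add(key)
    return records
```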

Data Splits

This is a test upload with 10 samples per speaker. Only clean recordings (no duplicates or over-threshold recordings) are included.

Ethical Considerations

  • All participants provided informed consent
  • Data is anonymized (speaker IDs do not contain personally identifiable information)
  • The dataset complies with GDPR regulations
  • This dataset should be used to improve accessibility technology for people with speech disorders
  • Currently, speaker metadata is not provided, to prevent re-identification

Citation

If you use this dataset, please cite:

[Citation information to be added]

Contact

For questions or access requests, please contact gr.clic@ub.edu

License

This dataset is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).
