dataset_info:
  features:
    - name: id
      dtype: string
    - name: audio
      dtype: audio
    - name: text
      dtype: string
    - name: start
      dtype: float64
    - name: end
      dtype: float64
    - name: duration
      dtype: float64
  splits:
    - name: train
      num_bytes: 117998725522.312
      num_examples: 48214
  download_size: 118730395064
  dataset_size: 117998725522.312
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: other
language:
  - en
task_categories:
  - automatic-speech-recognition
pretty_name: DASS2019_NLP

Dataset Card for DASS2019_NLP

This dataset contains audio and transcript content from DASS2019, the manually transcribed version of the Digital Archive of Southern Speech. It may be suitable for speech-related NLP processing, modelling, and fine-tuning tasks.

Dataset Details

DASS (Kretzschmar et al. 2012) comprises dialectological interviews with 64 informants conducted between 1968 and 1983; it is a subset of the larger Linguistic Atlas of the Gulf States (LAGS, Pederson et al. 1986–1992). DASS2019 (Kretzschmar et al. 2019) is a manually transcribed and time-aligned version of DASS, produced in the years 2016–2019 in the context of an NSF grant. This DASS2019_NLP dataset was created by Steven Coats from the DASS2019 data hosted at the University of Georgia (https://www.lap.uga.edu/Projects/DASS2019).

Dataset Description

The dataset comprises 344.04 hours of transcribed speech and 3,084,208 word tokens.
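These totals are consistent with the segment statistics reported below; a quick arithmetic check:

```python
# Sanity check: 48,214 segments with a mean duration of 25.69 s
# should total roughly the reported 344.04 hours of speech.
num_segments = 48_214
mean_duration_s = 25.69

total_hours = num_segments * mean_duration_s / 3600
print(f"{total_hours:.2f} hours")  # ~344.06 hours, matching the reported total
```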

To process DASS2019, the following steps were undertaken:

  • The 408 XML transcript files for the recordings were parsed for speaker, speech turn, turn start and end times, transcript text, and the associated audio file.

  • Consecutive turns were then iteratively combined into segments not exceeding 30 seconds by aggregating adjacent speaker turns.

  • The resulting time boundaries were used to segment the audio recordings.

  • This procedure resulted in 48,214 labeled audio segments with a mean duration of 25.69 seconds.

  • All segments were extracted according to the parsed timestamps and resampled to 16 kHz.

  • DASS2019 annotation codes were removed from the transcript text:

    • The annotation #, used to enclose overlapping speech, was removed
    • The annotations {X} (unintelligible), {NS} (non-speech such as phone ringing or dog barking), {NW} (non-word, such as cough), and {C: comment} were removed, including any additional annotation within the corresponding brackets
    • For the annotation {D}, which marks a transcription the transcriber considered doubtful, the curly brackets and "D:" were removed but the doubtful transcription itself was kept; "{D: tobacco shed}" thus became "tobacco shed"
    • The code {B}, indicating that a beep had been inserted into the audio to mask personal information such as a name or address, was replaced with "[beep]"
    • Transcript turns that contained no content after this filtering were removed, leaving 284,207 speech turns with corresponding audio files
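The annotation-removal rules above can be sketched in Python. This is an illustrative reimplementation of the described rules, not the script actually used to prepare the dataset:

```python
import re

def clean_dass_annotations(text: str) -> str:
    """Apply the DASS2019 annotation-removal rules described above (illustrative)."""
    # Remove the # markers that enclose overlapping speech (the speech itself is kept)
    text = text.replace("#", "")
    # Remove {X}, {NS}, {NW}, and {C: comment} codes, including any additional
    # annotation inside the curly brackets
    text = re.sub(r"\{(?:X|NS|NW|C)[^}]*\}", "", text)
    # For {D: ...} (doubtful transcription), drop the wrapper but keep the text
    text = re.sub(r"\{D:\s*([^}]*)\}", r"\1", text)
    # Replace {B} (a beep masking personal information) with "[beep]"
    text = re.sub(r"\{B[^}]*\}", "[beep]", text)
    # Normalize whitespace left behind by the removals
    return re.sub(r"\s+", " ", text).strip()

print(clean_dass_annotations("we had a {D: tobacco shed} out back {NS}"))
# -> "we had a tobacco shed out back"
```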
  • Curated by: Steven Coats

  • Language: English (Southern American English)

  • License: This dataset is derived from materials provided by the Linguistic Atlas Project (LAP). Use, copying, and redistribution are permitted subject to the original LAP terms available at https://www.lap.uga.edu/Projects/DASS2019/readme_DASS2019.txt. No additional rights are granted by this repository.

Uses

```python
from datasets import load_dataset, DatasetDict

# Downloads ~119 GB; pass streaming=True to load_dataset to explore without the full download
dataset = load_dataset("stcoats/DASS2019_NLP")

# The dataset ships as a single "train" split; carve out 80/10/10 train/test/validation splits
train_test_split = dataset["train"].train_test_split(test_size=0.2, seed=42)
test_validation_split = train_test_split["test"].train_test_split(test_size=0.5, seed=42)

splits = DatasetDict({
    "train": train_test_split["train"],
    "test": test_validation_split["test"],
    "validation": test_validation_split["train"],
})

# (... further tasks, such as training or fine-tuning a model)
```

Direct Use

Training and fine-tuning automatic speech recognition models

Dataset Structure

Each record contains the fields "id" (a unique identifier for the segment), "audio" (the corresponding .wav file), "text" (the transcribed speech in the segment), "start" and "end" (the segment boundaries within the source recording, in seconds), and "duration" (the length of the .wav file in seconds).
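A single record can be pictured as follows. This is a hypothetical example: the field names match the dataset, but all values are invented, and the "audio" field is decoded by the datasets library into a waveform array plus sampling rate:

```python
# Hypothetical example record; field names match the dataset, values are illustrative
record = {
    "id": "segment_00001",            # unique segment identifier (invented value)
    "audio": {                        # decoded by the datasets Audio feature
        "array": [0.0, 0.01, -0.02],  # waveform samples (truncated here)
        "sampling_rate": 16_000,      # all segments are resampled to 16 kHz
    },
    "text": "we had a tobacco shed out back",
    "start": 120.5,                   # segment start within the source recording (s)
    "end": 146.2,                     # segment end (s)
    "duration": 25.7,                 # length of the .wav file in seconds
}

# duration should equal end - start for a correctly cut segment
assert abs((record["end"] - record["start"]) - record["duration"]) < 0.05
```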

Dataset Creation

Curation Rationale

The dataset can be used to train and fine-tune models for ASR of legacy interview materials, including recordings of other Linguistic Atlas Project data.

Source Data

Interviews with informants in the US South, conducted from 1968–1983 in eight US states: Texas, Louisiana, Arkansas, Mississippi, Tennessee, Alabama, Georgia, and Florida. The data was originally collected by fieldworkers in the context of the Linguistic Atlas of the Gulf States (Pederson et al. 1986–1992).

Data Collection and Processing

Interviews were recorded on magnetic audio tapes, which were digitized from 2005–2009 and processed from 2008–2011. Manual transcription was undertaken from 2016–2019 by undergraduate student workers at the University of Georgia, Athens, Georgia, USA.

Who are the source data producers?

See the information at https://www.lap.uga.edu/Projects/DASS2019.

Personal and Sensitive Information

Personal information such as names and addresses was manually masked with beeps during the original digitization of the LAGS data, conducted from 2007–2011. The transcripts in this dataset contain "[beep]" for these segments.
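If masked segments need to be located, e.g. to exclude them from a test set, the marker can be matched directly. A minimal sketch (the commented filter call assumes the dataset has been loaded with the datasets library as shown under Uses):

```python
def contains_beep(text: str) -> bool:
    """True if the transcript contains a privacy-masking beep marker."""
    return "[beep]" in text

# With the datasets library, e.g.:
# dataset = dataset.filter(lambda ex: not contains_beep(ex["text"]))
print(contains_beep("my name is [beep] and I live on [beep] street"))  # True
```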

Citation

BibTeX:

```bibtex
@misc{steven_coats_2026,
    author    = {Steven Coats},
    title     = {DASS2019_NLP},
    year      = 2026,
    url       = {https://huggingface.co/datasets/stcoats/DASS2019_NLP},
    doi       = {10.57967/hf/7841},
    publisher = {Hugging Face}
}
```

APA:

Coats, Steven. (2026). DASS2019_NLP Dataset, Version 1.0. Hugging Face Hub. https://huggingface.co/datasets/stcoats/DASS2019_NLP.

See also

https://huggingface.co/stcoats/whisper-large-v3-DASS2019-ct2, a whisper-large-v3 model fine-tuned on this dataset.

More Information

DASS2019 should be cited as:

Kretzschmar, William A. Jr., Margaret E. L. Renwick, Lisa M. Lipani, Michael L. Olsen, Rachel M. Olsen, Yuanming Shi, and Joseph A. Stanley. (2019) Transcriptions of the Digital Archive of Southern Speech. Linguistic Atlas Project, University of Georgia. http://www.lap.uga.edu/Projects/DASS2019/

DASS should be cited as:

Kretzschmar, William A. Jr., Paulina Bounds, Jacqueline Hettel, Steven Coats, Lee Pederson, Lisa Lena Opas-Hänninen, Ilkka Juuso, and Tapio Seppänen. (2012). Digital Archive of Southern Speech. LDC2012S03. Philadelphia: Linguistic Data Consortium. https://doi.org/10.35111/5bnt-r659

LAGS should be cited as:

Pederson, Lee, Susan L. McDaniel, and Carol M. Adams, eds. (1986–92). Linguistic Atlas of the Gulf States. 7 vols. Athens: University of Georgia Press.

Dataset Card Author

Steven Coats

Dataset Card Contact

@stcoats