---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: audio
    dtype: audio
  - name: text
    dtype: string
  - name: start
    dtype: float64
  - name: end
    dtype: float64
  - name: duration
    dtype: float64
  splits:
  - name: train
    num_bytes: 117998725522.312
    num_examples: 48214
  download_size: 118730395064
  dataset_size: 117998725522.312
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: other
language:
- en
task_categories:
- automatic-speech-recognition
pretty_name: DASS2019_NLP
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/64266673c71d90951ea6a12d/S_nIHyojiRN0sNmrwY_6L.png" alt="Image" width="500"/>

# Dataset Card for DASS2019_NLP

<!-- Provide a quick summary of the dataset. -->

This dataset contains audio and transcript content from DASS2019, the manually transcribed version of the *Digital Archive of Southern Speech*.
It may be suitable for speech-related NLP processing, modelling, and fine-tuning tasks.


## Dataset Details

DASS (Kretzschmar et al. 2012) comprises dialectological interviews with 64 informants conducted between 1968 and 1983; it is a subset of the larger *Linguistic Atlas of the Gulf States* (LAGS, Pederson et al. 1986–1992). 
DASS2019 (Kretzschmar et al. 2019) is a manually transcribed and time-aligned version of DASS, produced in the years 2016–2019 in the context of an NSF grant.
This DASS2019_NLP dataset was created by [Steven Coats](https://cc.oulu.fi/~scoats) from the DASS2019 data hosted at the University of Georgia (https://www.lap.uga.edu/Projects/DASS2019).  

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

The dataset comprises 344.04 hours of transcribed speech and 3,084,208 word tokens.

To process DASS2019, the following steps were undertaken:

- The 408 XML transcript files for the recordings were parsed for speaker, speech turn, turn start and end times, transcript text, and the associated audio file.
- Consecutive speaker turns were then iteratively combined into segments not exceeding 30 seconds.
- The resulting time boundaries were used to segment the audio recordings.
- This procedure resulted in 48,214 labeled audio segments with a mean duration of 25.69 seconds.
- All segments were extracted according to the parsed timestamps and resampled to 16 kHz.
- DASS2019 annotation codes were removed from the transcript text:
  - The annotation #, used to enclose overlapping speech, was removed.
  - The annotations {X} (unintelligible), {NS} (non-speech, such as a phone ringing or a dog barking), {NW} (non-word, such as a cough), and {C: comment} were removed, along with any additional annotation within the corresponding brackets.
  - For the annotation {D}, which marks a transcription the transcriber considered doubtful, the curly brackets and "D:" were removed but the doubtful transcription itself was kept; "{D: tobacco shed}" was changed to "tobacco shed".
  - The code {B}, indicating that a beep had been inserted into the audio to mask personal information such as a name or address, was transformed to "[beep]".
- Transcript turns that contained no content after this filtering were removed. This resulted in 284,207 speech turns with corresponding audio files.
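The aggregation and annotation-cleanup steps above can be sketched in Python. This is an illustrative reconstruction, not the actual processing script: the greedy aggregation rule, the `(start, end, text)` tuple layout, and the regular expressions are assumptions based on the description above.

```python
import re


def aggregate_turns(turns, max_seconds=30.0):
    """Greedily combine consecutive speaker turns into segments of at most
    max_seconds. Each turn is a (start, end, text) tuple; this layout is an
    assumption, not the actual DASS2019 XML schema."""
    segments, cur_start, cur_end, cur_text = [], None, None, []
    for start, end, text in turns:
        if cur_start is not None and end - cur_start <= max_seconds:
            # Extending the current segment keeps it within the limit
            cur_end, cur_text = end, cur_text + [text]
        else:
            if cur_start is not None:
                segments.append((cur_start, cur_end, " ".join(cur_text)))
            cur_start, cur_end, cur_text = start, end, [text]
    if cur_start is not None:
        segments.append((cur_start, cur_end, " ".join(cur_text)))
    return segments


def clean_dass_text(text):
    """Apply the annotation-code cleanup rules described above."""
    text = text.replace("#", "")                        # overlap marker
    text = re.sub(r"\{(?:X|NS|NW|C)[^}]*\}", "", text)  # drop with contents
    text = re.sub(r"\{D:\s*([^}]*)\}", r"\1", text)     # keep doubtful text
    text = text.replace("{B}", "[beep]")                # beep marker
    return re.sub(r"\s+", " ", text).strip()            # tidy whitespace
```

For example, `clean_dass_text("{D: tobacco shed}")` returns `"tobacco shed"`.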
 
- **Curated by:** Steven Coats
- **Funded by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language:** English, Southern American English
- **License:** This dataset is derived from materials provided by the Linguistic Atlas Project (LAP). Use, copying, and redistribution are permitted subject to the original LAP terms available at https://www.lap.uga.edu/Projects/DASS2019/readme_DASS2019.txt. No additional rights are granted by this repository.


### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [DASS2019](https://www.lap.uga.edu/Projects/DASS2019)

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

```python
from datasets import load_dataset, DatasetDict
dataset = load_dataset("stcoats/DASS2019_NLP")

train_test_split = dataset.train_test_split(test_size=0.2, seed=42)
test_validation_split = train_test_split["test"].train_test_split(test_size=0.5, seed=42)

splits = DatasetDict({
    "train": train_test_split["train"],
    "test": test_validation_split["test"],
    "validation": test_validation_split["train"],
})
```
The resulting splits can then be used for further tasks, such as training or fine-tuning a model.

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

Training and fine-tuning automatic speech recognition models


## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

The dataset fields are:

- "id": a unique identifier for the segment
- "audio": the corresponding .wav file
- "text": the transcribed speech in the segment
- "start" and "end": the start and end times of the segment
- "duration": the duration of the .wav file in seconds
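As a quick sanity check, the "duration" field should match "end" minus "start" for a segment. A minimal sketch with a made-up record (the field values below are illustrative, not taken from the dataset):

```python
# Hypothetical record mirroring the dataset's non-audio fields
record = {"id": "seg_0001", "text": "tobacco shed",
          "start": 12.5, "end": 38.19, "duration": 25.69}

# "duration" is expected to agree with the segment boundaries
assert abs((record["end"] - record["start"]) - record["duration"]) < 1e-6
```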

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

The dataset can be used to train and fine-tune models for ASR of legacy interview materials, including recordings of other Linguistic Atlas Project data.

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

Interviews with informants in the US South, conducted between 1968 and 1983 in eight US states: Texas, Louisiana, Arkansas, Mississippi, Tennessee, Alabama, Georgia, and Florida. The data was originally collected by fieldworkers in the context of the Linguistic Atlas of the Gulf States (Pederson et al. 1986–1992).

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

Interviews were recorded on magnetic audio tapes, which were digitized from 2005–2009 and processed from 2008–2011. Manual transcription was undertaken from 2016–2019 by undergraduate student workers at the University of Georgia in Athens, Georgia, USA.

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

See the information at https://www.lap.uga.edu/Projects/DASS2019. 

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

Personal information such as names and addresses was manually masked with beeps during the original digitization of the LAGS data conducted from 2007–2011. The transcripts in this dataset contain "[beep]" for these segments.


## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```latex
@misc{steven_coats_2026,
	author       = { Steven Coats },
	title        = { DASS2019_NLP },
	year         = 2026,
	url          = { https://huggingface.co/datasets/stcoats/DASS2019_NLP },
	doi          = { 10.57967/hf/7841 },
	publisher    = { Hugging Face }
}
```

**APA:**

Coats, Steven. (2026). DASS2019_NLP Dataset, Version 1.0. Hugging Face Hub. https://huggingface.co/datasets/stcoats/DASS2019_NLP. 

## See also

https://huggingface.co/stcoats/whisper-large-v3-DASS2019-ct2, a whisper-large-v3 model fine-tuned on this dataset.

## More Information

DASS2019 should be cited as:

Kretzschmar, William A. Jr., Margaret E. L. Renwick, Lisa M. Lipani, Michael L. Olsen, Rachel M. Olsen, Yuanming Shi, and Joseph A. Stanley. (2019) Transcriptions of the Digital Archive of Southern Speech. Linguistic Atlas Project, University of Georgia. http://www.lap.uga.edu/Projects/DASS2019/

DASS should be cited as:

Kretzschmar, William A. Jr., Paulina Bounds, Jacqueline Hettel, Steven Coats, Lee Pederson, Lisa Lena Opas-Hänninen, Ilkka Juuso, and Tapio Seppänen. (2012). Digital Archive of Southern Speech. LDC2012S03. Philadelphia: Linguistic Data Consortium. https://doi.org/10.35111/5bnt-r659

LAGS should be cited as:

Pederson, Lee, Susan L. McDaniel, and Carol M. Adams, eds. (1986–92). Linguistic Atlas of the Gulf States. 7 vols. Athens: University of Georgia Press.

## Dataset Card Author

Steven Coats

## Dataset Card Contact

@stcoats