---
language:
- en
license: other
task_categories:
- audio-classification
- text-to-audio
tags:
- audio-retrieval
- audio-captioning
- DCASE
- CLAP
- contrastive-learning
pretty_name: Clotho Development Subset
size_categories:
- 1K<n<10K
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: file_name
    dtype: string
  - name: audio
    dtype:
      audio:
        sampling_rate: 44100
  - name: caption_1
    dtype: string
  - name: caption_2
    dtype: string
  - name: caption_3
    dtype: string
  - name: caption_4
    dtype: string
  - name: caption_5
    dtype: string
  - name: keywords
    dtype: string
  - name: sound_id
    dtype: string
  - name: sound_link
    dtype: string
  - name: start_end_samples
    dtype: string
  - name: manufacturer
    dtype: string
  - name: license
    dtype: string
  splits:
  - name: train
    num_bytes: 1817512354
    num_examples: 907
  - name: test
    num_bytes: 452928261
    num_examples: 227
  download_size: 2184813586
  dataset_size: 2270440615
---

# Clotho Development Subset

A sampled subset (~2.3 GB) of the [Clotho v2.1](https://zenodo.org/record/3490684) development split, packaged for quick experimentation with audio-text retrieval pipelines.

## 📋 Dataset Description

This dataset is a convenience subset of the **Clotho** audio captioning dataset, created for rapid prototyping and testing of audio-text retrieval models (e.g., CLAP fine-tuning) on limited compute.

- **Source**: Clotho v2.1 (development split)
- **Original Authors**: K. Drossos, S. Lipping, T. Virtanen
- **Original Paper**: [Clotho: An Audio Captioning Dataset](https://arxiv.org/abs/1910.09387)

## 📊 Dataset Structure

### Splits

| Split | Samples | Description |
|-------|---------|-------------|
| train | 907     | Training set (80%) |
| test  | 227     | Test set (20%) |

### Features

| Column | Type | Description |
|--------|------|-------------|
| `file_name` | `string` | Original filename from Clotho |
| `audio` | `Audio` | Audio waveform, 44.1kHz |
| `caption_1` | `string` | Human-written caption #1 |
| `caption_2` | `string` | Human-written caption #2 |
| `caption_3` | `string` | Human-written caption #3 |
| `caption_4` | `string` | Human-written caption #4 |
| `caption_5` | `string` | Human-written caption #5 |
| `keywords` | `string` | Keywords from the Clotho/Freesound metadata |
| `sound_id` | `string` | Freesound sound ID |
| `sound_link` | `string` | URL of the source Freesound page |
| `start_end_samples` | `string` | Start/end sample offsets of the excerpt in the source recording |
| `manufacturer` | `string` | Freesound uploader, as named in the Clotho metadata |
| `license` | `string` | Per-clip license of the source recording |
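
Each clip carries five parallel captions, so contrastive training setups usually flatten them into (clip, caption) pairs. A minimal sketch of that flattening (the helper name and toy row are illustrative, not part of this dataset's API):

```python
# Flatten one example's five captions into (clip_index, caption) pairs,
# as is common when training contrastive audio-text models on Clotho.
def caption_pairs(example: dict, idx: int) -> list:
    return [(idx, example[f"caption_{k}"]) for k in range(1, 6)]

# Toy row standing in for one dataset example.
row = {f"caption_{k}": f"caption text {k}" for k in range(1, 6)}
pairs = caption_pairs(row, 0)
print(len(pairs))  # 5
```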

### Audio Details

- **Duration**: 15–30 seconds per clip
- **Sample Rate**: 44,100 Hz
- **Channels**: Mono
- **Format**: WAV (stored as Parquet/Arrow on Hub)
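
Note that some models (e.g. the LAION CLAP checkpoints) expect 48 kHz input, while Clotho audio is stored at 44.1 kHz; `ds.cast_column("audio", Audio(sampling_rate=48_000))` resamples on access. The idea can also be sketched offline with plain NumPy, using naive linear interpolation on a synthetic clip (for illustration only; prefer a proper polyphase resampler for training):

```python
import numpy as np

# Synthetic 1-second mono clip at Clotho's native 44.1 kHz.
sr_in, sr_out = 44_100, 48_000
t_in = np.arange(sr_in) / sr_in
wav = np.sin(2 * np.pi * 440 * t_in).astype(np.float32)

# Naive linear-interpolation resample to 48 kHz; adequate for quick
# prototyping, though a polyphase resampler is preferable in practice.
t_out = np.arange(sr_out) / sr_out
resampled = np.interp(t_out, t_in, wav).astype(np.float32)
print(resampled.shape)  # (48000,)
```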

## 🚀 Usage

```python
from datasets import load_dataset

ds = load_dataset("your-username/clotho-dev-sample")

# Access a sample
sample = ds["train"][0]
print(sample["caption_1"])   # e.g. "A dog barks in the distance"
print(sample["audio"])       # {'array': array([...]), 'sampling_rate': 44100}
```

### With CLAP

```python
from datasets import load_dataset
from transformers import ClapProcessor, ClapModel

ds = load_dataset("your-username/clotho-dev-sample")

processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")
model = ClapModel.from_pretrained("laion/clap-htsat-unfused")

sample = ds["train"][0]
inputs = processor(
    audios=sample["audio"]["array"],
    sampling_rate=sample["audio"]["sampling_rate"],
    text=sample["caption_1"],
    return_tensors="pt",
    padding=True,
)
outputs = model(**inputs)
```
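
For retrieval, the usual metric is cosine similarity between audio and text embeddings. A minimal sketch with random stand-in embeddings (in a real pipeline they would come from `model.get_audio_features(...)` and `model.get_text_features(...)`):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in embeddings; replace with CLAP audio/text features in practice.
audio_embeds = rng.standard_normal((4, 512))
text_embeds = rng.standard_normal((4, 512))

# L2-normalize so a matrix product gives pairwise cosine similarity.
audio_embeds /= np.linalg.norm(audio_embeds, axis=-1, keepdims=True)
text_embeds /= np.linalg.norm(text_embeds, axis=-1, keepdims=True)
sim = text_embeds @ audio_embeds.T  # (captions, clips)

# Text-to-audio retrieval: rank clips per caption; with matched pairs on
# the diagonal, Recall@1 checks whether the top-ranked clip is correct.
ranks = np.argsort(-sim, axis=-1)
recall_at_1 = float(np.mean(ranks[:, 0] == np.arange(4)))
print(sim.shape)  # (4, 4)
```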

## ⚠️ Important Notes

- This is a **subset** (~43%) of the full Clotho development split, sampled randomly with `seed=42`
- For official benchmarking, use the full Clotho dataset from [Zenodo](https://zenodo.org/record/3490684)
- This subset is intended for **pipeline testing and prototyping only**
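
The seeded sampling described above can be sketched as follows (hypothetical clip names; the original sampling script is not part of this card, and 3,839 is the size of the full Clotho v2.1 development split):

```python
import random

# Seeded random subsampling, per the card's stated seed=42.
file_names = [f"clip_{i:04d}.wav" for i in range(3839)]
rng = random.Random(42)
subset = rng.sample(file_names, k=1134)  # 907 train + 227 test
train, test = subset[:907], subset[907:]
print(len(train), len(test))  # 907 227
```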

## 📄 Citation

If you use this dataset, please cite the original Clotho paper:

```bibtex
@inproceedings{drossos2020clotho,
  title={Clotho: An Audio Captioning Dataset},
  author={Drossos, Konstantinos and Lipping, Samuel and Virtanen, Tuomas},
  booktitle={ICASSP 2020 - IEEE International Conference on Acoustics, Speech and Signal Processing},
  pages={736--740},
  year={2020},
  organization={IEEE}
}
```

## 🏷️ License

This dataset follows the original Clotho license. Audio clips are sourced from [Freesound](https://freesound.org/) under Creative Commons licenses. Please refer to the [original dataset](https://zenodo.org/record/3490684) for full license details.