Tasks: Image Classification
Modalities: Image
Languages: English
Size: 10K<n<100K
ArXiv: 2511.20544
Libraries: FiftyOne
# Dataset Card for New York Smells

![image/png](https://huggingface.co/datasets/harpreetsahota/NYC_Smells/resolve/main/nyc_smells.gif)

This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 20000 samples.

## Installation

If you haven't already, install FiftyOne, then load the dataset directly from the Hugging Face Hub:
```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub

# Load the dataset
# Note: other available arguments include 'max_samples', etc
dataset = load_from_hub("Voxel51/NYC_Smells")

# Launch the App
session = fo.launch_app(dataset)
```
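For a quick first pass, you can cap how much is downloaded; `max_samples` (mentioned in the comment above) is one such optional argument:

```python
# Load only a small preview of the dataset
dataset = load_from_hub("Voxel51/NYC_Smells", max_samples=100)
```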
### Dataset Description

New York Smells is a large-scale multimodal dataset pairing visual imagery with electronic nose (e-nose) olfactory signals captured "in the wild" throughout New York City. The dataset enables cross-modal learning between smell and sight, addressing a critical gap in machine perception research: olfaction remains largely inaccessible to machines compared to vision, sound, and touch.

The dataset contains 7,000 smell-image pairs from 3,500 distinct objects across 60 recording sessions in diverse indoor and outdoor environments (approximately 70× more objects than existing olfactory datasets).

- **Curated by:** Ege Ozguroglu, Junbang Liang, Ruoshi Liu, Mia Chiquier, Michael DeTienne, Wesley Wei Qian, Alexandra Horowitz, Andrew Owens, Carl Vondrick
- **Institutions:** Columbia University, Cornell University, Osmo Labs
- **Funded by:** Not specified
- **License:** Not specified (check the project website for updates)

### Dataset Sources

- **Repository:** https://smell.cs.columbia.edu/
- **Paper:** https://arxiv.org/abs/2511.20544
- **Data Download:** https://smell.cs.columbia.edu/static/smell-dataset.tar.gz (27 GB)
- **Hardware Rig Specs:** https://smell.cs.columbia.edu/static/hardware.zip
## Uses

### Direct Use

- **Cross-modal smell-to-image retrieval:** Given a query smell, retrieve matching images in embedding space (a toy unimodal version is sketched after this list)
- **Scene/object/material recognition from smell alone:** Classify scenes, objects, and materials using only olfactory signals
- **Fine-grained olfactory discrimination:** Distinguish between similar objects (e.g., different grass species)
- **Olfactory representation learning:** Train general-purpose smell embeddings using visual supervision
- **Multimodal sensor fusion research:** Combine RGB, depth, and chemical sensor modalities
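The paper's retrieval setting relies on a learned smell encoder aligned with image embeddings. As a minimal stand-in, the hypothetical sketch below ranks samples by cosine similarity of their pre-computed `smellprint_vector` fields; no trained encoder is involved, and it assumes the field is populated on every sample of the default slice:

```python
import numpy as np

# Toy smell-similarity retrieval over the default group slice
vecs = np.stack([np.asarray(s["smellprint_vector"]) for s in dataset])
vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

query_idx = 0                    # any sample index can serve as the query
scores = vecs @ vecs[query_idx]  # cosine similarity to the query smell
top5 = np.argsort(-scores)[1:6]  # nearest neighbors, excluding the query itself
print(top5)
```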
### Out-of-Scope Use

- Medical diagnosis or health-related smell detection (the dataset was not collected for clinical purposes)
- Hazardous material detection (not designed for safety-critical applications)
- Individual identification or tracking via smell
- Production deployment without additional validation on the target domain
## Dataset Structure

### FiftyOne Dataset Organization

The dataset is loaded as a **grouped dataset** with three slices per sample (the sketch after the table shows how to switch between them):

| Slice | Description |
|-------|-------------|
| `rs_rgb` (default) | RealSense RGB image with depth heatmap overlay |
| `rgb` | iPhone RGB image |
| `olfaction` | Olfaction diff visualization (sample − baseline heatmap) |
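Working with the slices uses FiftyOne's standard grouped-dataset API; a minimal sketch, assuming the slice names from the table above:

```python
# List the available group slices and switch the active one
print(dataset.group_slices)        # ['rs_rgb', 'rgb', 'olfaction']

dataset.group_slice = "olfaction"  # iterate/visualize the olfaction slice

# Flatten chosen slices into a single, ungrouped view
rgb_view = dataset.select_group_slices("rgb")
print(len(rgb_view))
```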
### Sample Fields

| Field | Type | Description |
|-------|------|-------------|
| `clip_features` | VectorField (768,) | Pre-computed CLIP embeddings |
| `smellprint_vector` | VectorField (32,) | Normalized 32-channel smell signature |
| `olfaction_diff` | VectorField (32,) | Max-pooled olfaction diff (sample − baseline) |
| `baseline_max` | VectorField (32,) | Max-pooled baseline (ambient) readings |
| `sample_max` | VectorField (32,) | Max-pooled sample (object) readings |
| `baseline_raw` | ArrayField (~17, 32) | Raw olfactory baseline time series |
| `sample_raw` | ArrayField (~17, 32) | Raw olfactory object time series |
| `location` | Classification | Recording location (e.g., "CV Lab Lounge") |
| `object_class` | Classification | Object index (numeric ID; no human-readable mapping yet) |
| `timestamp` | DateTimeField | Session timestamp |
| `session_id` | StringField | Session identifier (e.g., "2025-04-12_16-59-24") |
| `global_id` | StringField | Unique sample identifier |
| `temperature` | FloatField | Ambient temperature (°C) |
| `humidity` | FloatField | Ambient humidity (%) |
| `pid` | StringField | VOC concentration (PID sensor reading) |
| `depth_heatmap` | Heatmap | 16-bit depth map overlay (`rs_rgb` slice only) |
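A short, hypothetical walkthrough of reading these fields from one sample; the field names come from the table above, and the diff relation is the one the table describes:

```python
import numpy as np

sample = dataset.first()

print(sample["session_id"])              # e.g., "2025-04-12_16-59-24"
print(sample["location"].label)          # Classification label, e.g., "CV Lab Lounge"
print(len(sample["smellprint_vector"]))  # 32 channels

# Sanity check: olfaction_diff should match max-pooled sample minus baseline
diff = np.asarray(sample["sample_max"]) - np.asarray(sample["baseline_max"])
print(np.allclose(diff, np.asarray(sample["olfaction_diff"])))
```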
### Pre-computed Visualizations (Brain Keys)

| Brain Key | Embeddings | Description |
|-----------|------------|-------------|
| `clip_viz` | `clip_features` | UMAP of visual CLIP embeddings |
| `smellprint_viz` | `smellprint_vector` | UMAP of pre-computed smell fingerprints |
| `olfaction_diff_viz` | `olfaction_diff` | UMAP of the object's unique smell signature |
| `baseline_max_viz` | `baseline_max` | UMAP of the ambient environment smell |
| `sample_max_viz` | `sample_max` | UMAP of object + environment smell |
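These brain keys can be loaded with FiftyOne's standard brain-results API and explored in the App's Embeddings panel; a minimal sketch:

```python
import fiftyone as fo

# Load the pre-computed 2D UMAP points for one brain key
results = dataset.load_brain_results("smellprint_viz")
print(results.points.shape)  # (num_samples, 2)

# Or browse them interactively via the App's Embeddings panel
session = fo.launch_app(dataset)
```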
## Dataset Creation

### Curation Rationale

While olfaction is central to how animals perceive the world, this sensory modality remains largely inaccessible to machines. A key bottleneck is the lack of diverse, multimodal olfactory training data collected in natural settings: existing olfactory datasets are small and captured in controlled lab environments. New York Smells addresses this gap by providing large-scale, in-the-wild paired vision-olfaction data.

### Source Data

#### Data Collection and Processing

- **Sensor Hardware:** Cyranose 320 electronic nose with 32 polymer-composite chemoresistive sensors, mounted on a custom 3D-printed rig with an iPhone camera and an Intel RealSense depth sensor
- **Collection Method:** Researchers walked through various NYC locations capturing synchronized images and e-nose readings of odorant objects
- **Locations:** Parks, gyms, dining halls, libraries, streets, and other indoor/outdoor environments across New York City
- **Sessions:** 60 recording sessions
- **Samples:** 7,000 smell-image pairs from 3,500 distinct objects
- **Additional Sensors:** RGB-D, temperature, humidity, volatile organic compound (VOC) concentration
#### Olfactory Signal Format

- **Baseline:** ~17 timesteps × 32 sensors; ambient air reading taken before approaching the object
- **Sample:** ~17 timesteps × 32 sensors; reading taken while near the object
- **Smellprint:** 32-element vector; pre-computed, time-collapsed fingerprint
- **Values:** Normalized resistance ratios (ΔR/R₀); the sketch below shows how these pieces plausibly fit together
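A hypothetical reconstruction of the pooled fields from the raw time series. The max-over-time pooling is inferred from the field names, so treat it as an assumption rather than the authors' exact preprocessing:

```python
import numpy as np

sample = dataset.first()
baseline = np.asarray(sample["baseline_raw"])  # (~17, 32) ambient readings
reading = np.asarray(sample["sample_raw"])     # (~17, 32) near-object readings

baseline_max = baseline.max(axis=0)            # collapse time -> 32-dim vector
sample_max = reading.max(axis=0)
olfaction_diff = sample_max - baseline_max     # object smell minus ambient air
```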
#### Who are the source data producers?

The research team from Columbia University, Cornell University, and Osmo Labs collected all data in New York City.

### Annotations

#### Annotation process

- **Location labels:** Manually recorded during collection sessions
- **Object indices:** Assigned during collection (human-readable labels pending release)
- **Scene/object/material labels:** Generated via GPT-4o (release pending per the authors)
- **CLIP features:** Pre-computed by running a CLIP model on the RGB images

#### Who are the annotators?

The research team annotated location and object information. GPT-4o was used for scene/object/material labeling (pending release).
## Citation

### BibTeX

```bibtex
@article{ozguroglu2025smell,
  title={New York Smells: A Large Multimodal Dataset for Olfaction},
  author={Ozguroglu, Ege and Liang, Junbang and Liu, Ruoshi and Chiquier, Mia and DeTienne, Michael and Qian, Wesley Wei and Horowitz, Alexandra and Owens, Andrew and Vondrick, Carl},
  journal={arXiv preprint arXiv:2511.20544},
  year={2025}
}
```

### APA

Ozguroglu, E., Liang, J., Liu, R., Chiquier, M., DeTienne, M., Qian, W. W., Horowitz, A., Owens, A., & Vondrick, C. (2025). New York Smells: A Large Multimodal Dataset for Olfaction. *arXiv preprint arXiv:2511.20544*.