CB-Telemetry: A Cyberforest Bird Acoustic Monitoring Benchmark
CB-Telemetry is a license-aware benchmark for evaluating semantic fidelity in long-term bird acoustic monitoring. This Hugging Face dataset repository is the ML-facing distribution endpoint for double-blind review. The immutable v1 archive is preserved on Zenodo:
https://doi.org/10.5281/zenodo.19951359
Note for reviewers: The permanent DOI (10.5281/zenodo.19951359)
is currently reserved and will resolve upon camera-ready publication. During
the double-blind review phase, please access the frozen draft archive using
this anonymized secret link:
https://zenodo.org/records/19951359?preview=1&token=eyJhbGciOiJIUzUxMiJ9.eyJpZCI6IjgyMTMzMjc2LTQ1ZGYtNDM5MC1hZWQ0LTA0MDUwN2U5NTIzYyIsImRhdGEiOnt9LCJyYW5kb20iOiJjYjUxMzE1NDMwM2IxMDYzMjU3MTAzYzVhOTk0M2NjNCJ9.vFOFfG_HYwC1BDsesFQzwGOevxXbin5FaJ7y4nutnam61ch0HoQEdR4Jl8U4UnTDZA9ghw2ps2HOOfkdJx9z2g
Two-Tiered Reproducibility
CB-Telemetry separates the reviewer path from the encoder-regeneration path.
Tier 1: Minimal Evaluation Path (Frozen Artifact). This is the intended
reviewer path for reproducing the released tables. It is CPU-only and uses the
frozen feature tables included in this repository. It requires only standard
Python plus numpy, pandas, and scikit-learn; it does not require
TensorFlow, PyTorch, GPU drivers, or pretrained model downloads.
Tier 2: Full Encoder Regeneration Path (Advanced). This path is for researchers who want to reconstruct the complete Cyberforest source-audio set or evaluate regenerated Perch, BirdNET, or custom encoder features. It may require external model runtimes such as TensorFlow/LiteRT, encoder-specific packages, and approximately 60.95 GB of local source-audio storage.
What Is Hosted Here
This repository contains the lightweight ML-ready benchmark package:
- normalized 2012-2024 expert listening records from J-STAGE Data;
- frozen v1 scored manifests with 805 expert-overlap recording rows;
- paired Default and Strict-Clean split metadata;
- frozen Default and Strict-Clean feature tables;
- released RVQ/PQ/OPQ bottleneck feature tables for metric recomputation;
- refreshed representation, retrieval, metadata-control, and bottleneck baseline outputs;
- validation, smoke-test, and frozen-feature metric recomputation scripts;
- Croissant metadata with core and Responsible AI fields;
- license and citation metadata;
- 24 representative reviewer audio clips covering 5 sites and S/C/D behavior codes.
The full 805-recording source-audio reconstruction is approximately 60.95 GB and is not duplicated in this Hugging Face repository. Following NeurIPS E&D guidance for datasets larger than 4 GB, this repository includes representative audio clips for reviewer inspection and a manifest-driven downloader for reconstructing the complete source-audio set from Cyberforest URLs.
Reviewer Audio Viewer
The default Hugging Face viewer is configured for the reviewer audio sample in
audio_sample_wav/. These WAV files are HF-viewer-friendly transcodes of the
same 24 M4A reviewer clips retained in audio_sample/. The viewer split can be
loaded locally with:
from datasets import load_dataset
samples = load_dataset("audiofolder", data_dir="audio_sample_wav")
print(samples)
The root-level metadata.csv points to the same 24 clips with paths relative to
the repository root. The audio_sample_wav/metadata.csv file points to the WAV
clips relative to audio_sample_wav/, following the AudioFolder convention.
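The AudioFolder convention described above can be illustrated with a small self-contained sketch. The clip name, directory, and label columns below are invented for illustration; the real repository ships 24 reviewer clips with its own metadata columns.

```python
import csv
import os
import tempfile
import wave

# Build a tiny AudioFolder-style directory: metadata.csv sits next to
# the audio files, and its file_name column holds paths relative to
# that directory (as in audio_sample_wav/).
root = tempfile.mkdtemp()
clip = os.path.join(root, "clip_0001.wav")
with wave.open(clip, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(16000)
    w.writeframes(b"\x00\x00" * 1600)  # 0.1 s of silence

with open(os.path.join(root, "metadata.csv"), "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["file_name", "site", "behavior_code"])  # illustrative columns
    writer.writerow(["clip_0001.wav", "example_site", "S"])

# Read the metadata back the way AudioFolder-style loaders resolve it.
with open(os.path.join(root, "metadata.csv"), newline="") as f:
    rows = list(csv.DictReader(f))
print(rows[0]["file_name"])
```

The root-level metadata.csv follows the same pattern, except its file_name paths are relative to the repository root rather than to audio_sample_wav/.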
Tier 1 Minimal Evaluation Quickstart
After cloning or downloading this repository:
Linux / macOS:
python3 scripts/run_smoke_eval.py --root .
python3 scripts/validate_cb_telemetry.py --root . --write-report
Windows (PowerShell / CMD):
python scripts\run_smoke_eval.py --root .
python scripts\validate_cb_telemetry.py --root . --write-report
Install the broad-compatibility dependencies and recompute the released retrieval and shortcut-control tables:
python3 -m pip install -r requirements.txt
python3 scripts/run_release_evaluation.py --root . --bootstrap-samples 2000
For the direct dependency versions used in the local paper-artifact smoke
check, use requirements-strict.txt instead of requirements.txt.
The recomputation uses the continuous feature tables and the released
source-frozen bottleneck feature tables under features/bottlenecks/.
For a fast installation check, use a small bootstrap count:
python3 scripts/run_release_evaluation.py --root . --output-dir evaluation_runs/quick_check --bootstrap-samples 5
Windows equivalent:
python scripts\run_release_evaluation.py --root . --output-dir "evaluation_runs\quick_check" --bootstrap-samples 5
Tier 2 Optional Audio Reconstruction
To reconstruct the complete Default source-audio set:
python3 scripts/download_audio_recordings.py --root . --jobs 4
To reconstruct only the Strict-Clean source-audio subset:
python3 scripts/download_audio_recordings.py --root . --subset strict-clean --jobs 4
Downloaded source audio is written to:
audio_recordings/<site>/<date>/<filename>
This path convention matches the relative file paths used in the released feature tables, allowing users to regenerate features with alternative encoders and align them back to the benchmark manifests, splits, and labels.
Regenerating Perch, BirdNET, or custom encoder embeddings from downloaded audio is outside the minimal reviewer path. Those workflows depend on the selected encoder runtime and may involve TensorFlow/LiteRT, model downloads, or platform-dependent audio tooling.
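A minimal sketch of the alignment step described above, using pandas (already a Tier 1 dependency). The column names ("file_path", "split", "embedding_0") and the example rows are hypothetical, not the released schema; the point is that regenerated per-file features join back to the manifests on the shared audio_recordings/<site>/<date>/<filename> relative-path convention.

```python
import pandas as pd

# Hypothetical benchmark manifest rows keyed by relative audio path.
manifest = pd.DataFrame({
    "file_path": ["audio_recordings/siteA/2024-05-01/rec1.wav",
                  "audio_recordings/siteB/2024-05-02/rec2.wav"],
    "split": ["train", "test"],
})

# Hypothetical features regenerated with an alternative encoder,
# keyed by the same relative paths.
features = pd.DataFrame({
    "file_path": ["audio_recordings/siteA/2024-05-01/rec1.wav",
                  "audio_recordings/siteB/2024-05-02/rec2.wav"],
    "embedding_0": [0.12, -0.34],
})

# One-to-one join on the shared path convention recovers the
# manifest splits and labels for each regenerated feature row.
aligned = manifest.merge(features, on="file_path", how="inner", validate="1:1")
print(len(aligned))
```

The `validate="1:1"` check is a cheap guard against duplicated or missing recordings before recomputing any metrics.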
Zenodo Archive
The exact frozen v1 archive uploaded to Zenodo is also included under
archive/:
571e58758e174801ab1b3fe88df19528c967cf0b03b453facc51efa51a50c373 cb_telemetry_v1_expert_aligned.tar.gz
The Zenodo DOI for the immutable v1 artifact is:
10.5281/zenodo.19951359
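The published SHA256 for the archived tarball can be verified locally with the standard library; chunked reading keeps memory use flat for a multi-gigabyte file. This is a generic sketch, not a script shipped in the repository.

```python
import hashlib

# SHA256 digest listed for archive/cb_telemetry_v1_expert_aligned.tar.gz
EXPECTED = "571e58758e174801ab1b3fe88df19528c967cf0b03b453facc51efa51a50c373"

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks to avoid loading it into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# After cloning the repository:
# assert sha256_of("archive/cb_telemetry_v1_expert_aligned.tar.gz") == EXPECTED
```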
Provenance and Licensing
The combined CB-Telemetry package is distributed under CC BY-NC-SA 4.0.
- Expert listening records are derived from the J-STAGE Data source dataset by Mutsuyuki UETA, Reiko KUROSAWA, and Kaoru SAITO (2024), DOI: 10.57368/data.birdresearch.26499895, distributed under CC BY 4.0.
- The J-STAGE source dataset is associated with the original long-term Cyberforest bird monitoring article, DOI: 10.11211/birdresearch.20.A55.
- Cyberforest acoustic data and audio-derived artifacts are handled under CC BY-NC-SA 4.0 in this combined package.
- This repository does not relicense the underlying Cyberforest audio as CC BY 4.0 and does not support unrestricted commercial redistribution of Cyberforest audio.
See LICENSE, LICENSE_MATRIX.md, CITATION.cff, and croissant.json for
the machine-readable and component-level release metadata.
Scope and Limitations
CB-Telemetry v1 is intended for semantic retention evaluation, source-frozen classical bottleneck comparison, target retrieval diagnostics, archive-scope analysis, shortcut-control auditing, and artifact reproducibility review.
CB-Telemetry v1 is not intended for frame-level bird-event detection, all-day or all-season scored benchmark claims, full-year ecological trend inference from the scored subset, unrestricted commercial redistribution of Cyberforest audio, or claims that compressed features can replace source audio for all scientific uses.
J-STAGE records are expert listening records, not dense second-level onset or offset annotations. The scored snapshot favors construct validity over broad ecological coverage and is bounded to the released dawn-window, breeding-season-centered protocol.
Ethical and Societal Considerations
CB-Telemetry uses passive recordings of non-human wildlife and does not require animal handling, intervention, or playback. It is not a human-subject dataset, but live soundscape recordings can incidentally include non-target sounds such as human speech. The repository therefore emphasizes compact feature tables, representative reviewer clips, manifest-driven source-audio access, and local ethical requirements for audio reuse.
Positive impacts include more reproducible AI evaluation for biodiversity monitoring and better-informed conservation decision support. Negative impacts could arise if weak semantic readout or retrieval scores were over-interpreted as dense detections or population-scale trends, leading to flawed ecological assessments or misallocated conservation resources. CB-Telemetry mitigates this through claim-profile reporting, paired tracks, metadata controls, and explicit out-of-scope uses.
Platform and Hardware Notes
- Tier 1 frozen-feature evaluation is designed to be platform-independent on Linux, Windows, and macOS with Python 3.9+.
- The metric recomputation path is CPU-only and should run comfortably below 1 GB of RAM for the released 805-row artifact.
- When running on Windows, quote paths that contain spaces. The scripts write reports under qa/ and metric outputs under evaluation_runs/; run them from a directory where you have write permission.
- Feature extraction and encoder regeneration are outside the minimal reviewer path. Those advanced workflows can involve external dependencies such as TensorFlow/LiteRT for Perch or BirdNET, encoder-specific model downloads, and platform-dependent audio tooling.
Repository Layout
annotations/ normalized J-STAGE expert listening records
audio_sample/ 24 short M4A clips plus AudioFolder metadata
baselines/ refreshed public baseline outputs
features/ frozen Default and Strict-Clean feature tables
manifests/ scored snapshot, split, audio, and sample manifests
qa/ validation and build summaries
scripts/ smoke, validation, and source-audio download scripts
archive/ exact Zenodo tarball and SHA256 sidecar
croissant.json Croissant metadata with core and RAI fields
metadata.csv root-relative AudioFolder metadata for reviewer clips