MANIFEST - TonalityPrint Voice Dataset v1.0
===========================================

This manifest provides a complete inventory of all files included in the TonalityPrint Voice Dataset v1.0.

DATASET OVERVIEW
----------------

Version: 1.0.0
Release Date: January 24, 2026
DOI: https://doi.org/10.5281/zenodo.17913895
License: CC BY-NC 4.0
Total Audio Files: 144 WAV files
Total JSON Files: 144 individual JSON annotations
Total CSV Files: 144 individual CSVs + 1 combined CSV (ALL_TONALITY_DATA_COMBINED.csv)
Documentation Files: 13 files (4 root + 9 in documentation folder)
Total Dataset Files: 446 files (144 audio + 144 JSON + 145 CSV incl. combined + 13 documentation)
Recording Format: 16-bit PCM WAV (uncompressed)
Recording Source: 48kHz, 32-bit float (Audacity) → Exported as 16-bit PCM
Sample Rate: 48,000 Hz (48kHz)
Bit Depth: 16-bit
Channels: Mono (1 channel)
Speaker: Single speaker (Ronda Polhill)
Language: English (American)
Duration Range: 3-6 seconds per utterance
Total Duration: ~11 minutes 5 seconds
Annotation Method: Expert practitioner (perceptual assessment)
Annotation Completeness: 100% (all files fully annotated)
Quality Control: ~18.05% of corpus re-recorded after proprietary heuristic audit
FILE STRUCTURE
--------------

TonalityPrint_v1/
│
├── README.md          [ML Dataset Card - Primary documentation]
├── QUICK_START.txt    [4-step quick start guide]
├── LICENSE.txt        [CC BY-NC 4.0 License - Full legal text]
├── CITATION.cff       [Machine-readable citation metadata]
│
├── documentation/     [Technical reference documentation]
│   ├── CODEBOOK.md              [Variable definitions - All 23 CSV columns]
│   ├── METHODOLOGY.md           [Data collection & annotation procedures]
│   ├── MANIFEST.txt             [This file - Complete file inventory]
│   ├── annotations.txt          [Annotation guidelines and documentation]
│   ├── continuous_indices.txt   [Continuous intensity rating guidelines]
│   ├── scripts.txt              [Script documentation]
│   ├── speaker_profile.txt      [Speaker information and characteristics]
│   ├── tech_specs.txt           [Technical specifications]
│   └── transcripts.txt          [Transcript documentation]
│
├── audio/             [Audio recordings - 144 files]
│   ├── TPV1_B1_UTT1_S_Att_SP-Ronda.wav
│   ├── TPV1_B1_UTT1_S_Baseneutral_SP-Ronda.wav
│   ├── TPV1_B1_UTT1_S_Cogen_SP-Ronda.wav
│   ├── ... [141 more WAV files]
│   └── TPV1_B6_UTT18_S_Trus_SP-Ronda.wav
│
└── annotations/       [Annotation data - 289 files total]
    ├── json/          [Original JSON annotations - 144 files]
    │   ├── TPV1_B1_UTT1_S_Att_SP-Ronda.json
    │   ├── ... [143 more JSON files]
    │   └── TPV1_B6_UTT18_S_Trus_SP-Ronda.json
    │
    ├── csv/           [CSV format annotations - 144 files]
    │   ├── TPV1_B1_UTT1_S_Att_SP-Ronda.csv
    │   ├── ... [143 more CSV files]
    │   └── TPV1_B6_UTT18_S_Trus_SP-Ronda.csv
    │
    └── ALL_TONALITY_DATA_COMBINED.csv   [Combined dataset - All 144 rows in single file]
AUDIO FILES INVENTORY (144 total)
---------------------------------

Batch 1 (B1) - Utterances 1-3:
- TPV1_B1_UTT1_S_Att_SP-Ronda.wav
- TPV1_B1_UTT1_S_Baseneutral_SP-Ronda.wav
- TPV1_B1_UTT1_S_Cogen_SP-Ronda.wav
- TPV1_B1_UTT1_S_Emre_SP-Ronda.wav
- TPV1_B1_UTT1_S_Reci_affi_ambivalex_SP-Ronda.wav
- TPV1_B1_UTT1_S_Reci_affi_SP-Ronda.wav
- TPV1_B1_UTT1_S_Reci_SP-Ronda.wav
- TPV1_B1_UTT1_S_Trus_SP-Ronda.wav
- TPV1_B1_UTT2_S_Att_SP-Ronda.wav
- TPV1_B1_UTT2_S_Baseneutral_SP-Ronda.wav
- TPV1_B1_UTT2_S_Cogen_SP-Ronda.wav
- TPV1_B1_UTT2_S_Emre_SP-Ronda.wav
- TPV1_B1_UTT2_S_Reci_colla_ambivalex_SP-Ronda.wav
- TPV1_B1_UTT2_S_Reci_colla_SP-Ronda.wav
- TPV1_B1_UTT2_S_Reci_SP-Ronda.wav
- TPV1_B1_UTT2_S_Trus_SP-Ronda.wav
- TPV1_B1_UTT3_S_Att_SP-Ronda.wav
- TPV1_B1_UTT3_S_Baseneutral_SP-Ronda.wav
- TPV1_B1_UTT3_S_Cogen_SP-Ronda.wav
- TPV1_B1_UTT3_S_Emre_SP-Ronda.wav
- TPV1_B1_UTT3_S_Reci_SP-Ronda.wav
- TPV1_B1_UTT3_S_Trus_calm_ambivalex_SP-Ronda.wav
- TPV1_B1_UTT3_S_Trus_calm_SP-Ronda.wav
TonalityPrint Voice Dataset v1.0
A Contrast-Structured Voice Dataset for Exploring Functional Tonal Intent, Ambivalence, and Inference-Time Prosodic Alignment
DOWNLOAD DATASET FILES
⚠️ This Hugging Face repository contains DOCUMENTATION ONLY.
Download audio and annotation files from Zenodo (official source):
https://doi.org/10.5281/zenodo.17913895
Why Zenodo?
- Official DOI for academic citations
- Permanent archival storage
- Download statistics for grant reporting
- Academic credibility
Quick Download: DATACARD.zip (42.9 MB)
Overview
TonalityPrint is a specialized single-speaker speech corpus designed to support exploration of functional tonal intent for fine-tuning voice AI systems. Unlike emotion recognition datasets, TonalityPrint annotates functional tonal intents (what speakers do with tone), not just what they feel.
Key Features:
- 144 high-fidelity WAV files (48kHz, 16-bit, mono, unprocessed)
- 18 unique utterances across 8 parallel prosodic states
- 5 functional tonal intents: Trust, Attention, Reciprocity, Empathy Resonance, Cognitive Energy
- Continuous intensity indices (0-100 scale) for each intent
- Ambivalence annotation (perceptual entropy cross-intent feature)
- 100% authentic human voice with explicit consent
- Single-speaker design eliminates speaker variability for controlled analysis
What This Dataset Is:
- A precision-tuning resource for prosodic AI alignment research
- A controlled substrate for investigating functional tonal intent
- An experimental framework for ambivalence-aware dialogue systems
- A hypothesis-generating tool for human-AI voice calibration
What This Dataset Is Not:
- A general-purpose emotion recognition training corpus
- A multi-speaker dataset for population-level generalization
- A substitute for large-scale speech datasets
- A validated benchmark for production systems
Dataset Composition
Structure
From Zenodo Download:
```
DATACARD/
├── audio/                             # 144 WAV files
├── annotations/
│   ├── json/                          # 144 JSON files
│   ├── csv/                           # 144 CSV files
│   └── ALL_TONALITY_DATA_COMBINED.csv # Combined dataset
└── documentation/                     # Technical references
```
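After extraction, a quick sanity check against the file counts above can confirm nothing was lost in transit. This is a minimal sketch assuming the layout shown; adjust the root path if you extracted elsewhere.

```python
from pathlib import Path

# Count files in the extracted DATACARD against the stated totals:
# 144 WAV, 144 JSON, 144 per-file CSVs plus ALL_TONALITY_DATA_COMBINED.csv.
root = Path("DATACARD")
n_wav = len(list((root / "audio").glob("*.wav")))
n_json = len(list((root / "annotations" / "json").glob("*.json")))
n_csv = len(list((root / "annotations" / "csv").glob("*.csv")))
print(n_wav, n_json, n_csv)  # expected: 144 144 144
```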
Audio Specifications
| Specification | Value |
|---|---|
| Format | WAV (uncompressed PCM) |
| Sample Rate | 48,000 Hz (48kHz) |
| Bit Depth | 16-bit |
| Channels | Mono (1 channel) |
| Duration per File | 3-6 seconds |
| Total Duration | ~11 minutes 5 seconds |
| Processing | None (raw, unprocessed) |
| Total Files | 144 audio samples |
Fixed-Phrase Octet Design
The dataset uses a Fixed-Phrase Octet structure: 18 utterances × 8 parallel prosodic states.
Each utterance is recorded in:
- Baseline/Neutral (control sample)
- Trust (Trus) - conveying reliability and credibility
- Attention (Att) - directing focus and engagement
- Reciprocity (Reci) - expressing mutual exchange
- Empathy Resonance (Emre) - demonstrating empathetic connection
- Cognitive Energy (Cogen) - showing mental engagement
- Sub-modified variants (e.g., Trust + Calm)
- Ambivalence variants (optional cross-intent complexity)
This design enables:
- Differential Latent Analysis (DLA): Isolate prosodic features while holding lexical content constant
- Contrastive learning: Compare prosodic variations across identical text
- Intent vector extraction: Model functional intent as steerable features
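As an example of the contrastive use described above, (baseline, intent) pairs sharing identical lexical content can be assembled directly from the combined CSV. The sketch below uses the Utterance_Number and Primary_Intention columns that appear in the usage examples later in this card; the exact label used for baseline rows is an assumption, so verify it against CODEBOOK.md.

```python
import pandas as pd

df = pd.read_csv("DATACARD/annotations/ALL_TONALITY_DATA_COMBINED.csv")

pairs = []  # (baseline_row, intent_row) pairs with identical lexical content
for utt, group in df.groupby("Utterance_Number"):
    # Assumption: baseline rows carry a label containing "Base" (e.g. "Baseneutral").
    base = group[group["Primary_Intention"].str.contains("Base", case=False, na=False)]
    if base.empty:
        continue
    for _, row in group.drop(index=base.index).iterrows():
        pairs.append((base.iloc[0], row))

print(f"Contrastive pairs: {len(pairs)}")
```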
Controlled Semantic Design
Functional Tonal Intents (Not Emotions)
TonalityPrint distinguishes between functional intent and affective state:
| Functional Intent | What It Does | Not The Same As |
|---|---|---|
| Trust | Establishes credibility, reliability | "Happiness" or "Confidence" |
| Attention | Directs focus, maintains engagement | "Excitement" or "Urgency" |
| Reciprocity | Invites response, balances exchange | "Friendliness" or "Agreement" |
| Empathy Resonance | Attunes to listener state | "Sympathy" or "Sadness" |
| Cognitive Energy | Signals mental activation | "Enthusiasm" or "Anxiety" |
Why This Matters:
- Traditional emotion datasets label what speakers feel
- TonalityPrint annotates what speakers do with their voice
- This functional framing aligns with conversational AI goals
Ambivalence as Feature (Not Noise)
Unlike traditional datasets that discard mixed signals as annotation errors, TonalityPrint systematically annotates ambivalence (ambivalex) as:
- A perceptual entropy transitional state
- A cross-intent feature where competing tonal cues co-occur
- An essential signal for real-world inference-time alignment
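One illustrative way to treat this "perceptual entropy" as a continuous quantity is to compute an entropy-style score over the five intensity indices. This is a sketch of the idea only, not the dataset's own Ambivalex annotation; Trust_Index and Cognitive_Energy_Index appear in the usage examples below, while the other three column names are assumptions to check against CODEBOOK.md.

```python
import numpy as np
import pandas as pd

df = pd.read_csv("DATACARD/annotations/ALL_TONALITY_DATA_COMBINED.csv")

# Assumed column names for the five 0-100 intensity indices.
index_cols = ["Trust_Index", "Attention_Index", "Reciprocity_Index",
              "Empathy_Resonance_Index", "Cognitive_Energy_Index"]

def intent_entropy(row):
    # Normalize the indices into a distribution and take Shannon entropy:
    # a flat profile (competing cues) scores high, one dominant intent scores low.
    p = row[index_cols].to_numpy(dtype=float) + 1e-9
    p = p / p.sum()
    return float(-(p * np.log2(p)).sum())

df["intent_entropy_proxy"] = df.apply(intent_entropy, axis=1)
```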
How to Use
Download from Zenodo
```bash
# 1. Visit Zenodo: https://doi.org/10.5281/zenodo.17913895
# 2. Download DATACARD.zip (42.9 MB)
# 3. Extract files
unzip DATACARD.zip
```
Load Annotations
```python
import pandas as pd

# Load combined CSV
df = pd.read_csv('DATACARD/annotations/ALL_TONALITY_DATA_COMBINED.csv')
print(f"Total samples: {len(df)}")
print(f"Columns: {df.columns.tolist()}")

# Filter by intention
trust_samples = df[df['Primary_Intention'] == 'Trust']
ambivalent_samples = df[df['Ambivalex'] == 'ambivalex']
```
Load Audio Files
```python
import librosa

# Load audio file
audio_path = 'DATACARD/audio/TPV1_B1_UTT1_S_Att_SP-Ronda.wav'
audio, sr = librosa.load(audio_path, sr=48000, mono=True)
print(f"Sample rate: {sr} Hz")
print(f"Duration: {len(audio)/sr:.2f} seconds")

# Extract features
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
```
Explore Tonality Indices
```python
# Compare Trust scores across utterances
trust_scores = df.groupby('Utterance_Number')['Trust_Index'].mean()

# Analyze Cognitive Energy bias
ce_by_intent = df.groupby('Primary_Intention')['Cognitive_Energy_Index'].describe()
```
Annotation Methodology
Expert Practitioner Annotation
Annotator: Ronda Polhill (speaker and dataset creator)
Method: Expert perceptual assessment combined with acoustic analysis
Expertise: 8,873+ high-stakes customer interactions (observational context, not causal proof)
Continuous Indices (0-100 Scale)
Each utterance includes five tonality indices:
| Index | Abbreviation | Interpretation |
|---|---|---|
| Trust | TR | 0-30: Low/Minimal, 31-60: Moderate, 61-85: High, 86-100: Very High |
| Attention | AT | Perceptual score of attentional focus |
| Reciprocity | RE | Perceptual score of collaborative tone |
| Empathy Resonance | ER | Perceptual score of empathetic attunement |
| Cognitive Energy | CE | Perceptual score of mental activation |
Important: These are annotator perceptual scores, not empirically validated scales.
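As a reading aid, the bands listed for the Trust index can be turned into a small lookup; whether the same cut points apply to the other four indices is not stated, so treat this as an assumption.

```python
def band(score: float) -> str:
    # Bands taken from the Trust row above; applying them to the other
    # indices is an assumption, not a documented convention.
    if score <= 30:
        return "Low/Minimal"
    if score <= 60:
        return "Moderate"
    if score <= 85:
        return "High"
    return "Very High"

print(band(72))  # "High"
```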
Quality Control
- Proprietary heuristic audit: ~80%+ acoustic-intent alignment verified
- Re-recording rate: ~18.05% of corpus re-recorded for consistency
- Known bias: Cognitive Energy shows systematic elevation (documented and retained)
Intended Use
Primary Applications
Inference-Time Prosodic Alignment
- Fine-tuning reasoning-based voice models
- Aligning model confidence with vocal uncertainty
- Calibrating trust signals in AI responses
Differential Latent Analysis
- Extracting tonal intent vectors (analogous to LLM activation steering)
- Contrastive learning with fixed lexical content
- Isolating prosodic features from semantic content
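Building on the pairing sketch earlier in this card, a crude "intent vector" can be formed as the difference between acoustic features of an intent recording and its baseline counterpart for the same utterance. The file names below come from the manifest; treating mean MFCCs as the feature space is an illustrative choice, not a prescribed method.

```python
import librosa
import numpy as np

def mean_mfcc(path, sr=48000, n_mfcc=13):
    # Average MFCCs over time to get a fixed-length summary of one recording.
    audio, sr = librosa.load(path, sr=sr, mono=True)
    return librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

base = mean_mfcc("DATACARD/audio/TPV1_B1_UTT1_S_Baseneutral_SP-Ronda.wav")
trust = mean_mfcc("DATACARD/audio/TPV1_B1_UTT1_S_Trus_SP-Ronda.wav")

# Direction of "Trust" relative to neutral, with lexical content held constant.
trust_vector = trust - base
print(np.round(trust_vector, 2))
```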
Ambivalence-Aware Systems
- Training dialogue systems to detect mixed signals
- Modeling uncertainty in safety-critical applications
- Navigating tonal complexity in nuanced interactions
Style-Conditioned Synthesis
- Controlling prosodic style in TTS systems
- Evaluating voice quality metrics
- Transfer learning for expressive speech
Human-AI Voice Calibration
- Investigating "AI-adjacent yet trusted" vocal profiles
- Studying uncanny valley effects in voice
- Exploring voice-appearance synchrony in embodied AI
Known Biases and Limitations
Single-Speaker Constraint
- All 144 files from same speaker (Ronda Polhill)
- Findings may not generalize across:
- Genders, ages, accents, cultures, languages
- Multi-speaker validation required for broader applicability
Cognitive Energy Systematic Bias
Known Issue: Cognitive Energy Index shows systematic elevation across corpus.
Possible Causes:
- Speaker's natural ecological style (high-energy delivery)
- Lexical content effects
- Practitioner annotation bias
Resolution: Intentionally retained for transparency. Researchers should account for this bias in analyses.
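One simple way to account for this elevation in downstream comparisons is to center or z-score the Cognitive Energy index against the corpus before contrasting intents; this is an illustrative adjustment, not a correction prescribed by the dataset.

```python
import pandas as pd

df = pd.read_csv("DATACARD/annotations/ALL_TONALITY_DATA_COMBINED.csv")

# Remove the corpus-wide offset so Cognitive Energy is read relative to this
# speaker's elevated baseline rather than the nominal 0-100 scale.
ce = df["Cognitive_Energy_Index"]
df["CE_centered"] = ce - ce.mean()
df["CE_zscore"] = (ce - ce.mean()) / ce.std()
```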
Controlled Environment
- Professional studio recordings (not naturalistic)
- Scripted content (not spontaneous speech)
- May not reflect real-world acoustic conditions
Observational Context (Not Causal Proof)
The annotation methodology references 8,873+ customer interactions with observed correlations:
- ~35.85% average conversion rate (observational metric)
- 68 spontaneous reports of "AI-adjacent" voice quality with high trust ratings
Critical Caveat: These are observational correlations, not causal relationships. Multiple confounding variables present.
Ethical Considerations
Speaker Consent and Biometric Integrity
- 100% human recordings by author (Ronda Polhill)
- Explicit informed consent for recording, annotation, and public release
- No synthetic voices, clones, or generative AI audio
- Speaker demographics: Mid-life female, native English speaker
Prohibited Uses
Researchers are strictly prohibited from:
- Creating unauthorized voice clones of the speaker
- Generating deepfakes using this dataset
- Using recordings for deceptive purposes
- Violating CC BY-NC 4.0 license terms
Links
- Official Dataset (Zenodo): https://doi.org/10.5281/zenodo.17913895
- Documentation (GitHub): https://github.com/YOUR_USERNAME/TonalityPrint-v1
- White Paper: https://doi.org/10.5281/zenodo.17410581
- Website: https://TonalityPrint.com
- Contact: ronda@TonalityPrint.com
Citation
BibTeX
```bibtex
@dataset{polhill_2026_tonalityprint,
  author    = {Polhill, Ronda},
  title     = {TonalityPrint: A Contrast-Structured Voice Dataset
               for Exploring Functional Tonal Intent, Ambivalence,
               and Inference-Time Prosodic Alignment v1.0},
  year      = 2026,
  publisher = {Zenodo},
  version   = {1.0.0},
  doi       = {10.5281/zenodo.17913895},
  url       = {https://doi.org/10.5281/zenodo.17913895}
}
```
APA
Polhill, R. (2026). TonalityPrint: A Contrast-Structured Voice Dataset for Exploring Functional Tonal Intent, Ambivalence, and Inference-Time Prosodic Alignment v1.0 [Data set]. Zenodo. https://doi.org/10.5281/zenodo.17913895
License
CC BY-NC 4.0 (Creative Commons Attribution-NonCommercial 4.0 International)
- Academic and research use: FREE
- Proper attribution required
- Commercial use: Requires licensing
Commercial licensing: Contact ronda@TonalityPrint.com
Acknowledgments
This work emerges from independent practitioner-research conducted without institutional funding and is released for academic research use under CC BY-NC 4.0.
TonalityPrint aims to address a critical gap in voice AI training data by moving beyond discrete emotion recognition to capture functional tonal intent, including ambivalent prosodic signals as essential nuances for inference-time alignment.
Version: 1.0.0
Release Date: January 24, 2026
Last Updated: January 30, 2026
License: CC BY-NC 4.0
© 2026 Ronda Polhill