Tasks: Audio Classification
Modalities: Text
Formats: text
Sub-tasks: audio-intent-classification
Languages: English
Size: < 1K
License: cc-by-nc-4.0

Upload 12 files
- CITATION.cff +93 -0
- CODEBOOK.md +595 -0
- DOWNLOAD_DATA.md +166 -0
- LICENSE +244 -0
- MANIFEST.txt +301 -0
- METHODOLOGY.md +579 -0
- README.md +472 -3
- continuous_indices.txt +29 -0
- scripts.txt +24 -0
- speaker_profile.txt +12 -0
- tech_specs.txt +17 -0
- transcripts.txt +24 -0
CITATION.cff
ADDED
@@ -0,0 +1,93 @@
cff-version: 1.2.0
message: "If you use this dataset, please cite it as below."
type: dataset
title: "TonalityPrint: A Contrast-Structured Voice Dataset for Exploring Functional Tonal Intent, Ambivalence, and Inference-Time Prosodic Alignment v1.0"
version: 1.0.0
doi: 10.5281/zenodo.17913895
date-released: 2026-01-24
url: "https://doi.org/10.5281/zenodo.17913895"
repository-code: "https://github.com/TonalityPrint/TonalityPrint-v1"
license: CC-BY-NC-4.0
authors:
  - family-names: Polhill
    given-names: Ronda
    email: ronda@TonalityPrint.com
    affiliation: Independent Researcher
    orcid: ""
keywords:
  - tonality
  - inference
  - ambivalence detection
  - functional tonal intent
  - voice dataset
  - prosody dataset
  - human-AI communication
  - conversational AI
  - single speaker dataset
  - tonality regression
  - voice AI
  - voice alignment
  - fine tuning
  - AI safety
  - autonomous systems
  - sycophancy-mitigation
  - voice agents
  - personalized AI
  - embodied AI
  - companion AI
  - ethical voice data
  - expressive synthesis
  - humanoid robotics
  - prosodic interpretability
  - intent aligned dataset
  - inference-time prosodic alignment
  - trust calibration
  - fine-tuning dataset
  - human voice dataset
  - intent drift
  - tonal alignment
  - agentic AI
  - outcome inference
  - human-AI alignment
  - uncanny-valley-effect
  - prosodic trust
  - prosodic intentionality
  - safety alignment
  - prosodic style transfer
  - empathetic AI
  - humanoid voice appearance
  - human-in-the-loop
  - human baseline
  - real-world experience
abstract: >
  TonalityPrint is a specialized single-speaker speech corpus designed
  to enable exploration of fine-tuning functional tonal intents - Trust,
  Attention, Reciprocity, Empathy Resonance, and Cognitive Energy - in
  voice AI systems. Unlike emotion recognition datasets, TonalityPrint
  annotates functional tonal intents (what speakers do with tone), not
  just what they feel. The dataset provides 144 audio samples across 18
  utterances, each recorded in 8 parallel prosodic states. A core innovation
  is systematic ambivalence annotation, treating tonal complexity as a
  perceptual entropy cross-intent feature rather than noise. Utilizing a
  Fixed-Phrase Octet design, the dataset enables Differential Latent Analysis
  (DLA) for contrastive approximation of tonal intent vectors. Annotations
  are grounded in 8,873+ consequential interactions, capturing an AI-adjacent
  yet trusted vocal profile. TonalityPrint is intended as a hypothesized
  contrast substrate for researchers exploring inference-time alignment,
  prosodic interpretability, style-conditioned synthesis, human-AI voice
  calibration, and safety-critical voice agents. All recordings are 100%
  authentic human voice (author) with explicit consent, released under CC
  BY-NC 4.0 (academic/research free; commercial licensing available).
references:
  - type: article
    title: "Tonality as Attention"
    authors:
      - family-names: Polhill
        given-names: Ronda
    year: 2025
    publisher:
      name: Zenodo
    doi: 10.5281/zenodo.17410581
contact:
  - email: ronda@TonalityPrint.com
    name: Ronda Polhill
CODEBOOK.md
ADDED
@@ -0,0 +1,595 @@
# CODEBOOK - TonalityPrint Voice Dataset v1.0

## Overview

This codebook provides definitions for variables, file naming conventions, and data structures in the TonalityPrint Voice Dataset v1.0.

**Dataset Information**:
- **Total Files**: 144 audio files + 144 JSON + 144 CSV + 1 combined CSV
- **DOI**: https://doi.org/10.5281/zenodo.17913895
- **License**: CC BY-NC 4.0
- **Contact**: ronda@TonalityPrint.com

**Quick Navigation**:
- [File Naming Convention](#file-naming-convention)
- [CSV Variables](#csv-variables-23-columns)
- [Tonality Indices](#tonality-indices-0-100-scale)
- [Intention Categories](#intention-categories)
- [Modifier Codes](#modifier-codes-24-optional-sub-modifiers)
- [Segment-Level Data](#segment-level-data-structure)

---

## File Naming Convention

### Audio Files (.wav)

**Structure**:
```
[Version]_[Batch]_[Utterance]_[Type]_[Intention]_[Modifier]_[Ambivalence]_[Speaker].wav
```

**Examples**:
1. **Single** (Primary Intent only):
   `TPV1_B1_UTT1_S_Att_SP-Ronda.wav`
2. **Compound** (Primary Intent + Sub-modifier):
   `TPV1_B1_UTT1_S_Reci_affi_SP-Ronda.wav`
3. **Complex** (Primary Intent + Sub-modifier + Ambivalence):
   `TPV1_B1_UTT1_S_Reci_affi_ambivalex_SP-Ronda.wav`

### Component Definitions

| Component | Description | Valid Values | Example |
|-----------|-------------|--------------|---------|
| **Version** | Dataset version | `TPV1` | TPV1 |
| **Batch** | Batch number (1-6) | `B1`, `B2`, `B3`, `B4`, `B5`, `B6` | B1 |
| **Utterance** | Utterance ID (1-18) | `UTT1` through `UTT18` | UTT1 |
| **Type** | Statement/Question | `S` (Statement), `Q` (Question) | S |
| **Intention** | Primary tonal intent | `Att`, `Trus`, `Reci`, `Emre`, `Cogen`, `Baseneutral` | Att |
| **Modifier** | Optional sub-modifier | See [Modifier Codes](#modifier-codes-24-optional-sub-modifiers) | affi, calm |
| **Ambivalence** | Ambivalence marker | `ambivalex` (or omitted) | ambivalex |
| **Speaker** | Speaker identifier | `SP-Ronda` | SP-Ronda |

---
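
The three filename patterns differ only in the optional middle tokens, so a filename can be unpacked positionally. A minimal sketch in Python (the `parse_filename` helper is illustrative, not shipped with the dataset; assumes Python 3.9+ for `str.removesuffix`):

```python
# Known primary-intent codes from the Component Definitions table above.
INTENTIONS = {"Att", "Trus", "Reci", "Emre", "Cogen", "Baseneutral"}

def parse_filename(name: str) -> dict:
    """Split a TonalityPrint .wav filename into its named components."""
    parts = name.removesuffix(".wav").split("_")
    info = {
        "version": parts[0],       # e.g. TPV1
        "batch": parts[1],         # e.g. B1
        "utterance": parts[2],     # e.g. UTT1
        "type": parts[3],          # S or Q
        "intention": parts[4],     # e.g. Att, Reci
        "modifier": None,
        "ambivalence": False,
        "speaker": parts[-1],      # e.g. SP-Ronda
    }
    if info["intention"] not in INTENTIONS:
        raise ValueError(f"unknown intention code: {info['intention']}")
    # Anything between the intention and the speaker token is the optional
    # sub-modifier and/or the cross-intent `ambivalex` marker.
    for token in parts[5:-1]:
        if token == "ambivalex":
            info["ambivalence"] = True
        else:
            info["modifier"] = token
    return info
```

For the complex example above, `parse_filename("TPV1_B1_UTT1_S_Reci_affi_ambivalex_SP-Ronda.wav")` yields intention `Reci`, modifier `affi`, and `ambivalence=True`.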

## CSV Variables (23 Columns)

### Complete Variable List

The combined CSV file (`ALL_TONALITY_DATA_COMBINED.csv`) and the individual CSV files contain these 23 variables:

| # | Variable Name | Type | Description |
|---|--------------|------|-------------|
| 1 | `Version` | String | Dataset version identifier |
| 2 | `Batch_Number` | String | Batch identifier (B1-B6) |
| 3 | `Utterance_Number` | String | Utterance identifier (UTT1-UTT18) |
| 4 | `Utterance_Type` | String | S (Statement) or Q (Question) |
| 5 | `File_Name` | String | Complete audio filename |
| 6 | `Primary_Intention` | String | Primary tonal intent category |
| 7 | `Sub_Modifier` | String | Optional sub-modifier (or empty) |
| 8 | `Ambivalex` | String | Ambivalence marker (or empty) |
| 9 | `Speaker` | String | Speaker name |
| 10 | `Utterance_Text` | String | Transcribed utterance text |
| 11 | `Trust_Index` | Integer | Trust tonality score (0-100) |
| 12 | `Reciprocity_Index` | Integer | Reciprocity score (0-100) |
| 13 | `Empathy_Resonance_Index` | Integer | Empathy resonance score (0-100) |
| 14 | `Cognitive_Energy_Index` | Integer | Cognitive energy score (0-100) |
| 15 | `Attention_Index` | Integer | Attention score (0-100) |
| 16 | `Notes` | String | Annotation notes and observations |
| 17 | `Duration` | Time | Utterance duration (MM:SS format) |
| 18 | `Date_Recorded` | Date | Recording date (YYYY-MM-DD) |
| 19 | `Source` | String | Data source description |
| 20 | `Segments` | JSON String | Time-aligned segment data |
| 21 | `Start_Time` | Time | Utterance start time (MM:SS) |
| 22 | `End_Time` | Time | Utterance end time (MM:SS) |
| 23 | `Timestamp` | DateTime | ISO 8601 timestamp |

---

## Variable Definitions (Detailed)

### Metadata Variables

#### 1. Version
- **Type**: String
- **Description**: Dataset version identifier
- **Values**: `"TPV1"` (TonalityPrint Version 1)
- **Example**: `TPV1`

#### 2. Batch_Number
- **Type**: String
- **Description**: Recording batch identifier
- **Values**: `B1`, `B2`, `B3`, `B4`, `B5`, `B6`
- **Total Batches**: 6
- **Utterances per Batch**: 18
- **Example**: `B1`

#### 3. Utterance_Number
- **Type**: String
- **Description**: Unique utterance identifier within each batch
- **Values**: `UTT1`, `UTT2`, ..., `UTT18`
- **Example**: `UTT1`

#### 4. Utterance_Type
- **Type**: String (Categorical)
- **Description**: Syntactic type of the utterance
- **Values**:
  - `S` = Statement (declarative sentence)
  - `Q` = Question (interrogative sentence)
- **Distribution**: ~83% Statements, ~17% Questions
- **Example**: `S`

#### 5. File_Name
- **Type**: String
- **Description**: Complete audio filename with extension
- **Format**: `TPV1_[Batch]_[Utterance]_[Type]_[Intention]_[Modifier]_[Ambivalence]_SP-Ronda.wav`
- **Example**: `TPV1_B1_UTT1_S_Att_SP-Ronda.wav`

#### 6. Primary_Intention
- **Type**: String (Categorical)
- **Description**: Primary functional tonal intent category
- **Values**:
  - `Attention` (directing focus and engagement)
  - `Trust` (conveying reliability and credibility)
  - `Reciprocity` (expressing mutual exchange)
  - `Empathy Resonance` (demonstrating empathetic connection)
  - `Cognitive Energy` (showing mental engagement)
  - `Baseline Neutral` (neutral control sample)
- **Note**: Full word used in CSV (e.g., "Attention"), abbreviated in filename (e.g., "Att")
- **Example**: `Attention`

#### 7. Sub_Modifier
- **Type**: String (Optional)
- **Description**: Optional sub-modifier providing a nuanced tonality descriptor
- **Values**: See [Modifier Codes](#modifier-codes-24-optional-sub-modifiers) table
- **Missing Data**: Empty string if not applicable
- **Example**: `affi` (Affirming), empty string `""`

#### 8. Ambivalex
- **Type**: String (Optional)
- **Description**: Cross-modifier ambivalence marker indicating mixed or transitional tonality
- **Values**:
  - `ambivalex` = Ambivalence present
  - Empty string = No ambivalence
- **Definition**: Two or more contradictory/competing sub-modifier layers present simultaneously
- **Example**: `ambivalex`, empty string `""`

#### 9. Speaker
- **Type**: String
- **Description**: Speaker identifier
- **Values**: `Ronda`
- **Note**: Single-speaker dataset (all 144 files from the same speaker)
- **Example**: `Ronda`

#### 10. Utterance_Text
- **Type**: String
- **Description**: Verbatim transcription of the spoken utterance
- **Encoding**: UTF-8
- **Max Length**: ~200 characters
- **Example**: `"I want to make sure I understand what you need"`

---

### Tonality Indices (0-100 Scale)

All five tonality indices are measured on a continuous 0-100 scale, where higher values indicate a stronger presence of the measured tonal quality.

#### 11. Trust_Index
- **Type**: Integer
- **Range**: 0-100
- **Description**: Quantified measure of trust tonality (perceived safety, authenticity, credibility)
- **Interpretation**:
  - **Low (0-33)**: Uncertain, hesitant tonality
  - **Moderate (34-66)**: Moderately reliable tonality
  - **High (67-100)**: Highly trustworthy tonality
- **Example**: `75`

#### 12. Reciprocity_Index
- **Type**: Integer
- **Range**: 0-100
- **Description**: Quantified measure of reciprocal/collaborative tonality (inviting response, conversational balance)
- **Interpretation**:
  - **Low (0-33)**: Unilateral communication
  - **Moderate (34-66)**: Somewhat collaborative
  - **High (67-100)**: Highly collaborative, balanced
- **Example**: `93`

#### 13. Empathy_Resonance_Index
- **Type**: Integer
- **Range**: 0-100
- **Description**: Quantified measure of empathetic tonality (emotional attunement, mirroring listener state)
- **Interpretation**:
  - **Low (0-33)**: Detached, impersonal
  - **Moderate (34-66)**: Moderately attuned
  - **High (67-100)**: Highly empathetic, warm
- **Example**: `76`

#### 14. Cognitive_Energy_Index
- **Type**: Integer
- **Range**: 0-100
- **Description**: Quantified measure of cognitive engagement and mental energy (activation, momentum, pacing)
- **Interpretation**:
  - **Low (0-33)**: Low engagement, slow pacing
  - **Moderate (34-66)**: Moderate engagement
  - **High (67-100)**: High mental energy, dynamic
- **Known Issue**: Shows systematic elevation across the corpus (see Notes)
- **Example**: `96`

#### 15. Attention_Index
- **Type**: Integer
- **Range**: 0-100
- **Description**: Quantified measure of attentional focus (directing perceptual priority, maintaining engagement)
- **Interpretation**:
  - **Low (0-33)**: Unfocused, diffuse attention
  - **Moderate (34-66)**: Moderately engaged
  - **High (67-100)**: Highly focused, commanding attention
- **Example**: `80`

**Scoring Methodology**: All indices were scored by an expert practitioner trained in the "Tonality as Attention" framework, based on perceptual assessment and acoustic analysis.

---
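
The Low/Moderate/High interpretation bands are identical for all five indices, so a single helper can label any score. A minimal sketch (the `band` function is illustrative, not part of the dataset tooling):

```python
def band(score: int) -> str:
    """Map a 0-100 tonality index value to its interpretation band."""
    if not 0 <= score <= 100:
        raise ValueError(f"index value out of range: {score}")
    if score <= 33:
        return "Low"
    if score <= 66:
        return "Moderate"
    return "High"

# The example values quoted above (75, 93, 76, 96, 80) all fall in High.
assert band(75) == band(93) == band(96) == "High"
```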

### Additional Variables

#### 16. Notes
- **Type**: String (Free text)
- **Description**: Annotation notes, quality observations, and systematic bias documentation
- **Common Note**: "Cognitive Energy (CE) seemingly exhibits systemic leaks/dominance, possibly due to speaker ecological style, lexical content and /or practitioner bias. Intentionally retained for transparency."
- **Missing Data**: Empty string if no notes
- **Example**: `"Cognitive Energy (CE) seemingly exhibits systemic leaks/dominance..."`

#### 17. Duration
- **Type**: Time (MM:SS format)
- **Description**: Total duration of the audio utterance
- **Format**: `M:SS` or `MM:SS`
- **Range**: ~3-6 seconds per utterance
- **Total Duration**: ~10 minutes (all 144 files)
- **Example**: `0:04` (4 seconds)

#### 18. Date_Recorded
- **Type**: Date (YYYY-MM-DD)
- **Description**: Date the audio was recorded
- **Date Range**: December 19, 2025 - January 23, 2026
- **Example**: `2026-01-20`

#### 19. Source
- **Type**: String
- **Description**: Data source and annotation method
- **Values**: `"Recording - Expert Practitioner Annotator"`
- **Note**: All annotations performed by a single expert practitioner
- **Example**: `Recording - Expert Practitioner Annotator`

#### 20. Segments
- **Type**: JSON Array (stored as string in CSV)
- **Description**: Time-aligned segment-level tonality data with millisecond precision
- **Structure**: Array of objects with `startTime`, `endTime`, and the five tonality indices
- **See**: [Segment-Level Data Structure](#segment-level-data-structure) section
- **Example**: `[{"startTime":0,"endTime":4284.083333333333,"trust":75,"reciprocity":93,"empathy":76,"cognitive":96,"attention":80}]`

#### 21. Start_Time
- **Type**: Time (MM:SS format)
- **Description**: Utterance start time (typically 0:00)
- **Example**: `0:00`

#### 22. End_Time
- **Type**: Time (MM:SS format)
- **Description**: Utterance end time (matches Duration)
- **Example**: `0:04`

#### 23. Timestamp
- **Type**: DateTime (ISO 8601 format)
- **Description**: Precise timestamp of annotation creation
- **Format**: `YYYY-MM-DDTHH:MM:SS.sssZ`
- **Timezone**: UTC (Z suffix)
- **Example**: `2026-01-20T16:45:24.342Z`

---
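
When loading the CSV, `Duration`, `Start_Time`, and `End_Time` arrive as `M:SS` strings and `Timestamp` as an ISO 8601 string; converting them to native types up front avoids repeated parsing. A minimal sketch (the helper names are illustrative; formats are those defined above):

```python
from datetime import datetime

def mmss_to_seconds(value: str) -> int:
    """Convert an 'M:SS'/'MM:SS' string such as '0:04' to whole seconds."""
    minutes, seconds = value.split(":")
    return int(minutes) * 60 + int(seconds)

def parse_timestamp(value: str) -> datetime:
    """Parse an ISO 8601 timestamp such as '2026-01-20T16:45:24.342Z'."""
    # Replace the trailing 'Z' so fromisoformat() also works on Python < 3.11.
    return datetime.fromisoformat(value.replace("Z", "+00:00"))

duration_s = mmss_to_seconds("0:04")                     # 4 seconds
annotated = parse_timestamp("2026-01-20T16:45:24.342Z")  # UTC datetime
```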

## Intention Categories

### Primary Functional Tonal Intent States (6 Categories)

| Category | Code (Filename) | Full Name (CSV) | Description |
|----------|----------------|-----------------|-------------|
| **Attention** | `Att` | `Attention` | Directing focus, capturing and maintaining listener engagement |
| **Trust** | `Trus` | `Trust` | Conveying trustworthiness, reliability, credibility, and authenticity |
| **Reciprocity** | `Reci` | `Reciprocity` | Expressing mutual exchange, collaborative communication, inviting response |
| **Empathy Resonance** | `Emre` | `Empathy Resonance` | Demonstrating empathetic connection, emotional attunement, warmth |
| **Cognitive Energy** | `Cogen` | `Cognitive Energy` | Showing mental engagement, cognitive processing, activation, momentum |
| **Baseline Neutral** | `Baseneutral` | `Baseline Neutral` | Neutral control sample, default prosody for comparative analysis |

**Capitalization Rules**:
- First letter capitalized in filenames: `Att`, `Cogen`
- Full words in CSV: `Attention`, `Cognitive Energy`
- Baseline: `Baseneutral` (one word, capital B)

---

## Modifier Codes (24 Optional Sub-Modifiers)

### 1. Trust Modifiers (5)

| Code | Full Name | Description |
|------|-----------|-------------|
| `auth` | Authoritative | Commanding, expert tone |
| `calm` | Calm | Soothing, measured tone |
| `conf` | Confident | Self-assured, certain tone |
| `rest` | Formal/Respectful | Professional, courteous tone |
| `reas` | Reassuring | Comforting, supportive tone |

### 2. Attention Modifiers (5)

| Code | Full Name | Description |
|------|-----------|-------------|
| `cert` | Certainty | Confident, definite tone |
| `clar` | Clarity | Clear, precise communication |
| `curi` | Curious | Inquisitive, interested tone |
| `focu` | Focused | Concentrated, directed attention |
| `urge` | Urgent/Pressure | Time-sensitive, pressing tone |

### 3. Reciprocity Modifiers (5)

| Code | Full Name | Description |
|------|-----------|-------------|
| `affi` | Affirming | Validating, confirming tone |
| `colla` | Collaborative | Cooperative, team-oriented tone |
| `enga` | Engaged | Active, participatory tone |
| `open` | Open | Receptive, non-defensive tone |
| `refl` | Reflective | Thoughtful, contemplative tone |

### 4. Empathy Resonance Modifiers (5)

| Code | Full Name | Description |
|------|-----------|-------------|
| `casu` | Casual | Informal, relaxed tone |
| `comp` | Compassion | Kind, caring tone |
| `corr` | Corrective (softened) | Gentle correction or guidance |
| `symp` | Sympathetic | Understanding, supportive tone |
| `warm` | Warm | Friendly, approachable tone |

### 5. Cognitive Energy Modifiers (4)

| Code | Full Name | Description |
|------|-----------|-------------|
| `ana` | Analytical | Logical, reasoning-oriented tone |
| `dyna` | Dynamic | Energetic, active tone |
| `enth` | Enthusiastic | Excited, passionate tone |
| `skep` | Skeptical | Questioning, doubtful tone |

### Cross-Intent Modifier (1)

| Code | Full Name | Description |
|------|-----------|-------------|
| `ambivalex` | Ambivalence | Mixed, transitional, or competing tonal cues present simultaneously |

**Capitalization Rule**: All modifier codes are lowercase in filenames: `affi`, `warm`, `ana`, `ambivalex`

---

## Segment-Level Data Structure

### JSON Structure in "Segments" Field

Each utterance includes time-aligned segment-level tonality data stored as a JSON array string in the CSV.

**Structure**:
```json
[
  {
    "startTime": <milliseconds>,
    "endTime": <milliseconds>,
    "trust": <0-100>,
    "reciprocity": <0-100>,
    "empathy": <0-100>,
    "cognitive": <0-100>,
    "attention": <0-100>
  }
]
```

**Real Example**:
```json
[{
  "startTime": 0,
  "endTime": 4284.083333333333,
  "trust": 75,
  "reciprocity": 93,
  "empathy": 76,
  "cognitive": 96,
  "attention": 80
}]
```

### Segment Field Definitions

| Field | Type | Unit | Description |
|-------|------|------|-------------|
| `startTime` | Float | Milliseconds | Segment start time from utterance beginning |
| `endTime` | Float | Milliseconds | Segment end time from utterance beginning |
| `trust` | Integer | 0-100 | Trust tonality score for this segment |
| `reciprocity` | Integer | 0-100 | Reciprocity score for this segment |
| `empathy` | Integer | 0-100 | Empathy resonance score for this segment |
| `cognitive` | Integer | 0-100 | Cognitive energy score for this segment |
| `attention` | Integer | 0-100 | Attention score for this segment |

**Notes**:
- Most utterances contain a single segment (the entire utterance)
- Times are in milliseconds with decimal precision
- Segment scores may differ from utterance-level indices in multi-segment utterances
- To convert milliseconds to seconds: `seconds = milliseconds / 1000`

---
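
Because `Segments` is stored as a JSON string, per-segment analysis usually starts by exploding it into one row per segment. A minimal sketch with pandas (the `explode_segments` helper and output column names are illustrative; assumes the combined CSV layout described above):

```python
import json
import pandas as pd

def explode_segments(df: pd.DataFrame) -> pd.DataFrame:
    """Expand the JSON 'Segments' column into one row per segment."""
    rows = []
    for _, rec in df.iterrows():
        for seg in json.loads(rec["Segments"]):
            rows.append({
                "File_Name": rec["File_Name"],
                # Convert milliseconds to seconds, per the note above.
                "start_s": seg["startTime"] / 1000,
                "end_s": seg["endTime"] / 1000,
                "trust": seg["trust"],
                "reciprocity": seg["reciprocity"],
                "empathy": seg["empathy"],
                "cognitive": seg["cognitive"],
                "attention": seg["attention"],
            })
    return pd.DataFrame(rows)

# Example with the documented sample record:
df = pd.DataFrame([{
    "File_Name": "TPV1_B1_UTT1_S_Att_SP-Ronda.wav",
    "Segments": '[{"startTime":0,"endTime":4284.083333333333,'
                '"trust":75,"reciprocity":93,"empathy":76,'
                '"cognitive":96,"attention":80}]',
}])
segments = explode_segments(df)
```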

## Missing Data Codes

### How Missing Data is Represented

| Field Type | Missing Data Representation |
|-----------|----------------------------|
| String fields (Sub_Modifier, Ambivalex, Notes) | Empty string `""` |
| Numeric fields | No missing data (all utterances fully annotated) |
| Segments | No missing data (all utterances have segment data) |

**Important**:
- There is **NO use of** `-999`, `NULL`, `NA`, or other special missing data codes
- Empty string `""` indicates "not applicable" for optional fields
- All tonality indices are complete (no missing values)

---

## Statistical Summary

### Dataset Overview

| Statistic | Value |
|-----------|-------|
| Total Utterances | 144 |
| Total Batches | 6 |
| Utterances per Batch | 18 |
| Single Speaker | Yes (Ronda) |
| Language | English (American) |
| Recording Period | Dec 19, 2025 - Jan 23, 2026 |
| Total Duration | ~10 minutes |

### Audio Specifications

| Specification | Value |
|---------------|-------|
| Sample Rate | 48,000 Hz |
| Bit Depth | 16-bit |
| Channels | Mono (1) |
| Format | WAV (uncompressed PCM) |
| Duration Range | 3-6 seconds per file |

### Index Distributions

*Note: Actual statistical summaries (mean, SD, min, max) should be calculated from the complete dataset.*

**Expected Patterns**:
- Cognitive_Energy_Index: Known systematic elevation (typically 90-100)
- Other indices: Expected to vary by Primary_Intention category
- See METHODOLOGY.md for quality control discussion

---
|
| 476 |
+
|
| 477 |
+
## Known Issues & Limitations
|
| 478 |
+
|
| 479 |
+
### Cognitive Energy Systematic Bias
|
| 480 |
+
|
| 481 |
+
**Issue**: Cognitive_Energy_Index shows systematic elevation across most utterances, regardless of Primary_Intention category.
|
| 482 |
+
|
| 483 |
+
**Possible Causes** (as noted in dataset documentation):
|
| 484 |
+
1. Speaker's ecological style (natural high-energy delivery)
|
| 485 |
+
2. Lexical content effects
|
| 486 |
+
3. Practitioner bias in scoring
|
| 487 |
+
|
| 488 |
+
**Resolution**: Intentionally retained for transparency and to reflect ecological reality of speech production. Researchers should account for this bias in analyses.
|
| 489 |
+
|
| 490 |
+
**Impact**:
|
| 491 |
+
- Trust and Empathy Resonance indices most affected
|
| 492 |
+
- Suggests need for speaker-specific normalization in some applications
|
| 493 |
+
- Does not invalidate other tonality measures
|
| 494 |
+
|
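One way to account for the elevation is to normalize each index within speaker before comparing across conditions. A sketch under stated assumptions: z-scoring is just one option, the two-speaker frame is illustrative (TonalityPrint v1 itself is single-speaker, so this matters mainly when pooling with other corpora), and the column names are hypothetical:

```python
import pandas as pd

# Illustrative scores for two hypothetical speakers
df = pd.DataFrame({
    'Speaker': ['A', 'A', 'A', 'B', 'B', 'B'],
    'Cognitive_Energy_Index': [95.0, 92.0, 98.0, 60.0, 55.0, 65.0],
})

# Per-speaker z-score removes each speaker's baseline elevation
def zscore(s):
    return (s - s.mean()) / s.std(ddof=0)

df['CE_z'] = df.groupby('Speaker')['Cognitive_Energy_Index'].transform(zscore)
print(df)
```

After the transform, each speaker's normalized scores are centered on zero, so between-speaker baseline differences no longer dominate comparisons.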
### Single-Speaker Limitation

- All 144 files from the same speaker (Ronda)
- Findings may not generalize to other speakers
- Multi-speaker extension needed for broader applicability

### Controlled Environment

- Professional studio recordings
- May not reflect naturalistic speech conditions
- Scripted content (not spontaneous speech)

---

## Usage Notes

### Loading Data in Python

```python
import pandas as pd
import json

# Load combined CSV
df = pd.read_csv('ALL_TONALITY_DATA_COMBINED.csv')

# Parse Segments JSON
df['Segments_Parsed'] = df['Segments'].apply(json.loads)

# Access the first segment's trust score
first_segment_trust = df['Segments_Parsed'].iloc[0][0]['trust']
```

### Loading Data in R

```r
library(readr)
library(jsonlite)

# Load CSV
data <- read_csv('ALL_TONALITY_DATA_COMBINED.csv')

# Parse Segments JSON
data$Segments_Parsed <- lapply(data$Segments, fromJSON)

# Access segment data
first_segment <- data$Segments_Parsed[[1]][[1]]
```

### Filtering by Intention

```python
# Get all Attention utterances
attention_data = df[df['Primary_Intention'] == 'Attention']

# Get all utterances with ambivalence
ambivalent_data = df[df['Ambivalex'] == 'ambivalex']

# Get Trust utterances with calm modifier
trust_calm = df[
    (df['Primary_Intention'] == 'Trust') &
    (df['Sub_Modifier'] == 'calm')
]
```

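Beyond boolean filtering, a quick cross-tabulation shows how the contrast conditions are distributed. A sketch with illustrative rows (the real `Primary_Intention` and `Ambivalex` values should be taken from the variable definitions above):

```python
import pandas as pd

# Illustrative rows; load ALL_TONALITY_DATA_COMBINED.csv for the real data
df = pd.DataFrame({
    'Primary_Intention': ['Attention', 'Trust', 'Trust', 'Reciprocity'],
    'Ambivalex': ['', 'ambivalex', '', 'ambivalex'],
})

# Utterance count per intention, split by ambivalence marking
table = pd.crosstab(df['Primary_Intention'], df['Ambivalex'] == 'ambivalex')
print(table)
```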
---

## Citation

When using this dataset, please cite:

```bibtex
@dataset{polhill_2026_tonalityprint,
  author    = {Polhill, Ronda},
  title     = {TonalityPrint: A Contrast-Structured Voice Dataset for Exploring Functional Tonal Intent, Ambivalence, and Inference-Time Prosodic Alignment v1.0},
  year      = 2026,
  publisher = {Zenodo},
  version   = {1.0.0},
  doi       = {10.5281/zenodo.17913895},
  url       = {https://doi.org/10.5281/zenodo.17913895}
}
```

---

## Contact

**Dataset Curator**: Ronda Polhill
**Email**: ronda@TonalityPrint.com
**DOI**: https://doi.org/10.5281/zenodo.17913895

For questions about:
- Variable definitions → This codebook
- Annotation methodology → METHODOLOGY.md
- Dataset usage → DATACARD.md
- Technical issues → ronda@TonalityPrint.com

---

**Version**: 1.0.0
**Last Updated**: January 24, 2026
**License**: CC BY-NC 4.0
DOWNLOAD_DATA.md
ADDED
# 📥 Download Dataset Files

## ⚠️ AUDIO AND ANNOTATION FILES NOT INCLUDED IN THIS REPOSITORY

This GitHub repository contains **documentation only**. The actual dataset files (audio and annotations) are hosted on Zenodo for permanent archival storage and official download tracking.

---

## 🎯 Download Complete Dataset

### Official Download Source: Zenodo
**DOI:** https://doi.org/10.5281/zenodo.17913895

**Direct Download Link:** https://zenodo.org/records/17913895/files/DATACARD.zip

---

## 📦 What You'll Get from Zenodo

### DATACARD.zip (42.9 MB) Contains:

**Audio Files:**
- 144 WAV files (48kHz, 16-bit, mono)
- ~11 minutes 5 seconds total duration
- Unprocessed, high-fidelity recordings
- Format: `TPV1_B1_UTT1_S_Att_SP-Ronda.wav`

**Annotation Files:**
- 144 JSON files (complete metadata)
- 144 CSV files (23 columns each)
- 1 combined CSV (ALL_TONALITY_DATA_COMBINED.csv)
- Total: 289 annotation files

**Documentation:**
- All files in this GitHub repository
- Plus additional technical specifications

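The filename format above appears to encode dataset version, batch, utterance, condition, and speaker. A hypothetical parser, inferred from the naming pattern rather than from an official spec (the meaning of the fixed `S` token is not documented here):

```python
def parse_tpv1_name(stem: str) -> dict:
    """Split e.g. 'TPV1_B1_UTT1_S_Att_SP-Ronda' into its fields (inferred)."""
    parts = stem.split('_')
    return {
        'dataset': parts[0],                       # 'TPV1'
        'batch': parts[1],                         # 'B1'
        'utterance': parts[2],                     # 'UTT1'
        # parts[3] ('S') is a fixed token of unclear meaning
        'condition': '_'.join(parts[4:-1]),        # e.g. 'Trus_calm_ambivalex'
        'speaker': parts[-1].removeprefix('SP-'),  # 'Ronda'
    }

print(parse_tpv1_name('TPV1_B1_UTT1_S_Att_SP-Ronda'))
```

Apply it to `Path(f).stem` for each WAV/JSON/CSV to build a metadata table from filenames alone.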
---

## 🚀 Quick Download Steps

### Step 1: Visit Zenodo
Go to: https://doi.org/10.5281/zenodo.17913895

### Step 2: Download DATACARD.zip
Click "Download" on the DATACARD.zip file (42.9 MB)

### Step 3: Extract Files
Unzip DATACARD.zip to access:
```
DATACARD/
├── audio/                    # 144 WAV files
├── annotations/
│   ├── json/                 # 144 JSON files
│   ├── csv/                  # 144 CSV files
│   └── ALL_TONALITY_DATA_COMBINED.csv
└── documentation/            # Technical docs
```

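After extraction, it can be worth confirming that the file counts match the manifest before analysis. A small sketch (directory names follow the tree above; `count_dataset_files` is a helper defined here, not shipped with the dataset):

```python
from pathlib import Path

def count_dataset_files(root: Path) -> dict:
    """Count extracted files using the layout shown in Step 3."""
    return {
        'wav': len(list((root / 'audio').glob('*.wav'))),
        'json': len(list((root / 'annotations' / 'json').glob('*.json'))),
        'csv': len(list((root / 'annotations' / 'csv').glob('*.csv'))),
    }

# Expect 144 WAV, 144 JSON, and 144 per-utterance CSV files:
# count_dataset_files(Path('DATACARD'))
```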
### Step 4: Start Using
Load the combined CSV or individual files:
```python
import pandas as pd
df = pd.read_csv('annotations/ALL_TONALITY_DATA_COMBINED.csv')
```

---

## 📊 Why Zenodo?

**Official Download Tracking:**
- Zenodo provides official download statistics
- Geographic distribution tracking
- Citation metrics via DOI
- Permanent archival storage

**You can trust Zenodo metrics in:**
- Grant applications
- Academic impact reports
- Funding justification
- Research publications

**Current Statistics:**
Visit the Zenodo page to see real-time download counts and usage metrics.

---

## 📚 What's In This GitHub Repository

This repository contains comprehensive documentation:

**Root Documentation:**
- README.md - Dataset overview
- DATASET_CARD.md - ML dataset card
- CITATION.cff - Machine-readable citation
- LICENSE - CC BY-NC 4.0 full text
- ETHICAL_USE_AND_LIMITATIONS.md - Ethical guidelines
- CHANGELOG.md - Version history
- QUICK_START.txt - Quick start guide
- REPOSITORY_GUIDE.md - Deployment guide

**Technical Documentation (documentation/ folder):**
- CODEBOOK.md - Variable definitions
- METHODOLOGY.md - Data collection procedures
- MANIFEST.txt - File inventory
- annotations.txt - Annotation guidelines
- continuous_indices.txt - Rating scales
- scripts.txt - Utterance scripts
- speaker_profile.txt - Speaker information
- tech_specs.txt - Technical specifications
- transcripts.txt - Transcriptions

---

## 🔗 Links

**Dataset Download (Zenodo):** https://doi.org/10.5281/zenodo.17913895
**Documentation (GitHub):** https://github.com/TonalityPrint/TonalityPrint-v1
**White Paper:** https://doi.org/10.5281/zenodo.17410581
**Website:** https://TonalityPrint.com
**Contact:** ronda@TonalityPrint.com

---

## ⚖️ License

**CC BY-NC 4.0** (Creative Commons Attribution-NonCommercial 4.0 International)

- ✅ Academic and research use: FREE
- ✅ Proper attribution required
- ❌ Commercial use: Requires licensing

**Commercial licensing:** Contact ronda@TonalityPrint.com

---

## 📖 Citation

When using this dataset, please cite:

```bibtex
@dataset{polhill_2026_tonalityprint,
  author    = {Polhill, Ronda},
  title     = {TonalityPrint: A Contrast-Structured Voice Dataset
               for Exploring Functional Tonal Intent, Ambivalence,
               and Inference-Time Prosodic Alignment v1.0},
  year      = 2026,
  publisher = {Zenodo},
  version   = {1.0.0},
  doi       = {10.5281/zenodo.17913895},
  url       = {https://doi.org/10.5281/zenodo.17913895}
}
```

---

**Need Help?**
- 📧 Email: ronda@TonalityPrint.com
- 🌐 Website: https://TonalityPrint.com
- 📚 Full Documentation: See files in this repository

---

**Version:** 1.0.0
**Last Updated:** January 30, 2026
**License:** CC BY-NC 4.0
LICENSE
ADDED
Creative Commons Attribution-NonCommercial 4.0 International Public License
=======================================================================================

By exercising the Licensed Rights (defined below), You accept and agree to be bound
by the terms and conditions of this Creative Commons Attribution-NonCommercial 4.0
International Public License ("Public License"). To the extent this Public License
may be interpreted as a contract, You are granted the Licensed Rights in consideration
of Your acceptance of these terms and conditions, and the Licensor grants You such
rights in consideration of benefits the Licensor receives from making the Licensed
Material available under these terms and conditions.

Section 1 – Definitions.

a. Adapted Material means material subject to Copyright and Similar Rights that is
   derived from or based upon the Licensed Material and in which the Licensed Material
   is translated, altered, arranged, transformed, or otherwise modified in a manner
   requiring permission under the Copyright and Similar Rights held by the Licensor.

b. Adapter's License means the license You apply to Your Copyright and Similar Rights
   in Your contributions to Adapted Material in accordance with the terms and conditions
   of this Public License.

c. Copyright and Similar Rights means copyright and/or similar rights closely related
   to copyright including, without limitation, performance, broadcast, sound recording,
   and Sui Generis Database Rights, without regard to how the rights are labeled or
   categorized.

d. Licensed Material means the artistic or literary work, database, or other material
   to which the Licensor applied this Public License.

e. Licensed Rights means the rights granted to You subject to the terms and conditions
   of this Public License, which are limited to all Copyright and Similar Rights that
   apply to Your use of the Licensed Material and that the Licensor has authority to
   license.

f. Licensor means Ronda Polhill, the individual(s) or entity(ies) granting rights under
   this Public License.

g. NonCommercial means not primarily intended for or directed towards commercial advantage
   or monetary compensation.

h. Share means to provide material to the public by any means or process that requires
   permission under the Licensed Rights, such as reproduction, public display, public
   performance, distribution, dissemination, communication, or importation, and to make
   material available to the public including in ways that members of the public may
   access the material from a place and at a time individually chosen by them.

i. You means the individual or entity exercising the Licensed Rights under this Public
   License. Your has a corresponding meaning.

Section 2 – Scope.

a. License grant.
   1. Subject to the terms and conditions of this Public License, the Licensor hereby
      grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable
      license to exercise the Licensed Rights in the Licensed Material to:
      A. reproduce and Share the Licensed Material, in whole or in part, for NonCommercial
         purposes only; and
      B. produce, reproduce, and Share Adapted Material for NonCommercial purposes only.

   2. Exceptions and Limitations. For the avoidance of doubt, where Exceptions and
      Limitations apply to Your use, this Public License does not apply, and You do not
      need to comply with its terms and conditions.

   3. Term. The term of this Public License is specified in Section 6(a).

   4. Media and formats; technical modifications allowed. The Licensor authorizes You
      to exercise the Licensed Rights in all media and formats whether now known or
      hereafter created, and to make technical modifications necessary to do so.

   5. Downstream recipients.
      A. Offer from the Licensor – Licensed Material. Every recipient of the Licensed
         Material automatically receives an offer from the Licensor to exercise the
         Licensed Rights under the terms and conditions of this Public License.
      B. No downstream restrictions. You may not offer or impose any additional or
         different terms or conditions on, or apply any Effective Technological Measures
         to, the Licensed Material if doing so restricts exercise of the Licensed Rights
         by any recipient of the Licensed Material.

   6. No endorsement. Nothing in this Public License constitutes or may be construed
      as permission to assert or imply that You are, or that Your use of the Licensed
      Material is, connected with, or sponsored, endorsed, or granted official status
      by, the Licensor or others designated to receive attribution as provided in
      Section 3(a)(1)(A)(i).

b. Other rights.
   1. Moral rights, such as the right of integrity, are not licensed under this Public
      License, nor are publicity, privacy, and/or other similar personality rights.

   2. The Licensor waives and/or agrees not to assert any such rights held by the
      Licensor to the limited extent necessary to allow You to exercise the Licensed
      Rights, but not otherwise.

Section 3 – License Conditions.

Your exercise of the Licensed Rights is expressly made subject to the following conditions.

a. Attribution.
   1. If You Share the Licensed Material (including in modified form), You must:
      A. retain the following if it is supplied by the Licensor with the Licensed Material:
         i. identification of the creator(s) of the Licensed Material and any others
            designated to receive attribution, in any reasonable manner requested by
            the Licensor (including by pseudonym if designated);
         ii. a copyright notice;
         iii. a notice that refers to this Public License;
         iv. a notice that refers to the disclaimer of warranties;
         v. a URI or hyperlink to the Licensed Material to the extent reasonably practicable;

      B. indicate if You modified the Licensed Material and retain an indication of any
         previous modifications; and
      C. indicate the Licensed Material is licensed under this Public License, and include
         the text of, or the URI or hyperlink to, this Public License.

   2. You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based
      on the medium, means, and context in which You Share the Licensed Material.

   3. If requested by the Licensor, You must remove any of the information required by
      Section 3(a)(1)(A) to the extent reasonably practicable.

   4. If You Share Adapted Material You produce, the Adapter's License You apply must
      not prevent recipients of the Adapted Material from complying with this Public
      License.

Section 4 – Sui Generis Database Rights.

Where the Licensed Rights include Sui Generis Database Rights that apply to Your use
of the Licensed Material:

a. for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse,
   reproduce, and Share all or a substantial portion of the contents of the database
   for NonCommercial purposes only;

b. if You include all or a substantial portion of the database contents in a database
   in which You have Sui Generis Database Rights, then the database in which You have
   Sui Generis Database Rights (but not its individual contents) is Adapted Material;
   and

c. You must comply with the conditions in Section 3(a) if You Share all or a substantial
   portion of the contents of the database.

Section 5 – Disclaimer of Warranties and Limitation of Liability.

a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE EXTENT POSSIBLE, THE
   LICENSOR OFFERS THE LICENSED MATERIAL AS-IS AND AS-AVAILABLE, AND MAKES NO
   REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER
   EXPRESS, IMPLIED, STATUTORY, OR OTHER.

b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE TO YOU ON ANY LEGAL
   THEORY FOR ANY SPECIAL, INDIRECT, INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY,
   OR OTHER LOSSES, COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
   USE OF THE LICENSED MATERIAL.

c. The disclaimer of warranties and limitation of liability provided above shall be
   interpreted in a manner that, to the extent possible, most closely approximates an
   absolute disclaimer and waiver of all liability.

Section 6 – Term and Termination.

a. This Public License applies for the term of the Copyright and Similar Rights licensed
   here. However, if You fail to comply with this Public License, then Your rights under
   this Public License terminate automatically.

b. Where Your right to use the Licensed Material has terminated under Section 6(a), it
   reinstates:
   1. automatically as of the date the violation is cured, provided it is cured within
      30 days of Your discovery of the violation; or
   2. upon express reinstatement by the Licensor.

c. For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor
   may have to seek remedies for Your violations of this Public License.

d. For the avoidance of doubt, the Licensor may also offer the Licensed Material under
   separate terms or conditions or stop distributing the Licensed Material at any time;
   however, doing so will not terminate this Public License.

Section 7 – Other Terms and Conditions.

a. The Licensor shall not be bound by any additional or different terms or conditions
   communicated by You unless expressly agreed.

b. Any arrangements, understandings, or agreements regarding the Licensed Material not
   stated herein are separate from and independent of the terms and conditions of this
   Public License.

Section 8 – Interpretation.

a. For the avoidance of doubt, this Public License does not, and shall not be interpreted
   to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material
   that could lawfully be made without permission under this Public License.

b. To the extent possible, if any provision of this Public License is deemed unenforceable,
   it shall be automatically reformed to the minimum extent necessary to make it
   enforceable. If the provision cannot be reformed, it shall be severed from this Public
   License without affecting the enforceability of the remaining terms and conditions.

c. No term or condition of this Public License will be waived and no failure to comply
   consented to unless expressly agreed to by the Licensor.

d. Nothing in this Public License constitutes or may be interpreted as a limitation upon,
   or waiver of, any privileges and immunities that apply to the Licensor or You,
   including from the legal processes of any jurisdiction or authority.

=========================================================================================

DATASET-SPECIFIC TERMS

Dataset: TonalityPrint Voice Dataset v1.0
DOI: https://doi.org/10.5281/zenodo.17913895
Licensor: Ronda Polhill
Contact: ronda@TonalityPrint.com

COMMERCIAL LICENSING:
For commercial use of this dataset, please contact ronda@TonalityPrint.com.

PROHIBITED USES:
Regardless of license terms, You are strictly prohibited from:
- Creating unauthorized voice clones or deepfakes of the speaker (Ronda Polhill)
- Using the dataset for deceptive purposes
- Using the dataset to train voice synthesis models for impersonation
- Any use that violates the speaker's biometric privacy or consent

ATTRIBUTION REQUIREMENT:
When using this dataset in research, please cite:

Polhill, R. (2026). TonalityPrint: A Contrast-Structured Voice Dataset for
Exploring Functional Tonal Intent, Ambivalence, and Inference-Time Prosodic
Alignment v1.0 [Data set]. Zenodo. https://doi.org/10.5281/zenodo.17913895

=========================================================================================

Creative Commons is not a party to its public licenses. Notwithstanding, Creative
Commons may elect to apply one of its public licenses to material it publishes and
in those instances will be considered the "Licensor." The text of the Creative Commons
public licenses is dedicated to the public domain under the CC0 Public Domain
Dedication. Except for the limited purpose of indicating that material is shared under
a Creative Commons public license or as otherwise permitted by the Creative Commons
policies published at creativecommons.org/policies, Creative Commons does not authorize
the use of the trademark "Creative Commons" or any other trademark or logo of Creative
Commons without its prior written consent including, without limitation, in connection
with any unauthorized modifications to any of its public licenses or any other
arrangements, understandings, or agreements concerning use of licensed material. For
the avoidance of doubt, this paragraph does not form part of the public licenses.

Creative Commons may be contacted at creativecommons.org.
MANIFEST.txt
ADDED
| 1 |
+
MANIFEST - TonalityPrint Voice Dataset v1.0
|
| 2 |
+
===========================================
|
| 3 |
+
|
| 4 |
+
This manifest provides a complete inventory of all files included in the TonalityPrint Voice Dataset v1.0.
|
| 5 |
+
|
| 6 |
+
DATASET OVERVIEW
|
| 7 |
+
----------------
|
| 8 |
+
Version: 1.0.0
|
| 9 |
+
Release Date: January 24, 2026
|
| 10 |
+
DOI: https://doi.org/10.5281/zenodo.17913895
|
| 11 |
+
License: CC BY-NC 4.0
|
| 12 |
+
|
| 13 |
+
Total Audio Files: 144 WAV files
|
| 14 |
+
Total JSON Files: 144 individual JSON annotations
|
| 15 |
+
Total CSV Files: 144 individual CSVs + 1 combined CSV (ALL_TONALITY_DATA_COMBINED.csv)
|
| 16 |
+
Documentation Files: 13 files (4 root + 9 in documentation folder)
|
| 17 |
+
Total Dataset Files: 446 files (144 audio + 144 JSON + 145 CSV/combined + 13 documentation)

Recording Format: 16-bit PCM WAV (uncompressed)
Recording Source: 48kHz, 32-bit float (Audacity) → Exported as 16-bit PCM
Sample Rate: 48,000 Hz (48kHz)
Bit Depth: 16-bit
Channels: Mono (1 channel)
Speaker: Single speaker (Ronda Polhill)
Language: English (American)
Duration Range: 3-6 seconds per utterance
Total Duration: ~11 minutes 5 seconds

Annotation Method: Expert practitioner (perceptual assessment)
Annotation Completeness: 100% (all files fully annotated)
Quality Control: ~18.05% of corpus re-recorded after proprietary heuristic audit

FILE STRUCTURE
--------------
TonalityPrint_v1/
│
├── README.md                  [ML Dataset Card - Primary documentation]
├── QUICK_START.txt            [4-step quick start guide]
├── LICENSE.txt                [CC BY-NC 4.0 License - Full legal text]
├── CITATION.cff               [Machine-readable citation metadata]
│
├── documentation/             [Technical reference documentation]
│   ├── CODEBOOK.md            [Variable definitions - All 23 CSV columns]
│   ├── METHODOLOGY.md         [Data collection & annotation procedures]
│   ├── MANIFEST.txt           [This file - Complete file inventory]
│   ├── annotations.txt        [Annotation guidelines and documentation]
│   ├── continuous_indices.txt [Continuous intensity rating guidelines]
│   ├── scripts.txt            [Script documentation]
│   ├── speaker_profile.txt    [Speaker information and characteristics]
│   ├── tech_specs.txt         [Technical specifications]
│   └── transcripts.txt        [Transcript documentation]
│
├── audio/                     [Audio recordings - 144 files]
│   ├── TPV1_B1_UTT1_S_Att_SP-Ronda.wav
│   ├── TPV1_B1_UTT1_S_Baseneutral_SP-Ronda.wav
│   ├── TPV1_B1_UTT1_S_Cogen_SP-Ronda.wav
│   ├── ... [141 more WAV files]
│   └── TPV1_B6_UTT18_S_Trus_SP-Ronda.wav
│
└── annotations/               [Annotation data - 289 files total]
    ├── json/                  [Original JSON annotations - 144 files]
    │   ├── TPV1_B1_UTT1_S_Att_SP-Ronda.json
    │   ├── ... [143 more JSON files]
    │   └── TPV1_B6_UTT18_S_Trus_SP-Ronda.json
    │
    ├── csv/                   [CSV format annotations - 144 files]
    │   ├── TPV1_B1_UTT1_S_Att_SP-Ronda.csv
    │   ├── ... [143 more CSV files]
    │   └── TPV1_B6_UTT18_S_Trus_SP-Ronda.csv
    │
    └── ALL_TONALITY_DATA_COMBINED.csv [Combined dataset - All 144 rows in single file]


AUDIO FILES INVENTORY (144 total)
----------------------------------

Batch 1 (B1) - Utterances 1-3:
- TPV1_B1_UTT1_S_Att_SP-Ronda.wav
- TPV1_B1_UTT1_S_Baseneutral_SP-Ronda.wav
- TPV1_B1_UTT1_S_Cogen_SP-Ronda.wav
- TPV1_B1_UTT1_S_Emre_SP-Ronda.wav
- TPV1_B1_UTT1_S_Reci_affi_ambivalex_SP-Ronda.wav
- TPV1_B1_UTT1_S_Reci_affi_SP-Ronda.wav
- TPV1_B1_UTT1_S_Reci_SP-Ronda.wav
- TPV1_B1_UTT1_S_Trus_SP-Ronda.wav
- TPV1_B1_UTT2_S_Att_SP-Ronda.wav
- TPV1_B1_UTT2_S_Baseneutral_SP-Ronda.wav
- TPV1_B1_UTT2_S_Cogen_SP-Ronda.wav
- TPV1_B1_UTT2_S_Emre_SP-Ronda.wav
- TPV1_B1_UTT2_S_Reci_colla_ambivalex_SP-Ronda.wav
- TPV1_B1_UTT2_S_Reci_colla_SP-Ronda.wav
- TPV1_B1_UTT2_S_Reci_SP-Ronda.wav
- TPV1_B1_UTT2_S_Trus_SP-Ronda.wav
- TPV1_B1_UTT3_S_Att_SP-Ronda.wav
- TPV1_B1_UTT3_S_Baseneutral_SP-Ronda.wav
- TPV1_B1_UTT3_S_Cogen_SP-Ronda.wav
- TPV1_B1_UTT3_S_Emre_SP-Ronda.wav
- TPV1_B1_UTT3_S_Reci_SP-Ronda.wav
- TPV1_B1_UTT3_S_Trus_calm_ambivalex_SP-Ronda.wav
- TPV1_B1_UTT3_S_Trus_calm_SP-Ronda.wav
- TPV1_B1_UTT3_S_Trus_SP-Ronda.wav

[Batches 2-6 follow the same pattern with 24 files each - complete list available in dataset]


ANNOTATION FILES SUMMARY
-------------------------

JSON Files (144):
- One JSON file per audio file
- Contains complete annotation metadata
- Segment-level temporal data
- Five tonality indices (0-100 scale)

CSV Files (144):
- One CSV file per audio file
- 23 columns of annotation data
- Flat format for easy analysis
- Same data as JSON in tabular format

Combined CSV (1):
- ALL_TONALITY_DATA_COMBINED.csv
- All 144 annotations in single file
- Header row + 144 data rows
- Complete dataset for bulk analysis
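The combined table can be loaded with the Python standard library alone. A minimal sketch (the actual 23 column names are defined in documentation/CODEBOOK.md; the column used in the comment below is illustrative):

```python
import csv

def load_combined(path: str) -> list[dict]:
    """Read ALL_TONALITY_DATA_COMBINED.csv (header row + 144 data rows)
    into a list of dicts keyed by the CODEBOOK.md column names."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

# rows = load_combined("annotations/ALL_TONALITY_DATA_COMBINED.csv")
# len(rows) == 144; each row maps a column name to its string value
```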

DOCUMENTATION FILES
-------------------

Root Level (4 files):

1. README.md (27K)
   - ML Dataset Card format
   - Dataset overview and structure
   - Supported tasks and use cases
   - Known biases and limitations
   - Citation information

2. QUICK_START.txt (9K)
   - 4-step quick start guide
   - Common tasks reference
   - Dataset quick facts
   - Citation examples

3. LICENSE.txt (430 bytes)
   - CC BY-NC 4.0 legal text
   - Non-commercial use permissions
   - Commercial licensing contact

4. CITATION.cff (3.7K)
   - Machine-readable citation metadata
   - BibTeX-compatible format
   - CFF v1.2.0 standard

Documentation Folder (9 files):

5. CODEBOOK.md (20K)
   - All 23 CSV column definitions
   - File naming conventions
   - Sub-modifier definitions
   - Tonality indices descriptions
   - Usage examples

6. METHODOLOGY.md (31K)
   - Theoretical framework
   - Recording environment
   - Annotation procedures
   - Quality control process
   - Known biases

7. MANIFEST.txt (This file, 16K)
   - Complete file inventory
   - Directory structure
   - File descriptions
   - Version information

8. annotations.txt (4K)
   - Annotation guidelines
   - Annotation procedures

9. continuous_indices.txt (767 bytes)
   - Continuous intensity rating guidelines
   - Scale definitions (0-100)
   - Intent abbreviations

10. scripts.txt (1.4K)
    - Script documentation

11. speaker_profile.txt (1.3K)
    - Speaker characteristics
    - Background information

12. tech_specs.txt (1.2K)
    - Technical specifications
    - Recording equipment details

13. transcripts.txt (1.4K)
    - Transcript documentation


FILE CHECKSUMS
--------------
Note: File integrity checksums (MD5 and SHA256) are automatically generated by Zenodo
and can be viewed on the dataset's Zenodo record page.

For local verification of downloaded files, users can generate checksums using:
- Linux/Mac: `md5sum *` or `shasum -a 256 *`
- Windows: `certutil -hashfile <filename> MD5` or `certutil -hashfile <filename> SHA256`

Zenodo provides file-level checksums for all files in the dataset, ensuring data integrity
and enabling verification of downloads.
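As a cross-platform alternative to the shell commands above, the hash can be computed in Python and compared against the value shown on the Zenodo record page (the expected value below is a placeholder, not a real checksum):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the checksum published on the Zenodo record page:
# assert sha256_of("TPV1_B1_UTT1_S_Att_SP-Ronda.wav") == "<value from Zenodo>"
```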


VERSION INFORMATION
-------------------
Dataset Version: 1.0.0
Release Date: January 23, 2026
Last Updated: January 24, 2026
DOI: https://doi.org/10.5281/zenodo.17913895
Zenodo Record: https://zenodo.org/record/17913895
License: CC BY-NC 4.0 International

Version History:
- v1.0.0 (January 23, 2026): Initial public release
  - 144 audio files (WAV format)
  - 144 JSON annotations
  - 144 CSV annotations + 1 combined CSV
  - 13 documentation files
  - Complete quality control audit (~18.05% re-recorded)


DATASET SUMMARY
---------------

Total Package Contents:
- Audio Files: 144 WAV files (~11 min 5 sec total)
- Annotation Files: 289 files (144 JSON + 144 CSV + 1 combined CSV)
- Documentation: 13 files
- Total: 446 files

Key Features:
1. Single speaker (Ronda Polhill) - eliminates speaker variability
2. Five tonality indices (0-100 continuous scale)
3. Six primary tonal intents
4. 24 optional sub-modifiers
5. Ambivalence marker for complex tonality
6. Expert practitioner annotations (speaker = annotator)
7. Quality controlled (~18.05% re-recorded for consistency)

Quality Assurance:
- Proprietary heuristic audit: ~80%+ acoustic-intent alignment
- Re-recording: ~18.05% of corpus for improved consistency
- Known bias documented: Cognitive Energy systematic elevation
- Completeness: 100% of files fully annotated

License & Usage:
- License: CC BY-NC 4.0 (Non-commercial use)
- Commercial licensing: Contact ronda@TonalityPrint.com
- DOI: https://doi.org/10.5281/zenodo.17913895


CONTACT INFORMATION
-------------------

Dataset Curator: Ronda Polhill
Email: ronda@TonalityPrint.com
Zenodo Record: https://zenodo.org/record/17913895
License: CC BY-NC 4.0
Commercial Licensing: ronda@TonalityPrint.com

Related Work:
White Paper: "Tonality as Attention" (Polhill, 2025)
DOI: https://doi.org/10.5281/zenodo.17410581


CITATION
--------

BibTeX:
@dataset{polhill_2026_tonalityprint,
  author    = {Polhill, Ronda},
  title     = {TonalityPrint Voice Dataset v1.0},
  year      = 2026,
  publisher = {Zenodo},
  version   = {1.0.0},
  doi       = {10.5281/zenodo.17913895},
  url       = {https://doi.org/10.5281/zenodo.17913895}
}

APA:
Polhill, R. (2026). TonalityPrint Voice Dataset v1.0 [Data set]. Zenodo.
https://doi.org/10.5281/zenodo.17913895


---
END OF MANIFEST
TonalityPrint Voice Dataset v1.0
Version 1.0.0 | January 24, 2026
DOI: https://doi.org/10.5281/zenodo.17913895
---
METHODOLOGY.md
ADDED
@@ -0,0 +1,579 @@
# METHODOLOGY - TonalityPrint Voice Dataset v1.0

## Research Framework

### Theoretical Foundation: "Tonality as Attention"

The TonalityPrint Voice Dataset supports the *Tonality as Attention* theoretical framework, developed by researcher Ronda Polhill, which proposes that human vocal tonality may serve as a primary mechanism for directing and modulating attention in human-AI communications.

Unlike traditional emotion datasets that successfully categorize static affective states (e.g., "Happy," "Sad"), the TonalityPrint specialized corpus focuses on **Functional Tonal Intents** - active signals used to orient focus, calibrate trust, regulate reciprocity, and signal cognitive state during complex dialogue. The dataset is designed to support **Differential Latent Analysis (DLA)**, a hypothesized protocol for isolating these socio-pragmatic features by holding lexical content and speaker identity constant, for compatibility with existing contrastive steering methods.

---

## Data Collection

### Recording Environment

**Location**: Controlled home studio
**Acoustic Treatment**: Minimal ambient noise, consistent conditions across all recordings
**Speaker Position**: Seated, consistent positioning maintained throughout recording sessions

**Recording Equipment**:
- **Microphone**: Blue Yeti USB microphone
  - Mode: Cardioid (directional)
  - Distance: ~6-8 inches from speaker
- **Recording Software**: Audacity
  - Real-time effects: Disabled (to preserve original tonality signal)
  - Preset settings: Consistent across all recordings

**Technical Specifications**:
- **Recording Format**: 48kHz, 32-bit float WAV (captured in Audacity)
- **Output Format**: 48kHz, 16-bit PCM WAV (uncompressed)
- **Sample Rate**: 48,000 Hz (48 kHz)
- **Bit Depth**: 16-bit (final output)
- **Channels**: Mono (1 channel)
- **File Format**: WAV (uncompressed PCM)
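The specifications above can be verified programmatically with the standard-library `wave` module; a minimal sketch (the file path in the comment is one example file from the audio inventory):

```python
import wave

def check_specs(path: str) -> None:
    """Verify a dataset WAV file matches the published output specs:
    48 kHz sample rate, 16-bit PCM (2-byte samples), mono."""
    with wave.open(path, "rb") as w:
        assert w.getframerate() == 48_000, "expected 48 kHz sample rate"
        assert w.getsampwidth() == 2, "expected 16-bit PCM"
        assert w.getnchannels() == 1, "expected mono audio"

# check_specs("audio/TPV1_B1_UTT1_S_Att_SP-Ronda.wav")
```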

**Post-Processing Policy**:
To preserve 100% of human tonality variance and support maximum fidelity for micro-tonal expression analysis, this dataset provides **raw, unprocessed audio files**:

- **Processing:** **None.** No EQ, compression, or noise reduction was applied. Real-time effects were intentionally disabled to preserve raw tonal fidelity for analysis.

**Note**: Minimal background noise may be present in some recordings. This was intentional, to avoid altering nuanced vocal tonality through post-processing artifacts.

### Speaker Information

**Speaker**: Ronda (single-speaker dataset)
**Language**: Native English speaker, neutral/mobile American accent
**Speaker Characteristics**:
- Experienced in vocal tonality modulation
- Developed the "Tonality as Attention" framework

**Speaker Consent**: Full informed consent obtained for recording, annotation, and public dataset release.

### Recording Procedure

#### Utterances
The dataset was collected across **6 batches** (B1-B6). Utterances are numbered continuously (UTT1-UTT18), three per batch, with each utterance recorded in eight parallel prosodic states (24 files per batch).

**Timeline**:
- First recordings: December 2025
- Final recordings: January 2026
- Total collection period: ~1 month

**Dataset Statistics**:
- **Total Files**: 144 audio samples
- **Duration per File**: 3-6 seconds (approximately)
- **Total Duration**: ~11 minutes 5 seconds
- **Single Speaker**: All files recorded by Ronda

#### Utterance Design

The core structure of the corpus is the "**Fixed-Phrase Octet**." This design controls lexical and biometric variability to isolate prosodic intent as the primary variable.

- **Structure:** 18 utterances × 8 parallel prosodic states.

Each utterance was deliberately crafted to express specific tonality intentions, the **8 Parallel States**, recorded as follows:

1. **Baseline Creation**: (1) Neutral baseline utterance established for comparison
2. **Intention Targeting**: (5) Utterances designed to emphasize specific tonality dimensions:
   - Attention (Att)
   - Trust (Trus)
   - Reciprocity (Reci)
   - Empathy Resonance (Emre)
   - Cognitive Energy (Cogen)
3. **Modifier Application**: (1) Sub-category modifiers added nuance:
   - Affirmative, collaborative, calm, corrective, engaged, etc.
4. **Ambivalence Encoding**: (1) Select utterances intentionally crafted to express complex or mixed tonality (marked as "ambivalex")
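The octet for one utterance can be reconstructed from the audio inventory in MANIFEST.txt; a sketch for Batch 1, Utterance 1 (note that the modified state varies by utterance - UTT2 uses `Reci_colla` and UTT3 uses `Trus_calm`):

```python
# Eight parallel states for B1/UTT1, taken from the MANIFEST.txt inventory.
STATES_UTT1 = [
    "Baseneutral",                             # 1 neutral baseline
    "Att", "Trus", "Reci", "Emre", "Cogen",    # 5 primary intents
    "Reci_affi",                               # 1 intent + sub-modifier
    "Reci_affi_ambivalex",                     # 1 intent + sub-modifier + ambivalence
]

filenames = [f"TPV1_B1_UTT1_S_{s}_SP-Ronda.wav" for s in STATES_UTT1]
assert len(filenames) == 8  # 18 utterances x 8 states = 144 files
```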

#### Recording Protocol

1. **Pre-Recording Setup**:
   - Blue Yeti microphone positioned 6-8 inches from speaker
   - Cardioid mode selected for directional recording
   - Audacity configured: 48kHz, 32-bit float, mono
   - Real-time effects disabled to preserve natural tonality
   - Audio levels calibrated to avoid clipping
   - Speaker reviews utterance text
   - Speaker mentally prepares tonality intention (Primary + Modifier + Ambivalence if applicable)

2. **Recording Capture**:
   - Speaker delivers utterance with intended tonality
   - Recording captured at 48kHz, 32-bit float in Audacity
   - Minimal silence at beginning and end (natural speech boundaries)
   - No real-time processing applied during capture
   - Raw audio preserved without noise reduction or effects

3. **Immediate Quality Check**:
   - Playback review immediately after recording
   - Check for technical issues (clipping, background noise, mic artifacts)
   - Re-recording if necessary to meet quality standards
   - Final approval by speaker/researcher

4. **Export & File Naming**:
   - Export from Audacity as 16-bit PCM WAV (48kHz, mono)
   - No post-processing, normalization, or effects applied
   - Files named using a systematic convention:
     - Format: `TPV1_[Batch]_[Utterance]_[Type]_[Intent]_[Modifier]_[Ambivalex]_SP-Ronda.wav`
     - Names encode: Version, Batch, Utterance number, Type, Intention, Modifier, Ambivalence, Speaker
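The convention above can be parsed mechanically. A sketch parser, assuming the patterns observed in the file inventory (capitalized intent labels, lowercase sub-modifiers, optional trailing `ambivalex`):

```python
import re

# Field names mirror the documented naming convention; Modifier and
# "ambivalex" are optional segments.
PATTERN = re.compile(
    r"^TPV1_(?P<batch>B\d+)_(?P<utt>UTT\d+)_(?P<type>[A-Z])_"
    r"(?P<intent>[A-Za-z]+)"
    r"(?:_(?P<modifier>(?!ambivalex)[a-z]+))?"
    r"(?:_(?P<ambivalex>ambivalex))?"
    r"_SP-(?P<speaker>[A-Za-z]+)\.wav$"
)

def parse_name(name: str) -> dict:
    """Split a dataset file name into its documented fields."""
    m = PATTERN.match(name)
    if m is None:
        raise ValueError(f"unrecognized file name: {name}")
    return m.groupdict()
```

For example, `parse_name("TPV1_B1_UTT3_S_Trus_calm_ambivalex_SP-Ronda.wav")` yields batch `B1`, utterance `UTT3`, intent `Trus`, modifier `calm`, with the ambivalence marker set.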

---

## Annotation Methodology

### A. Functional - Defined not by *feeling*, but by *doing*

- **Functional Tonal Intents:** 5 primary functional tonal intents
- **Sub-modifiers:** 24 optional sub-modifiers
- **Cross modifier:** Ambivalence (annotated `ambivalex` when applicable) is treated as perceptual entropy, a cross-intent feature rather than "noise." It represents transitional states where the speaker simultaneously expresses competing intentions (e.g., "Trust" but "Guarded"), effectively modeling uncertainty.

### B. Multi-layered, Human-in-the-Loop (HITL)

TonalityPrint v1.0 utilizes a multi-layered, human-in-the-loop annotation architecture. This process aims to ensure that primary Functional Tonal Intent labels are grounded in both real-world performance capability and objective spectral data.

### Expert Practitioner Annotation
Annotations were not derived from post-hoc labeling of random speech, but from a **practitioner-verified forward protocol** grounded in high-stakes interaction outcomes.

**Annotator**: Ronda Polhill (speaker and dataset creator)
**Expertise**: Expert practitioner, architect of the "Tonality as Attention" framework with real-world application

**Further Practitioner Background**:
- **Experience Base**: 8,873+ high-stakes customer interactions (July 2024 - March 2025)
- **Performance Context** (generative hypothesis, not causal proof): ~35.85% average conversion rate during observation period
- **Tonality Expertise**: Documented ability to modulate tonality adaptively in consequential interactions
- **Framework Application**: Practical experience developing/implementing "Tonality as Attention" principles in real time

**Ecological Provenance**:

This dataset is grounded in **ecological feasibility**.

Annotations reflect tonality patterns motivated by real-world deployment rather than theoretical constructs. The practitioner's annotation decisions are informed by:
- Observed correlations between specific tonal patterns and interaction outcomes
- Trial-and-error refinement across thousands of high-stakes conversations
- Direct feedback from 168+ customer interactions, including 68 unsolicited comments about an *"AI-adjacent, yet trusted"* voice tone quality

- **Practitioner Note:** A subset of interactions (*n=68*) involved spontaneous listener feedback describing the voice as "AI-adjacent" or "robotic" while maintaining high trust. This counter-intuitive finding - that "robotic" precision can co-occur with trust - motivated the rigorous isolation of TonalityPrint's specific functional Primary Tonal Intent states.

**Annotation Method**: Expert perceptual assessment combined with acoustic analysis
**Source Designation**: "Recording - Expert Practitioner Annotator"

### Annotation Process

#### 1. Practitioner Perceptual Scoring

**Primary Method**: Expert perceptual assessment
The practitioner (Ronda) scored each utterance based on:
- Intensive familiarity with tonality dimensions from real-world application
- Perceptual assessment of tonal intent as expressed in the recording
- Reference to internal calibration developed through 8,873+ customer interactions

**Scoring Protocol**:
- Each utterance reviewed immediately after recording
- All five tonality indices scored independently on a 0-100 scale
- Primary intention category and modifiers assigned
- Ambivalence marker applied when competing tonal cues detected
- Notes added for quality observations or systematic patterns

#### 2. Acoustic Analysis Support

While primary scoring was perceptual, acoustic features were considered, including:
- Fundamental frequency (F0) patterns and pitch contours
- Speech rate and temporal dynamics
- Energy contours and amplitude variations
- Vocal quality and resonance characteristics

#### 3. Quality Control - Proprietary Heuristic Audit

**Audit Process**:
After initial annotation, all samples underwent a blind, proprietary heuristic audit to verify consistency:
- Acoustic profiles analyzed without access to practitioner labels
- Samples flagged when acoustic features diverged from stated intention
- Flagged samples reviewed for potential re-recording

**Audit Results**:
- **~80%+ alignment rate**: Acoustic profiles matched intended tonal intent categories
- **~18.05% re-recorded**: Samples where acoustic features diverged were re-recorded
- **Cross-intent patterns**: Cognitive Energy systematically elevated (intentionally retained)

**Resolution Process**:
- Samples with acoustic-intent misalignment were reviewed
- If the acoustic profile didn't support the intended tonality, the utterance was re-recorded
- Some divergences retained as genuine ambivalence or tonal complexity
- All decisions documented in the Notes field

#### 4. Tonality Index Scoring

Each utterance receives five tonality index scores (0-100 scale) based on expert practitioner assessment:

**Trust Index (0-100)**:
- **Definition**: Perceived safety, authenticity, stability, or credibility conveyed through tonal authority and controlled resonance
- **Perceptual Indicators**: Vocal steadiness, warm resonance, consistent pitch, relaxed quality
- **Interpretation**:
  - Low (0-33): Uncertain, hesitant
  - Moderate (34-66): Moderately reliable
  - High (67-100): Highly trustworthy

**Reciprocity Index (0-100)**:
- **Definition**: How tonality invites response, signals openness, and creates conversational balance rather than dominance
- **Perceptual Indicators**: Invitational intonation, cooperative prosody, turn-taking signals
- **Interpretation**:
  - Low (0-33): Unilateral, one-sided
  - Moderate (34-66): Somewhat collaborative
  - High (67-100): Highly collaborative, balanced

**Empathy Resonance Index (0-100)**:
- **Definition**: Function of emotional attunement where vocal tone mirrors or harmonizes with the perceived listener state
- **Perceptual Indicators**: Warm tone, gentle inflections, emotional openness, attuned quality
- **Interpretation**:
  - Low (0-33): Detached, impersonal
  - Moderate (34-66): Moderately attuned
  - High (67-100): Highly empathetic, resonant

**Cognitive Energy Index (0-100)**:
- **Definition**: Activation and momentum; tonal pacing, rhythm, and emphasis patterns signaling cognitive load or intent
- **Perceptual Indicators**: Speech rate, articulation precision, dynamic energy, mental engagement
- **Interpretation**:
  - Low (0-33): Low engagement, slow pacing
  - Moderate (34-66): Moderate processing
  - High (67-100): High mental energy, dynamic
- **Known Issue**: Shows systematic elevation (~90-100) across most utterances, possibly due to the speaker's natural "AI-adjacent" prosodic style. Intentionally retained for transparency.

**Attention Index (0-100)**:
- **Definition**: How effectively tonality orients focus, directs perceptual priority, and maintains engagement
- **Perceptual Indicators**: Clarity, emphasis patterns, salience markers, commanding quality
- **Interpretation**:
  - Low (0-33): Unfocused, diffuse
  - Moderate (34-66): Moderately engaging
  - High (67-100): Highly focused, attention-commanding

**Scoring Notes**:
- All scores reflect the practitioner's expert perceptual assessment
- Scores informed by 8,873+ customer interactions where similar patterns correlated with measurable outcomes
- Not algorithmically derived; scores represent human expert judgment
- Continuous 0-100 scale enables gradient analysis beyond categorical classification
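The interpretation bands shared by all five indices can be expressed as a small helper; a minimal sketch:

```python
def band(score: float) -> str:
    """Map a 0-100 tonality index score to the interpretation bands used
    for all five indices: Low (0-33), Moderate (34-66), High (67-100)."""
    if not 0 <= score <= 100:
        raise ValueError("index scores are defined on a 0-100 scale")
    if score <= 33:
        return "Low"
    if score <= 66:
        return "Moderate"
    return "High"
```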
|
| 255 |
+
|
| 256 |
+
\---
|
| 257 |
+
|
| 258 |
+
\#\#\# Ambivalence Annotation
|
| 259 |
+
|
| 260 |
+
Most prosody and emotion recognition datasets treat mixed or contradictory tonal signals as \*\*annotation errors\*\* or \*\*noise to be eliminated\*\*. TonalityPrint takes the opposite approach: \*\*ambivalence is systematically annotated as a feature, not a bug\*\*.
|
| 261 |
+
|
| 262 |
+
This represents a fundamental shift in how vocal tonality complexity is captured and understood in voice AI. Real-world communication frequently involves simultaneous, competing tonal intentions \- e.g., warmth mixed with caution, confidence mixed with uncertainty, engagement mixed with reservation. By explicitly and systematically marking and preserving these ambivalent states, TonalityPrint potentially provides researchers with the substrate to study tonal complexity as it naturally occurs.
|
| 263 |
+
|
| 264 |
+
\#\#\#\# What is Ambivalence in the TonalityPrint Framework?
|
| 265 |
+
|
| 266 |
+
\*\*Definition\*\*:
|
| 267 |
+
Tonalityprint proposes to define Ambivalence as occurring when \*\*two or more contradictory or competing tonal sub-modifier layers are present almost simultaneously\*\* within a single utterance. These competing signals are expressed subtly and realistically through micro-mixed acoustic cues that create tonal complexity.
|
| 268 |
+
|
| 269 |
+
\*\*Key Characteristics\*\*:
|
| 270 |
+
\- Not a binary "mixed emotion" but \*\*nuanced layering\*\* of competing prosodic signals
|
| 271 |
+
\- Present at the sub-modifier level (e.g., warm \+ cautious, engaged \+ hesitant)
|
| 272 |
+
\- Reflects \*\*authentic human communication\*\* where intentions are rarely pure or singular
|
| 273 |
+
\- Occurs across all five primary tonal intents (Trust, Attention, Reciprocity, Empathy Resonance, Cognitive Energy)
|
| 274 |
+
|
| 275 |
+
\*\*Examples of Ambivalent Tonality\*\*:
|
| 276 |
+
1\. \*\*Reciprocity \+ Engaged \+ Caution\*\*: Warm, invitational prosody with subtle markers of reservation or uncertainty
|
| 277 |
+
2\. \*\*Trust \+ Confident \+ Doubt\*\*: Authoritative tone with micro-hesitations or slight pitch instability
|
| 278 |
+
3\. \*\*Empathy Resonance \+ Warm \+ Concern\*\*: Emotionally attuned with underlying worry or apprehension
|
| 279 |
+
4\. \*\*Attention \+ Focused \+ Reluctance\*\*: Clear, directed communication with subtle withdrawal cues
|
| 280 |
+
5\. \*\*Cognitive Energy \+ Enthusiastic \+ Skeptical\*\*: High energy with questioning or disbelief undertones
**Nuanced Cues Captured**:
Ambivalence annotation captures subtle acoustic markers including:
- Concern (empathetic worry layered into otherwise neutral delivery)
- Disbelief (skepticism mixed with engagement)
- Doubt (uncertainty within otherwise confident tonality)
- Hesitancy (pause or tempo markers within otherwise fluid speech)
- Regret (backward-looking tonality mixed with forward action)
- Reluctance (resistance cues within cooperative prosody)
- Worry (anticipatory concern within supportive tonality)
#### Ambivalence Detection Methodology

**How Ambivalence is Hypothetically Identified**:

The practitioner (Ronda) identifies ambivalence through a combination of:

1. **Intentional Design** (Pre-Recording):
   - Some utterances are deliberately crafted to express ambivalent tonality
   - Complex utterances are designed with: Primary Intent + Sub-modifier + Ambivalence layer
   - Example: "Trust + Calm + Ambivalence" requires delivering trustworthy, calm prosody with subtle competing uncertainty cues

2. **Real-Time Perceptual Assessment** (During Recording):
   - Practitioner monitors for unintended competing tonal signals
   - Detects when acoustic delivery includes contradictory prosodic cues
   - Recognizes when an utterance contains layered, mixed intentions

3. **Post-Recording Review** (Annotation Phase):
   - Playback analysis identifies subtle competing signals
   - Practitioner evaluates whether mixed cues were intentional or artifacts
   - Decision made to mark as ambivalent vs. re-record
**Decision Criteria for Ambivalence Marking**:

An utterance receives the `ambivalex` marker when:
- Two or more competing sub-modifier cues are clearly present
- The mixed signals are subtle enough to be realistic (not exaggerated)
- The ambivalence serves a communicative purpose (not a technical error)
- The acoustic profile contains identifiable markers of all competing intentions
- The practitioner can articulate which specific tonal layers are competing

An utterance is **NOT** marked as ambivalent when:
- Mixed signals are due to technical recording issues (mic artifacts, noise)
- Competing cues are so subtle they are indistinguishable from baseline
- The ambivalence is unintentional and not representative of the target tonality
- Re-recording can produce a clearer, less ambiguous version
#### Ambivalence Annotation Process

**Step-by-Step Workflow**:

1. **Utterance Design** (for intentional ambivalence):
   - Identify the target primary intention (e.g., Reciprocity)
   - Select the primary sub-modifier (e.g., Engaged)
   - Add the ambivalence layer (e.g., subtle caution/reservation markers)
   - Mental preparation: hold all tonal intentions simultaneously during delivery

2. **Recording Execution**:
   - Deliver the utterance with intentional tonal layering
   - Maintain the primary intention while introducing competing cues
   - Keep competing signals subtle and realistic (not theatrical)

3. **Immediate Review**:
   - Play back immediately after recording
   - Assess: Are all intended tonal layers audibly present?
   - Assess: Does the ambivalence sound natural or forced?
   - Decision: accept, re-record, or adjust the ambivalence marker

4. **Annotation**:
   - Primary_Intention field: dominant tonal intent (e.g., "Reciprocity")
   - Sub_Modifier field: primary sub-modifier (e.g., "enga" for Engaged)
   - **Ambivalex field**: marked as "ambivalex" if competing layers are present
   - Notes field: documents which specific competing cues are present

5. **File Naming**:
   - Complex utterances with ambivalence receive the `ambivalex` marker in the filename
   - Example: `TPV1_B1_UTT1_S_Reci_enga_ambivalex_SP-Ronda.wav`
   - This enables easy filtering and analysis of ambivalent samples
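The naming convention above can be used directly for such filtering. A minimal sketch in Python (the `audio/` directory and the exact field order are assumptions based on the filename examples in this document):

```python
from pathlib import Path

def parse_tonalityprint_name(path: Path) -> dict:
    """Split a TonalityPrint filename into its annotation fields.

    Assumed pattern, per the examples above:
    TPV1_B1_UTT1_S_Reci_enga_ambivalex_SP-Ronda.wav
    version_batch_utterance_type_intent[_submodifier][_ambivalex]_speaker
    """
    parts = path.stem.split("_")
    info = {
        "version": parts[0],
        "batch": parts[1],
        "utterance": parts[2],
        "utt_type": parts[3],          # S = Statement, Q = Question
        "intent": parts[4],
        "speaker": parts[-1],
        "ambivalent": "ambivalex" in parts,
    }
    # Any remaining middle token is the sub-modifier abbreviation.
    middle = [p for p in parts[5:-1] if p != "ambivalex"]
    info["sub_modifier"] = middle[0] if middle else None
    return info

# Filter ambivalent samples from an audio directory (path is an assumption):
audio_dir = Path("audio")
ambivalent = sorted(audio_dir.glob("*_ambivalex_*.wav")) if audio_dir.is_dir() else []
```

A parser like this would let researchers split the corpus into single, compound, and complex (ambivalent) subsets without touching the annotation files.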
**Validation in Quality Control Process**:

During the proprietary heuristic audit (~18.05% of the corpus was re-recorded):

- **Ambivalent samples received special scrutiny**: The audit verified that acoustic features contained identifiable markers of competing tonal cues
- **Divergences sometimes indicated successful ambivalence**: When an acoustic profile showed "mixed signals," this was often a correct annotation of ambivalence rather than an error
- **Strategic retention**: Some samples flagged as "divergent" were retained specifically because the acoustic-intent mismatch represented genuine ambivalent tonality
- **Documentation**: All ambivalent samples have detailed notes explaining which competing cues are present

This means ambivalence survived the QC process when:
1. Competing acoustic cues were clearly detectable
2. Mixed signals were subtle enough to be realistic
3. The ambivalence served a communicative/research purpose
4. The practitioner could articulate the specific tonal layers
#### Prevalence and Distribution

**Dataset Statistics** (estimated from corpus structure):
- Ambivalent samples represent a **minority class** in the dataset
- Each batch (18 utterances) includes select ambivalent samples
- Not all primary intentions or sub-modifiers include ambivalent versions
- Strategic sampling: ambivalence is captured where it is most relevant/realistic

**File Naming Pattern**:
- Single: `TPV1_B1_UTT1_S_Att_SP-Ronda.wav` (Primary only)
- Compound: `TPV1_B1_UTT1_S_Reci_affi_SP-Ronda.wav` (Primary + Sub-modifier)
- **Complex (Ambivalent)**: `TPV1_B1_UTT1_S_Reci_affi_ambivalex_SP-Ronda.wav` (Primary + Sub-modifier + Ambivalence)
#### Why This Matters Now

**Competitive Advantage**:

1. **Ecologically Valid**: Reflects real-world communication, where pure emotional/tonal states are rare
2. **Research Enabler**: Aims to support new research directions in tonal complexity
3. **AI Alignment**: Potentially necessary for fine-tuning AI systems to recognize the complexity of human communication, supporting better trust, attunement, and reciprocity
4. **Commercial Value**: Potential for high-stakes applications (e.g., customer service, healthcare, negotiation, autonomous systems) where detecting mixed signals is crucial
**Contrast with Existing Datasets**:

Most emotion/prosody datasets:
- Treat ambiguity as annotation disagreement (noise)
- Force annotators to choose a single dominant emotion
- Discard samples with mixed signals
- Aim for high inter-rater agreement (which requires ignoring complexity)

TonalityPrint:
- Treats ambivalence as signal (a feature)
- Explicitly marks competing tonal layers
- Aims to preserve samples with intentional mixed signals
- Uses a single expert annotator, who can capture nuance that multi-rater consensus would average out
**Research Applications Potentially Enabled by Functional Tonal Intent and Ambivalence Annotation**:

1. **Ambivalence Detection Models**: Precision-tuning classifiers to identify mixed/transitional tonal states
2. **Tonal Complexity Analysis**: Studying how competing prosodic signals interact acoustically
3. **Real-World Tonality Modeling**: Moving beyond pure categorical states to realistic mixed intentions
4. **Inference-Time Adaptation**: Enabling AI systems to recognize and respond appropriately to ambivalent human communication
5. **Emotional Granularity**: Investigating fine-grained affective states beyond basic emotion categories
6. **Trust & Safety**: Detecting uncertainty or hesitation in otherwise confident-sounding speech (e.g., hallucination detection, safety-critical systems, "soft refusals")
7. **Human-Robot Interaction**: Enabling social robots to recognize and navigate complex human tonal states
8. **Clinical Applications**: Studying ambivalence in therapeutic contexts (e.g., motivational interviewing, trauma recovery)
**Empirical Grounding**:

The ambivalence annotation methodology is grounded in Ronda's observation of **8,873+ real-world customer interactions**, in which:
- Mixed tonal signals frequently occurred in high-stakes conversations
- Ambivalent tonality potentially correlated with specific conversational outcomes
- Customers may have responded differently to pure vs. ambivalent tonal states
- The ability to navigate tonal complexity may have been associated with successful interactions

This real-world foundation motivated annotating ambivalence to possibly reflect **authentic communication patterns** rather than artificial laboratory constructs.

---
#### Segment-Level Temporal Analysis

Each utterance includes time-aligned segment data with millisecond precision:
- **Segment Definition**: Typically the whole utterance as a single segment (most files)
- **Temporal Boundaries**: Start and end times recorded in milliseconds
- **Per-Segment Scoring**: All five tonality indices scored for each segment
- **Data Structure**: Stored as a JSON array with startTime, endTime, and the five index scores
- **Precision**: Millisecond-level timestamps enable fine-grained temporal analysis
- **Purpose**: Supports investigation of tonality dynamics within utterances

**Example Segment Data**:
```json
[{
  "startTime": 0,
  "endTime": 4284.083333333333,
  "trust": 75,
  "reciprocity": 93,
  "empathy": 76,
  "cognitive": 96,
  "attention": 80
}]
```
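Segment records like the one above can be summarized in a few lines; a minimal sketch (the helper name is hypothetical, and the top-level structure is assumed to be a plain list of segment objects):

```python
import json

# The example segment record shown above, as a JSON string.
EXAMPLE = """[{
  "startTime": 0,
  "endTime": 4284.083333333333,
  "trust": 75, "reciprocity": 93, "empathy": 76,
  "cognitive": 96, "attention": 80
}]"""

def segment_durations_s(segment_json: str) -> list[float]:
    """Convert each segment's millisecond boundaries into a duration in seconds."""
    segments = json.loads(segment_json)
    return [(s["endTime"] - s["startTime"]) / 1000.0 for s in segments]

print(segment_durations_s(EXAMPLE))  # one segment of roughly 4.28 seconds
```

The same loop works on any of the 144 per-utterance JSON files once their segment arrays are extracted.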
#### Metadata Recording

For each utterance, the following metadata is captured:
- Utterance text (transcription)
- Utterance type (Statement/Question)
- Primary intention category
- Sub-modifier (if applicable)
- Ambivalence marker (if applicable)
- Temporal data (start, end, duration)
- Recording date and processing timestamp
- Annotator notes

---
#### Validation Procedures

#### 1. Proprietary Heuristic Audit (Primary QC)

**Blind Acoustic Validation**:
After initial annotation, all 144 samples underwent a blind, proprietary heuristic audit:
- Acoustic profiles were analyzed without access to practitioner labels
- The audit script evaluated spectral variance (pitch contour, energy dynamics, etc.)
- Samples were flagged when acoustic features diverged from the stated intention labels

**Audit Results**:
- **~80+% alignment rate**: Acoustic profiles matched the intended tonal intent categories
- **~18.05% re-recorded**: Samples with acoustic-intent divergence were re-recorded for consistency
- **Cross-intent patterns detected**: Cognitive Energy was systematically elevated across the corpus

**Resolution Process**:
- Flagged samples were reviewed by the practitioner
- If the acoustic profile did not support the intended tonality, the utterance was re-recorded
- Some divergences were retained as genuine ambivalence or tonal complexity
- All decisions are documented in the Notes field
#### 2. Cross-Batch Consistency Checks
- **Similar Intentions**: Compared across batches to ensure temporal stability
- **Baseline Stability**: Neutral samples verified as a consistent reference point
- **Index Relationships**: Internal consistency of the tonality indices reviewed
- **Pattern Recognition**: Systematic patterns (e.g., CE elevation) identified and documented

#### 3. Technical Validation

- **Audio Integrity**: All WAV files checked for corruption or artifacts
- **Metadata Completeness**: All 23 variables verified as present and valid
- **File Naming**: 100% compliance with the systematic convention
- **Temporal Alignment**: Segment timestamps validated against audio duration
- **JSON Structure**: Segment data verified for correct format and values
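The "Temporal Alignment" check can be sketched as a comparison between the last segment's `endTime` and the WAV file's actual duration. This is an illustrative reconstruction, not the dataset's actual QC script; the function name, paths, and tolerance are assumptions:

```python
import json
import wave

def check_temporal_alignment(wav_path: str, json_path: str,
                             tol_ms: float = 50.0) -> bool:
    """Verify that the last segment's endTime (milliseconds) roughly
    matches the WAV duration, within an assumed tolerance."""
    with wave.open(wav_path, "rb") as w:
        audio_ms = 1000.0 * w.getnframes() / w.getframerate()
    with open(json_path) as f:
        segments = json.load(f)  # assumed: a list of segment dicts
    return abs(segments[-1]["endTime"] - audio_ms) <= tol_ms
```

Running such a check over all 144 file pairs would reproduce the temporal-alignment pass described above.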
---
## Data Processing Pipeline

### 1. Recording Phase
```
Speaker Preparation → Audio Recording (48kHz WAV) → Quality Check → File Naming → Storage
```

### 2. Annotation Phase
```
Audio Analysis → Tonality Scoring → Segment Analysis → Metadata Entry → Quality Review
```

### 3. Export Phase
```
JSON Generation → CSV Conversion → Combined Dataset Creation → Documentation → Packaging
```

---
## Potential Reproducibility

### Materials Provided
- Complete audio recordings (WAV format)
- Full annotation data (JSON and CSV formats)
- Comprehensive codebook
- Detailed methodology documentation
- File naming conventions
- Version control information

### Replication Guidelines

To attempt to replicate this annotation approach:
1. Review the full TonalityPrint README on Zenodo
2. Train annotators in tonality perception and measurement
3. Use consistent recording equipment and environment
4. Follow the acoustic analysis protocols described above
5. Implement systematic quality control procedures

---
## **Ethical Framework**

* **Speaker Consent:** 100% of recordings are of the author (R. Polhill), with explicit informed consent for research use.
* **Biometric Integrity:** No synthetic voices, clones, or generative AI audio were used. The dataset is 100% human.
* **Deepfake Restriction:** Researchers are strictly prohibited from using this dataset to create unauthorized voice clones or deepfakes of the speaker.

## **Limitations and Considerations**

* **Single-Speaker:** While purposely controlled and specialized, results may not generalize across genders, accents, or cultures without further validation.
* **Observational Origin:** The correlation with conversion outcomes is observational and outcome-associated, not a controlled causal experiment.
* **Subjectivity:** Annotation relies on practitioner judgment and self-correction, which entails inherent subjective bias.

**Measurement Limitations:**
- **Subjective Elements:** Tonality scoring includes perceptual assessment by the expert annotator
- **Cognitive Energy Bias:** Systematic elevation documented and retained
- **Ambivalence Complexity:** Mixed-tonality utterances may require specialized analysis

**Quality Control and Systematic Bias Monitoring**

Known Issue - Cognitive Energy Index:
The expert annotator identified systematic elevation in Cognitive Energy scores across the dataset. This pattern was attributed to:
- Speaker's natural ecological style
- Lexical content choices
- Potential annotator perceptual bias

- **Decision:** These elevated scores were intentionally retained for transparency rather than artificially adjusted.
- **Documentation:** The per-utterance Notes field contains an explanation for affected utterances.

## **Questions and Contact**

For additional questions about methodology, annotation procedures, or data collection:

- See CODEBOOK.md for variable definitions
- See README.md for a dataset overview
- Contact the researcher for methodological inquiries
- See the detailed README available on Zenodo: https://doi.org/10.5281/zenodo.17913895

Version: 1.0
Last Updated: January 24, 2026
README.md
---
language:
- en
license: cc-by-nc-4.0
size_categories:
- n<1K
tags:
- prosody
- voice-dataset
- tonality
- ai-alignment
pretty_name: TonalityPrint Voice Dataset v1.0
---

# TonalityPrint Voice Dataset v1.0

[DOI: 10.5281/zenodo.17913895](https://doi.org/10.5281/zenodo.17913895)
[License: CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/)

**A Contrast-Structured Voice Dataset for Exploring Functional Tonal Intent, Ambivalence, and Inference-Time Prosodic Alignment**

---

## 📥 DOWNLOAD DATASET FILES

> **⚠️ This repository contains DOCUMENTATION ONLY.**
>
> **Download audio and annotation files from Zenodo (official source):**
> **https://doi.org/10.5281/zenodo.17913895**
>
> ### Why Zenodo?
> - ✅ Official DOI and citations
> - ✅ Permanent archival storage
> - ✅ Download statistics for grant reporting
> - ✅ Academic credibility
>
> See [DOWNLOAD_DATA.md](DOWNLOAD_DATA.md) for detailed instructions.

---
## Overview

TonalityPrint is a specialized single-speaker speech corpus designed to enable exploration of fine-tuning **functional tonal intents** in voice AI systems. Unlike emotion recognition datasets, TonalityPrint annotates functional tonal intents (what speakers *do* with tone), not just what they *feel*.

**Key Features:**
- **144 high-fidelity WAV files** (48kHz, 16-bit, mono, unprocessed)
- **18 unique utterances** across **8 parallel prosodic states**
- **5 functional tonal intents**: Trust, Attention, Reciprocity, Empathy Resonance, Cognitive Energy
- **Continuous intensity indices** (0-100 scale) for each intent
- **Ambivalence annotation** (a perceptual-entropy, cross-intent feature)
- **100% authentic human voice** with explicit consent
- **Single-speaker design** eliminates speaker variability for controlled analysis

**What This Dataset Is:**
- A precision-tuning resource for prosodic AI alignment research
- A controlled substrate for investigating functional tonal intent
- An experimental framework for ambivalence-aware dialogue systems
- A hypothesis-generating tool for human-AI voice calibration

**What This Dataset Is Not:**
- A general-purpose emotion recognition training corpus
- A multi-speaker dataset for population-level generalization
- A substitute for large-scale speech datasets
- A validated benchmark for production systems

---
## Dataset Composition

### Structure

```
TonalityPrint/
├── audio/                             # 144 WAV files
├── annotations/
│   ├── json/                          # 144 JSON files
│   ├── csv/                           # 144 CSV files
│   └── ALL_TONALITY_DATA_COMBINED.csv # Combined dataset
└── documentation/                     # Technical references
```
### Audio Specifications

| Specification | Value |
|--------------|-------|
| **Format** | WAV (uncompressed PCM) |
| **Sample Rate** | 48,000 Hz (48 kHz) |
| **Bit Depth** | 16-bit |
| **Channels** | Mono (1 channel) |
| **Duration per File** | 3-6 seconds |
| **Total Duration** | ~11 minutes 5 seconds |
| **Processing** | None (raw, unprocessed) |
| **Total Files** | 144 audio samples |
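These specifications can be verified per file with Python's standard-library `wave` module; a minimal sketch (the helper name and file path are placeholders):

```python
import wave

def verify_specs(wav_path: str) -> bool:
    """Check a file against the table above: 48 kHz, 16-bit, mono PCM WAV."""
    with wave.open(wav_path, "rb") as w:
        return (
            w.getframerate() == 48_000
            and w.getsampwidth() == 2   # 16-bit = 2 bytes per sample
            and w.getnchannels() == 1   # mono
        )
```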
### Fixed-Phrase Octet Design

The dataset uses a **Fixed-Phrase Octet** structure: 18 utterances × 8 parallel prosodic states.

Each utterance is recorded in:
1. **Baseline/Neutral** (control sample)
2. **Trust** (Trus) - conveying reliability and credibility
3. **Attention** (Att) - directing focus and engagement
4. **Reciprocity** (Reci) - expressing mutual exchange
5. **Empathy Resonance** (Emre) - demonstrating empathetic connection
6. **Cognitive Energy** (Cogen) - showing mental engagement
7. **Sub-modified variants** (e.g., Trust + Calm)
8. **Ambivalence variants** (optional cross-intent complexity)

This design enables:
- **Differential Latent Analysis (DLA)**: Isolate prosodic features while holding lexical content constant
- **Contrastive learning**: Compare prosodic variations across identical text
- **Intent vector extraction**: Model functional intent as steerable features
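As a sketch of how the octet structure supports contrastive pairing, files can be grouped by utterance ID so that each group holds identical text under different prosodic states (field positions follow the naming convention documented elsewhere in this repository; the helper is hypothetical):

```python
from collections import defaultdict
from pathlib import Path

def contrastive_groups(filenames: list[str]) -> dict[str, list[str]]:
    """Group TonalityPrint filenames by utterance ID, so each group
    contains the same lexical content under different prosodic states."""
    groups: dict[str, list[str]] = defaultdict(list)
    for name in filenames:
        parts = Path(name).stem.split("_")
        utt_id = parts[2]  # e.g. "UTT1" in TPV1_B1_UTT1_S_Att_SP-Ronda
        groups[utt_id].append(name)
    return dict(groups)

files = [
    "TPV1_B1_UTT1_S_Att_SP-Ronda.wav",
    "TPV1_B1_UTT1_S_Trus_SP-Ronda.wav",
    "TPV1_B1_UTT2_S_Reci_affi_SP-Ronda.wav",
]
print(contrastive_groups(files))  # UTT1 holds two prosodic states, UTT2 one
```

Pairs drawn within each group differ only in prosody, which is the property DLA and contrastive learning rely on.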
---
## Controlled Semantic Design

### Functional Tonal Intents (Not Emotions)

TonalityPrint distinguishes between **functional intent** and **affective state**:

| Functional Intent | What It Does | Not The Same As |
|------------------|--------------|-----------------|
| **Trust** | Establishes credibility, reliability | "Happiness" or "Confidence" |
| **Attention** | Directs focus, maintains engagement | "Excitement" or "Urgency" |
| **Reciprocity** | Invites response, balances exchange | "Friendliness" or "Agreement" |
| **Empathy Resonance** | Attunes to listener state | "Sympathy" or "Sadness" |
| **Cognitive Energy** | Signals mental activation | "Enthusiasm" or "Anxiety" |

**Why This Matters:**
- Traditional emotion datasets label *what speakers feel*
- TonalityPrint annotates *what speakers do with their voice*
- This functional framing aligns with conversational AI goals

### Ambivalence as Feature (Not Noise)

Unlike traditional datasets that discard mixed signals as annotation errors, TonalityPrint systematically annotates **ambivalence** (`ambivalex`) as:
- A perceptual-entropy transitional state
- A cross-intent feature where competing tonal cues co-occur
- An essential signal for real-world inference-time alignment

**Example Applications:**
- Detecting when AI should express uncertainty
- Modeling tonal complexity in high-stakes interactions
- Training systems to navigate mixed emotional states

---
## Annotation Methodology

### Expert Practitioner Annotation

**Annotator:** Ronda Polhill (speaker and dataset creator)
**Method:** Expert perceptual assessment combined with acoustic analysis
**Expertise:** 8,873+ high-stakes customer interactions (observational context, not causal proof)

### Continuous Indices (0-100 Scale)

Each utterance includes five tonality indices:

| Index | Abbreviation | Interpretation |
|-------|--------------|----------------|
| **Trust** | TR | 0-30: Low/Minimal, 31-60: Moderate, 61-85: High, 86-100: Very High |
| **Attention** | AT | Perceptual score of attentional focus |
| **Reciprocity** | RE | Perceptual score of collaborative tone |
| **Empathy Resonance** | ER | Perceptual score of empathetic attunement |
| **Cognitive Energy** | CE | Perceptual score of mental activation |

**Important:** These are annotator perceptual scores, not empirically validated scales.
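The Trust interpretation bands in the table can be expressed as a simple lookup; a minimal sketch (the function is illustrative, and since the table defines these bands only for Trust, applying them to the other four indices would be an assumption):

```python
def trust_band(score: int) -> str:
    """Map a 0-100 Trust index to the interpretation bands from the table."""
    if not 0 <= score <= 100:
        raise ValueError("index scores are on a 0-100 scale")
    if score <= 30:
        return "Low/Minimal"
    if score <= 60:
        return "Moderate"
    if score <= 85:
        return "High"
    return "Very High"

print(trust_band(75))  # the example segment's trust score of 75 -> "High"
```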
### Quality Control

- **Proprietary heuristic audit**: ~80+% acoustic-intent alignment verified
- **Re-recording rate**: ~18.05% of the corpus re-recorded for consistency
- **Known bias**: Cognitive Energy shows systematic elevation (documented and retained)

---
## Intended Use

### Primary Applications

1. **Inference-Time Prosodic Alignment**
   - Fine-tuning reasoning-based voice models
   - Aligning model confidence with vocal uncertainty
   - Calibrating trust signals in AI responses

2. **Differential Latent Analysis**
   - Extracting tonal intent vectors (analogous to LLM activation steering)
   - Contrastive learning with fixed lexical content
   - Isolating prosodic features from semantic content

3. **Ambivalence-Aware Systems**
   - Training dialogue systems to detect mixed signals
   - Modeling uncertainty in safety-critical applications
   - Navigating tonal complexity in nuanced interactions

4. **Style-Conditioned Synthesis**
   - Controlling prosodic style in TTS systems
   - Evaluating voice quality metrics
   - Transfer learning for expressive speech

5. **Human-AI Voice Calibration**
   - Investigating "AI-adjacent yet trusted" vocal profiles
   - Studying uncanny valley effects in voice
   - Exploring voice-appearance synchrony in embodied AI

### Appropriate Use Cases

- Academic research on prosody and speech synthesis
- Architectural development for voice AI systems
- Feature extraction and transfer learning experiments
- Controlled validation studies
- Exploratory analysis of functional tonal intent

### Non-Intended Uses

**Do NOT use for:**
- Population-level emotion recognition (single speaker only)
- Production deployment without multi-speaker validation
- Creating unauthorized voice clones or deepfakes of the speaker
- Commercial applications without licensing (CC BY-NC 4.0)
- Generalizing findings beyond this specific speaker profile

---
|
| 241 |
+
|
| 242 |
+
## Known Biases and Limitations

### Single-Speaker Constraint

- **All 144 files are from the same speaker** (Ronda Polhill)
- Findings may not generalize across:
  - Genders
  - Ages
  - Accents
  - Cultures
  - Languages
- Multi-speaker validation is required for broader applicability

### Cognitive Energy Systematic Bias

**Known Issue:** The Cognitive Energy Index shows systematic elevation across the corpus.

**Possible Causes:**
- The speaker's natural ecological style (high-energy delivery)
- Lexical content effects
- Practitioner annotation bias

**Resolution:** The elevation is intentionally retained for transparency. Researchers should account for this bias in their analyses.

**Impact:**
- May affect the Trust and Empathy Resonance indices
- Suggests a need for speaker-specific normalization
- Does not invalidate other tonality measures

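Speaker-specific normalization can be as simple as z-scoring each index within this corpus, so analyses compare relative rather than absolute energy. A minimal sketch with toy values (in practice, load the `Cognitive_Energy_Index` column from the combined CSV):

```python
import pandas as pd

# Toy index values standing in for the real Cognitive_Energy_Index
# column of ALL_TONALITY_DATA_COMBINED.csv.
scores = pd.Series([78.0, 82.0, 90.0, 74.0, 86.0],
                   name="Cognitive_Energy_Index")

# Within-speaker z-score: subtract the corpus mean and divide by the
# population standard deviation, removing the systematic elevation.
z = (scores - scores.mean()) / scores.std(ddof=0)
```

After normalization the index is centered at zero with unit variance, which keeps the elevated baseline from leaking into downstream comparisons with the Trust or Empathy Resonance indices.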
### Controlled Environment

- Professional studio recordings (not naturalistic)
- Scripted content (not spontaneous speech)
- May not reflect real-world acoustic conditions
- Single recording period (December 2025 - January 2026)

### Observational Context (Not Causal Proof)

The annotation methodology references 8,873+ customer interactions with observed correlations:

- ~35.85% average conversion rate (observational metric)
- 68 spontaneous reports of "AI-adjacent" voice quality alongside high trust ratings

**Critical Caveat:** These are observational correlations, not causal relationships, and multiple confounding variables are present. They are provided as hypothesis-generating context only.

### Annotation Subjectivity

- Continuous indices are perceptual scores, not validated scales
- Single annotator (no inter-rater reliability)
- Ambivalence definitions may require field-specific interpretation
---

## Ethical Considerations

### Speaker Consent and Biometric Integrity

- **100% human recordings** by the author (Ronda Polhill)
- Explicit informed consent for recording, annotation, and public release
- No synthetic voices, clones, or generative AI audio
- Speaker demographics: mid-life female, native English speaker

### Prohibited Uses

**Researchers are strictly prohibited from:**

- Creating unauthorized voice clones of the speaker
- Generating deepfakes using this dataset
- Using recordings for deceptive purposes
- Violating the CC BY-NC 4.0 license terms

### Responsible Use Guidelines

- Acknowledge the single-speaker limitation in publications
- Do not make population-level claims
- Report systematic biases when using the dataset
- Obtain a commercial license for non-academic use
- Cite the dataset properly (see [Citation](#citation))
---

## Quick Start

### 1. Download Dataset

```bash
# Download from Zenodo
wget https://zenodo.org/records/17913895/files/DATACARD.zip
unzip DATACARD.zip
```

### 2. Load Annotations (Python)

```python
import json

import pandas as pd

# Load the combined annotation CSV
df = pd.read_csv('annotations/ALL_TONALITY_DATA_COMBINED.csv')

# Parse the JSON-encoded segment-level data into Python objects
df['Segments_Parsed'] = df['Segments'].apply(json.loads)

# Filter by primary intention or ambivalence flag
trust_samples = df[df['Primary_Intention'] == 'Trust']
ambivalent_samples = df[df['Ambivalex'] == 'ambivalex']
```

### 3. Access Audio Files

```python
import librosa

# Load an audio file at its native 48 kHz sample rate
audio_path = 'audio/TPV1_B1_UTT1_S_Att_SP-Ronda.wav'
audio, sr = librosa.load(audio_path, sr=48000, mono=True)

# Extract MFCC features
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
```

### 4. Explore Tonality Indices

```python
# Compare mean Trust scores across utterances
trust_scores = df.groupby('Utterance_Number')['Trust_Index'].mean()

# Summarize the Cognitive Energy bias by primary intention
ce_by_intent = df.groupby('Primary_Intention')['Cognitive_Energy_Index'].describe()
```
---

## File Structure Summary

### Documentation Files

| File | Description |
|------|-------------|
| `README.md` | This file - dataset overview and usage |
| `DATASET_CARD.md` | Comprehensive ML dataset card |
| `CODEBOOK.md` | Variable definitions and file naming |
| `METHODOLOGY.md` | Data collection and annotation procedures |
| `CITATION.cff` | Machine-readable citation metadata |
| `LICENSE` | CC BY-NC 4.0 license text |
| `ETHICAL_USE_AND_LIMITATIONS.md` | Ethical guidelines and constraints |
| `QUICK_START.txt` | 4-step quick start guide |
| `MANIFEST.txt` | Complete file inventory |

### Annotation Files (289 total)

- **144 JSON files**: Original annotations with full metadata
- **144 CSV files**: Tabular format (23 columns)
- **1 combined CSV**: `ALL_TONALITY_DATA_COMBINED.csv` (all 144 rows)

### Audio Files (144 total)

**File Naming Convention:**

```
TPV1_[Batch]_[Utterance]_[Type]_[Intent]_[Modifier]_[Ambivalex]_SP-Ronda.wav
```

**Examples:**

- `TPV1_B1_UTT1_S_Att_SP-Ronda.wav` (Single - Attention only)
- `TPV1_B1_UTT1_S_Reci_affi_SP-Ronda.wav` (Compound - Reciprocity + Affirming)
- `TPV1_B1_UTT1_S_Reci_affi_ambivalex_SP-Ronda.wav` (Complex - with Ambivalence)
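The naming convention can be parsed programmatically. The regex below is a sketch inferred only from the convention string and the three examples shown here, so edge cases may need adjustment (for instance, a file with `ambivalex` but no modifier would be captured in the `modifier` group):

```python
import re

# Pattern derived from the naming convention above; the Modifier and
# Ambivalex segments are optional, matching the examples.
PATTERN = re.compile(
    r"TPV1_(?P<batch>B\d+)_(?P<utterance>UTT\d+)_(?P<type>[A-Z])"
    r"_(?P<intent>[A-Z][a-z]+)"
    r"(?:_(?P<modifier>[a-z]+))?"
    r"(?:_(?P<ambivalex>ambivalex))?"
    r"_SP-Ronda\.wav"
)

def parse_name(filename: str):
    """Return the filename components as a dict, or None on mismatch."""
    m = PATTERN.fullmatch(filename)
    return m.groupdict() if m else None
```

A quick check on the complex example yields `{"batch": "B1", "utterance": "UTT1", "type": "S", "intent": "Reci", "modifier": "affi", "ambivalex": "ambivalex"}`, with absent optional segments reported as `None`.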
---

## Citation

### BibTeX

```bibtex
@dataset{polhill_2026_tonalityprint,
  author    = {Polhill, Ronda},
  title     = {TonalityPrint: A Contrast-Structured Voice Dataset
               for Exploring Functional Tonal Intent, Ambivalence,
               and Inference-Time Prosodic Alignment v1.0},
  year      = 2026,
  publisher = {Zenodo},
  version   = {1.0.0},
  doi       = {10.5281/zenodo.17913895},
  url       = {https://doi.org/10.5281/zenodo.17913895}
}
```

### APA

Polhill, R. (2026). *TonalityPrint: A Contrast-Structured Voice Dataset for Exploring Functional Tonal Intent, Ambivalence, and Inference-Time Prosodic Alignment v1.0* [Data set]. Zenodo. https://doi.org/10.5281/zenodo.17913895

### Related Work

**Supplement to:** Polhill, R. (2025). *Tonality as Attention* [White paper]. Zenodo. https://doi.org/10.5281/zenodo.17410581
---

## Contact and Licensing

**Dataset Curator:** Ronda Polhill
**Email:** ronda@TonalityPrint.com
**Website:** https://TonalityPrint.com

**License:** CC BY-NC 4.0 (non-commercial use)
**Commercial Licensing:** Contact ronda@TonalityPrint.com

**For Questions About:**

- Dataset usage → this README or QUICK_START.txt
- Variable definitions → CODEBOOK.md
- Methodology → METHODOLOGY.md
- Ethical use → ETHICAL_USE_AND_LIMITATIONS.md
- Technical issues → ronda@TonalityPrint.com

---

## Version Information

**Version:** 1.0.0
**Release Date:** January 24, 2026
**DOI:** https://doi.org/10.5281/zenodo.17913895
**Last Updated:** January 24, 2026

---

## Acknowledgments

This work emerges from independent practitioner research conducted without institutional funding and is released for academic research use under CC BY-NC 4.0.

TonalityPrint aims to address a critical gap in voice AI training data by moving beyond discrete emotion recognition to capture functional tonal intent, including ambivalent prosodic signals as essential nuances for inference-time alignment.

---

**© 2026 Ronda Polhill | Licensed under CC BY-NC 4.0**
continuous_indices.txt
ADDED
TonalityPrint Voice Dataset v1.0

Continuous Intensity Rating

Primary Functional Tonal Intent Continuous Intensity Rating guidelines for Trust, Attention, Cognitive Energy, Empathy Resonance, and Reciprocity:
* 0-30: Low/minimal presence of intent quality
* 31-60: Moderate presence
* 61-85: High presence
* 86-100: Very high/exemplary presence

These are annotator perceptual scores, not empirically validated scales.

______________________________________________

Primary Functional Tonal Intent Continuous Intensity Rating name key:
* Trust: TR
* Attention: AT
* Cognitive Energy: CE
* Empathy Resonance: ER
* Reciprocity: RE

NOTE: Please see the Zenodo DOI README for Empirical Grounding & Exploratory Provenance details.
scripts.txt
ADDED
TonalityPrint v1 Voice Dataset

Script of 18 Utterances

Utterance 1: I want to make sure I understand what you need
Utterance 2: Just let me know where you’d like to start, and we’ll go from there
Utterance 3: We can take this one step at a time - whatever works best for you
Utterance 4: Allow me to walk you through the options we have available
Utterance 5: That's a great question - here's what I would recommend
Utterance 6: Would you like me to explain how this works?
Utterance 7: If now isn’t the best time, I can follow up later. Whatever is easiest for you
Utterance 8: Just to confirm, we are focusing on planning today. Is that correct?
Utterance 9: Thank you for sharing that. Let’s take a look at your options together
Utterance 10: That makes a lot of sense. Let's proceed whenever you're ready
Utterance 11: I will go ahead and log this in the system for future reference
Utterance 12: Let’s take a closer look at the details to make sure the systems are synchronized
Utterance 13: Is there anything else that I can clarify for you?
Utterance 14: This direction is flexible, and can adjust as your needs evolve
Utterance 15: Does this option still make sense for you so far?
Utterance 16: I will help you, but this feels risky
Utterance 17: Sure, I’m in… unless it all goes wrong
Utterance 18: I’m excited, but this also may fail
speaker_profile.txt
ADDED
TonalityPrint v1 Voice Dataset
Speaker Profile and Demographics

Speaker Information
* Age: Mid-life
* Gender: Female
* Linguistic Background: Native English speaker with a neutral, mobile accent (Northeastern US baseline, influenced by residency in Okinawa, Las Vegas, Seattle, and Phoenix)
* Vocal Characteristics: Noted for a balanced dynamic attention range and tonal precision while maintaining human warmth and interpersonal effectiveness
* Distinctive Quality: The speaker's voice may represent a rare profile bridging computational precision and human relational warmth, potentially making it useful for human-AI voice alignment research investigating the 'activation' cadence or the 'AI-adjacent yet trusted' anomaly
* Professional Context: During the dataset development period, the speaker maintained customer-facing dialogue work in a high-volume, high-stakes service environment. The speaker's adaptive tonal modulation correlated (but was not shown to be causal) with top-tier performance metrics and with spontaneous episodes of listeners describing the speaker's voice tonality as 'AI-adjacent' while simultaneously rating interactions as highly positive. These observations emerge from naturalistic practice and are presented as hypothesis-generating rather than hypothesis-confirming; the documented associations between tonal patterns and outcomes warrant controlled investigation but should not be interpreted as established causal relationships (as described in Empirical Grounding & Exploratory Provenance).

NOTE: Please see the Zenodo DOI README for Empirical Grounding & Exploratory Provenance details.
tech_specs.txt
ADDED
TonalityPrint v1 Voice Dataset
Acoustic Specifications
Technical Audio Recording Information

* Recording Equipment: Audacity and a Blue Yeti microphone (cardioid mode, ~6-8" distance)
* Format: 48 kHz / 32-bit float WAV (mono), recorded in Audacity. Audio was captured with real-time effects disabled to preserve the original voice tonality signal. All recordings use consistent preset settings.
* Recording Environment: Controlled home studio with minimal ambient noise; speaker seated; consistent conditions across all recordings
* No Post-Processing: To preserve the variance of 100% human tonality, this dataset intentionally provides raw, unprocessed audio files without post-processing (e.g., noise reduction, normalization, filtering, or EQ). Minimal background noise may therefore be present; nuanced vocal tonality is left unaltered to support maximum fidelity for analyzing micro-tonal expression. No other transformative effects were used.
* Total Files: 144 audio samples
* Duration Range: Approximately 3-6 seconds per audio sample
* Total Duration: Approximately 11 minutes 5 seconds
* Recording Dates: December 2025 - January 2026
transcripts.txt
ADDED
TonalityPrint v1 Voice Dataset

Transcript of 18 Utterances

Utterance 1: I want to make sure I understand what you need
Utterance 2: Just let me know where you would like to start, and we’ll go from there
Utterance 3: We can take this one step at a time - whatever works best for you
Utterance 4: Allow me to walk you through the options we have available
Utterance 5: That's a great question - here's what I would recommend
Utterance 6: Would you like me to explain how this works?
Utterance 7: If now isn’t the best time, I can follow up later. Whatever is easiest for you
Utterance 8: Just to confirm, we are focusing on planning today. Is that correct?
Utterance 9: Thank you for sharing that. Let’s take a look at your options together
Utterance 10: That makes a lot of sense. Let's proceed whenever you're ready
Utterance 11: I will go ahead and log this in the system for future reference
Utterance 12: Let’s take a closer look at the details to make sure the systems are synchronized
Utterance 13: Is there anything else that I can clarify for you?
Utterance 14: This direction is flexible, and can adjust as your needs evolve
Utterance 15: Does this option still make sense for you so far?
Utterance 16: I will help you, but this feels risky
Utterance 17: Sure, I’m in… unless it all goes wrong
Utterance 18: I’m excited, but this also may fail