PhoenixAxis committed on
Commit a8d3c06 · verified · 1 Parent(s): 50840df

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -12,7 +12,7 @@ configs:
 ---
 
 # Personal Hub: Exploring High-Expressiveness Speech Data through Spatio-Temporal Feature Integration and Model Fine-Tuning
-
+![Logo](personal.drawio.svg)
 # Introduction
 In this work, we present Personal Hub, a novel framework for mining and utilizing high-expressivity speech data by integrating spatio-temporal context with combinatorial attribute control. At the core of our approach lies a Speech Attribute Matrix, which enables annotators to systematically combine speaker-related features such as age, gender, emotion, accent, and environment with temporal metadata to curate speech samples with varied and rich expressive characteristics.
 Based on this matrix-driven data collection paradigm, we construct a multi-level expressivity dataset, categorized into three tiers according to the diversity and complexity of attribute combinations. We then investigate the benefits of this curated data through two lines of model fine-tuning: (1) automatic speech recognition (ASR) models, where we demonstrate that incorporating high-expressivity data accelerates convergence and enhances learned acoustic representations, and (2) large end-to-end speech models, where both human and model-based evaluations reveal improved interactional and expressive capabilities post-finetuning. Our results underscore the potential of high-expressivity speech datasets in enhancing both task-specific performance and the overall communicative competence of speech AI systems.
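The Speech Attribute Matrix described above pairs speaker-related attributes with temporal metadata to define expressive speech samples. A minimal sketch of what one cell of such a matrix might look like is shown below; the class name, field names, and the tier heuristic are all illustrative assumptions, not the schema actually used by Personal Hub.

```python
from dataclasses import dataclass

# Hypothetical sketch of one cell in a Speech Attribute Matrix:
# speaker-related attributes combined with temporal metadata.
# Field names and tier logic are illustrative assumptions only.
@dataclass
class SpeechAttributeCell:
    age_group: str    # e.g. "child", "adult", "senior"
    gender: str
    emotion: str      # e.g. "neutral", "happy", "angry"
    accent: str       # e.g. "neutral", "southern"
    environment: str  # e.g. "none", "studio", "street"
    timestamp: str    # temporal metadata attached to the sample

    def tier(self) -> int:
        """Assign an expressivity tier (1..3) by counting non-default
        attributes -- an invented proxy for the 'diversity and complexity
        of attribute combinations' mentioned in the README."""
        varied = sum(
            v not in ("neutral", "none")
            for v in (self.emotion, self.accent, self.environment)
        )
        return min(3, 1 + varied)

# Example: two varied attributes (emotion, environment) land in tier 3.
cell = SpeechAttributeCell(
    age_group="adult", gender="female", emotion="happy",
    accent="neutral", environment="studio",
    timestamp="2024-01-01T00:00:00",
)
print(cell.tier())
```

A real annotation pipeline would presumably enumerate the cross-product of allowed attribute values and assign samples to cells, but that detail is not specified in this README.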