PhoenixAxis committed
Commit 83c7de1 · verified · 1 Parent(s): a93d6b6

Update README.md

Files changed (1): README.md (+4 −6)
README.md CHANGED
@@ -8,14 +8,12 @@ configs:
   - split: only_gender_reliable
     path: metadata.csv
 ---
-# Introduction
-## Motivation
-
-Our objective is to initiate a project that utilizes a hybrid verification methodology—incorporating both automated processes and manual auditing—to curate and refine existing datasets, thereby constructing a premium-quality reference audio corpus for Text-to-Speech (TTS) system development.
-
-## Progress and Plans
-
-We have completed initial collection and filtering of existing datasets, with planned expansion through additional data acquisition to enhance dataset diversity and coverage.
+
+# Personal Hub: Exploring High-Expressiveness Speech Data through Spatio-Temporal Feature Integration and Model Fine-Tuning
+
+# Introduction
+In this work, we present Personal Hub, a novel framework for mining and utilizing high-expressivity speech data by integrating spatio-temporal context with combinatorial attribute control. At the core of our approach lies a Speech Attribute Matrix, which enables annotators to systematically combine speaker-related features such as age, gender, emotion, accent, and environment with temporal metadata to curate speech samples with varied and rich expressive characteristics.
+Based on this matrix-driven data collection paradigm, we construct a multi-level expressivity dataset, categorized into three tiers according to the diversity and complexity of attribute combinations. We then investigate the benefits of this curated data through two lines of model fine-tuning: (1) automatic speech recognition (ASR) models, where we demonstrate that incorporating high-expressivity data accelerates convergence and enhances learned acoustic representations, and (2) large end-to-end speech models, where both human and model-based evaluations reveal improved interactional and expressive capabilities post-finetuning. Our results underscore the potential of high-expressivity speech datasets in enhancing both task-specific performance and the overall communicative competence of speech AI systems.
 
 # Method
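The added introduction describes a Speech Attribute Matrix whose cells are combinations of speaker-related attributes and temporal metadata. As a rough illustration of that combinatorial idea, the sketch below enumerates attribute combinations with the standard library; the attribute names and values are hypothetical placeholders, not the dataset's actual schema.

```python
from itertools import product

# Hypothetical attribute axes for a Speech Attribute Matrix.
# These values are illustrative only; the real dataset's schema may differ.
ATTRIBUTES = {
    "age": ["child", "adult", "senior"],
    "gender": ["female", "male"],
    "emotion": ["neutral", "happy", "angry"],
    "accent": ["native", "non-native"],
    "environment": ["studio", "street"],
    "time_of_day": ["morning", "evening"],  # temporal metadata axis
}

def attribute_matrix(attrs):
    """Enumerate every attribute combination as a dict (one matrix cell)."""
    keys = list(attrs)
    return [dict(zip(keys, values))
            for values in product(*(attrs[k] for k in keys))]

cells = attribute_matrix(ATTRIBUTES)
print(len(cells))  # 3 * 2 * 3 * 2 * 2 * 2 = 144 combinations
```

Annotators could then target samples for each cell, and a tiered dataset falls out naturally by restricting how many axes vary at once.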