%YAML 1.1
%TAG !u! tag:unity3d.com,2011:
--- !u!114 &11400000
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 0}
m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 11500000, guid: 4510294d23d964fe59443526f1ca7c4b, type: 3}
m_Name: Lip Sync
m_EditorClassIdentifier:
m_displayName: Lip Sync
m_hierarchyName: Lip Sync
m_context: {fileID: 11400000, guid: ccf5a8acd0168a6498a97e53cfd28d6a, type: 2}
m_markdownFile: {fileID: 0}
m_priority: 1012
m_overrideMarkdownText: "# North Star\u2019s Implementation of ULipSync\n\nTo support
fully voiced NPC dialogue, NorthStar needed an efficient lip-syncing solution
that minimized reliance on animators. The team chose [ULipSync](https://github.com/hecomi/uLipSync)
for its familiarity, strong narrative control, customization, and ease of use.\n\n##
ULipSync Setup\n\nULipSync can process lip-sync data in three ways:\n\n1. **Runtime
processing** \u2013 Analyzes audio on the fly, at an ongoing CPU cost.\n2. **Baking
into scriptable objects** \u2013 Pre-computes analysis data for reuse at runtime.\n3.
**Baking into animation clips** \u2013 Produces clips that drop directly into timelines.\n\nDue
to CPU constraints and the project\u2019s narrative
timelines, baking data into animation clips was the best approach.\n\n![](./Images/LipSync/Fig2.png)\n\n##
Phoneme Sampling & Viseme Groups\n\nULipSync maps phonemes (the smallest units
of speech sound) to viseme groups (facial animation controls).\n\n- English has **44 phonemes**,
but not all are needed for lip-syncing.\n- **Plosive sounds** (e.g., \"P\" or
\"K\") are hard to calibrate and may not impact the final animation significantly.\n-
Stylized models need fewer viseme groups than realistic ones, sometimes only
vowels.\n\nTo ensure flexibility, we recorded all 44 phonemes for each voice
actor, allowing system refinement later.\n\n![](./Images/LipSync/Fig0.png)\n\n##
Challenges in Phoneme Sampling\n\nNot all phonemes were sampled perfectly. Issues
included:\n\n- Regression effects, where certain phonemes worsened results.\n-
Lack of matching viseme groups, making some phonemes irrelevant.\n- Volume inconsistencies,
causing some sounds to be too quiet for accurate sampling.\n\nTo improve accuracy,
we documented problematic phonemes for future improvements and considered additional
recordings.\n\n## Ensuring Realistic Lip-Sync\n\nAutomated lip-syncing often
results in excessive mouth openness. Realistic speech involves frequent mouth
closures. To address this:\n\n- We referenced real-life speech patterns.\n- Animators
provided feedback to refine mouth movement accuracy.\n\n## Final Implementation\n\nEach
voice line was baked with a pre-calibrated sample array, storing blend shape
weights per NPC. This per-character approach worked due to a limited NPC count,
but a more generalized system is needed for larger projects.\n\n![](./Images/LipSync/Fig1.png)\n\n###
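Illustrative Blend Shape Sketch\n\nThe baked data described above ultimately drives blend shape weights on each NPC's face mesh. As a rough illustration only (the component name, viseme labels, and event wiring here are assumptions, not the project's actual NpcController code), a uLipSync callback that applies per-phoneme ratios to a SkinnedMeshRenderer could look like this:\n\n```csharp\n// Illustrative sketch; names and wiring are assumptions, not NorthStar code.\nusing System.Collections.Generic;\nusing UnityEngine;\n\npublic class BakedVisemeDriver : MonoBehaviour\n{\n    [SerializeField] private SkinnedMeshRenderer face;\n    private readonly Dictionary<string, int> visemeIndices = new();\n\n    private void Start()\n    {\n        // Cache blend shape indices for each calibrated phoneme label.\n        foreach (var label in new[] { \"A\", \"I\", \"U\", \"E\", \"O\" })\n            visemeIndices[label] = face.sharedMesh.GetBlendShapeIndex(label);\n    }\n\n    // Wired to uLipSync's OnLipSyncUpdate event (runtime or baked playback).\n    public void OnLipSyncUpdate(uLipSync.LipSyncInfo info)\n    {\n        foreach (var pair in info.phonemeRatios)\n            if (visemeIndices.TryGetValue(pair.Key, out var i) && i >= 0)\n                face.SetBlendShapeWeight(i, pair.Value * info.volume * 100f);\n    }\n}\n```\n\nA generalized version for a larger cast would replace the hard-coded labels with per-character viseme profiles.\n\n###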
Relevant Files\n\n- [NpcController.cs](../Assets/NorthStar/Scripts/NPC/NpcController.cs)\n-
[NpcRigController.cs](../Assets/NorthStar/Scripts/NPC/NpcRigController.cs)\n"
m_overrideMarkdownRoot: .\Documentation/