Face Capture
Effects & Audio

Facial animation capture: ARKit, MetaHuman Animator, Live Link streaming, runtime lip sync, and audio-driven lip sync systems.

/skill face-capture

What This Skill Does
Face Capture helps you drive facial animation on UE5 characters, particularly MetaHumans, using capture devices, Live Link streaming, and runtime lip sync systems. This skill covers the complete facial animation pipeline: real-time iPhone ARKit capture through the Live Link Face app, offline video-based capture with MetaHuman Animator, and audio-driven lip sync that uses OVR Lip Sync for viseme generation. You will learn how to map ARKit blend shapes to morph targets, record performances into Sequencer, calibrate neutral poses, and set up custom face mapping for non-MetaHuman characters.
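To make the blend-shape-mapping step concrete, here is a minimal Python sketch of remapping one frame of ARKit curve values onto a custom character's morph target names. The morph target names and the `remap_frame` helper are hypothetical, not part of any UE5 API; in the engine you would apply the resulting weights via `USkeletalMeshComponent::SetMorphTarget` or an Anim Blueprint.

```python
# Partial mapping from ARKit's 52 standard blend shape names (these curve
# names are Apple's real identifiers) to hypothetical custom morph targets
# on a non-MetaHuman character.
ARKIT_TO_MORPH = {
    "jawOpen": "Mouth_Open",
    "eyeBlinkLeft": "Blink_L",
    "eyeBlinkRight": "Blink_R",
    "mouthSmileLeft": "Smile_L",
    "mouthSmileRight": "Smile_R",
}

def remap_frame(arkit_values, mapping=ARKIT_TO_MORPH):
    """Translate one frame of ARKit curves into morph target weights."""
    out = {}
    for shape, weight in arkit_values.items():
        target = mapping.get(shape)
        if target is not None:
            # ARKit delivers 0.0-1.0; clamp in case of noisy or scaled input.
            out[target] = max(0.0, min(1.0, weight))
    return out
```

Shapes without a mapping entry are simply dropped, which is the usual behavior when a custom rig only implements a subset of the 52 ARKit shapes.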
Covers
- Live Link Face app setup for ARKit-based iPhone and iPad face capture
- Live Link plugin configuration and source connection
- ARKit blend shape mapping to UE5 morph targets (52 standard shapes)
- MetaHuman Animator workflow for video-based facial capture
- Audio-driven lip sync using OVR Lip Sync and runtime viseme analysis
- Recording and playback of face capture data in Sequencer
- Calibration: neutral pose, range of motion, and retargeting
- Custom character face mapping for non-MetaHuman rigs
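The calibration item above (neutral pose and range of motion) reduces to simple per-shape math: subtract the value captured while the performer holds a neutral face, then rescale by the maximum value observed during a range-of-motion pass. This is an illustrative sketch under those assumptions; the `calibrate` helper is hypothetical, not an engine function.

```python
def calibrate(raw, neutral, range_of_motion, eps=1e-6):
    """Normalize a raw blend shape value against a performer's calibration.

    raw             -- the live captured value for one shape (0.0-1.0)
    neutral         -- the value captured during the neutral-pose hold
    range_of_motion -- the peak value captured during the range-of-motion pass
    """
    span = max(range_of_motion - neutral, eps)  # avoid divide-by-zero
    value = (raw - neutral) / span
    return max(0.0, min(1.0, value))  # clamp back into morph target range
```

For example, if a performer's resting `jawOpen` reads 0.1 and their widest open reads 1.0, a live value of 0.55 calibrates to 0.5, i.e. halfway through their actual range.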
Does Not Cover
- MetaHuman import and LOD setup → MetaHuman Setup
- Body motion capture and IK retargeting → Control Rig
- Audio track placement in Sequencer → Audio Sequencer
- Custom morph target creation → External DCC tools
How to Use
Invoke this skill in Claude Code:
/skill face-capture

This skill is also auto-detected when your prompt mentions face capture, facial animation, ARKit, Live Link Face, motion capture face, lip sync, blend shapes, or morph target capture intent. AgentUX will automatically activate Face Capture when it recognizes relevant context in your request.
Key Unreal Engine Concepts
| Concept | Description |
|---|---|
| ULiveLinkComponent | A component that binds an actor to a Live Link subject for real-time data application from external sources. |
| ARKit Blend Shapes | Apple's 52 standard facial expressions captured via the TrueDepth camera, each providing a value from 0.0 to 1.0. |
| MetaHuman Animator | An offline tool that analyzes video footage and extracts high-quality facial performance data mapped to MetaHuman morph targets. |
| OVR Lip Sync | A real-time audio analysis library that converts speech audio into viseme weights for driving lip animation. |
| Viseme | The visual counterpart of a speech sound: mouth shapes such as PP (p/b/m), aa (a), and oh (o) that map to morph targets. |
| Take Recorder | A recording tool that captures Live Link face data as animation sequences for later use in Sequencer. |
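A practical detail when driving morph targets from per-frame viseme weights is temporal smoothing: raw analysis output can jitter between frames and make the mouth pop. The sketch below uses the standard 15-viseme set that OVR Lip Sync outputs; the `smooth_visemes` helper and the smoothing factor are illustrative assumptions, not part of the OVR library.

```python
# The 15 visemes produced by OVR Lip Sync (silence plus 14 speech shapes).
OVR_VISEMES = ["sil", "PP", "FF", "TH", "DD", "kk", "CH", "SS",
               "nn", "RR", "aa", "E", "ih", "oh", "ou"]

def smooth_visemes(prev, current, alpha=0.35):
    """Exponentially smooth a frame of viseme weights toward the new values.

    alpha near 0 is heavy smoothing (laggy but stable); alpha near 1
    follows the raw analysis closely (responsive but jittery).
    """
    return [p + alpha * (c - p) for p, c in zip(prev, current)]
```

Each smoothed weight would then be written to the morph target bound to that viseme, in the same way ARKit curves drive their targets.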
Related Skills
- metahuman-setup: Import and configure MetaHuman characters with full face rigs
- control-rig: Rig-based animation controls including face rig layers and corrections
- animation-playback: Play and layer face animation sequences from capture recordings
- sequencer-basics: Place and edit face capture recordings on Sequencer timelines
What You'll Learn
- How to set up real-time face capture using an iPhone and the Live Link Face app
- How to use MetaHuman Animator for high-quality offline video-based capture
- How to generate lip sync from audio using OVR Lip Sync and viseme mapping
- How to record face capture performances and edit them in Sequencer
- How to calibrate neutral pose and range of motion for accurate results
- How to map ARKit blend shapes to custom character morph targets
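As a final illustration of the audio-driven path, here is a deliberately crude energy-based stand-in for lip sync: mapping short-window loudness to a single jaw-open weight. Real OVR Lip Sync performs per-viseme phoneme analysis, which this does not attempt; the `rms_to_jaw_open` helper and its `gain` parameter are assumptions for the sketch.

```python
import math

def rms_to_jaw_open(samples, gain=4.0):
    """Map the RMS loudness of a short audio window to a 0.0-1.0 jaw-open weight.

    samples -- a window of float PCM samples in the range -1.0..1.0
    gain    -- scales quiet speech up into a visible mouth movement
    """
    if not samples:
        return 0.0
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return min(1.0, rms * gain)
```

This kind of fallback is sometimes useful for background characters or platforms where a full viseme analysis library is unavailable, but dialogue close-ups need proper viseme-based lip sync.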