Face Capture

Effects & Audio

Facial animation capture: ARKit, MetaHuman Animator, Live Link streaming, runtime lip sync, and audio-driven lip sync systems.

Version: 5.0.0 – 5.7.3 | Invoke: /skill face-capture


What This Skill Does

Face Capture helps you drive facial animation on UE5 characters, particularly MetaHumans, using capture devices, Live Link streaming, and runtime lip sync systems. This skill covers the complete facial animation pipeline, from iPhone ARKit-based real-time capture through the Live Link Face app, to offline video-based capture with MetaHuman Animator, and audio-driven lip sync using OVR Lip Sync for viseme generation. You will learn how to map ARKit blend shapes to morph targets, record performances into Sequencer, calibrate neutral poses, and set up custom face mapping for non-MetaHuman characters.
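To make the blend-shape-to-morph-target step concrete, here is a minimal, engine-free sketch of the idea: ARKit streams each facial curve as a float in [0.0, 1.0], and a mapping table translates ARKit curve names onto a character's own morph target names. All names in the table are hypothetical examples, not a standard mapping.

```python
# Illustrative sketch (not engine code): remapping one frame of ARKit
# blend shape curves onto a custom character's morph targets.
# The morph target names on the right are hypothetical.

ARKIT_TO_MORPH = {
    "jawOpen": "MouthOpen",
    "mouthSmileLeft": "Smile_L",
    "mouthSmileRight": "Smile_R",
    "eyeBlinkLeft": "Blink_L",
    "eyeBlinkRight": "Blink_R",
}

def remap_blend_shapes(frame: dict) -> dict:
    """Translate one frame of ARKit curve values into morph target
    weights, clamping to the [0.0, 1.0] range ARKit guarantees."""
    out = {}
    for arkit_name, value in frame.items():
        morph = ARKIT_TO_MORPH.get(arkit_name)
        if morph is not None:
            out[morph] = min(max(value, 0.0), 1.0)
    return out

frame = {"jawOpen": 0.7, "eyeBlinkLeft": 1.2, "browInnerUp": 0.3}
print(remap_blend_shapes(frame))
# {'MouthOpen': 0.7, 'Blink_L': 1.0}
```

Unmapped curves (here, `browInnerUp`) are simply dropped, which mirrors the common case of a custom rig that implements only a subset of the 52 ARKit shapes.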

Covers

  • Live Link Face app setup for ARKit-based iPhone and iPad face capture
  • Live Link plugin configuration and source connection
  • ARKit blend shape mapping to UE5 morph targets (52 standard shapes)
  • MetaHuman Animator workflow for video-based facial capture
  • Audio-driven lip sync using OVR Lip Sync and runtime viseme analysis
  • Recording and playback of face capture data in Sequencer
  • Calibration: neutral pose, range of motion, and retargeting
  • Custom character face mapping for non-MetaHuman rigs

Does Not Cover

  • MetaHuman import and LOD setup → MetaHuman Setup
  • Body motion capture and IK retargeting → Control Rig
  • Audio track placement in Sequencer → Audio Sequencer
  • Custom morph target creation → External DCC tools

How to Use

Invoke this skill in Claude Code:

/skill face-capture

This skill is also auto-detected when your prompt signals face-capture intent, such as mentions of face capture, facial animation, ARKit, Live Link Face, motion capture for faces, lip sync, blend shapes, or morph targets. AgentUX will automatically activate Face Capture when it recognizes relevant context in your request.

Key Unreal Engine Concepts

  • ULiveLinkComponent: A component that binds an actor to a Live Link subject for real-time data application from external sources.
  • ARKit Blend Shapes: Apple's 52 standard facial expressions captured via the TrueDepth camera, providing values from 0.0 to 1.0 per shape.
  • MetaHuman Animator: An offline tool that analyzes video footage and extracts high-quality facial performance data mapped to MetaHuman morph targets.
  • OVR Lip Sync: A real-time audio analysis library that converts speech audio into viseme weights for driving lip animation.
  • Viseme: A visual representation of a speech sound: mouth shapes like PP (p/b/m), aa (a), and oh (o) that map to morph targets.
  • Take Recorder: A recording tool that captures Live Link face data as animation sequences for later use in Sequencer.
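The viseme concept above can be illustrated with a small standalone sketch: a lip sync analyzer emits one weight per viseme each frame, and a simple exponential filter smooths those weights before they drive mouth morph targets. The viseme subset and the smoothing factor are assumptions for illustration, not OVR Lip Sync API code.

```python
# Illustrative sketch: smoothing per-frame viseme weights (as an
# OVR-Lip-Sync-style analyzer might emit them) before applying them
# to morph targets. Viseme names and alpha are illustrative choices.

VISEMES = ["sil", "PP", "FF", "aa", "oh"]  # subset of the OVR viseme set

def smooth_visemes(prev, current, alpha=0.5):
    """Blend the previous frame toward the current one by factor alpha,
    reducing frame-to-frame jitter in the mouth animation."""
    return [p + alpha * (c - p) for p, c in zip(prev, current)]

prev = [1.0, 0.0, 0.0, 0.0, 0.0]   # previous frame: silence
cur  = [0.0, 0.0, 0.0, 1.0, 0.0]   # current frame: strong "aa"
print(smooth_visemes(prev, cur))
# [0.5, 0.0, 0.0, 0.5, 0.0]
```

A lower alpha gives smoother but laggier lips; a higher alpha tracks speech more tightly at the cost of jitter, which is the usual trade-off when driving morph targets from per-frame audio analysis.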

What You'll Learn

  • How to set up real-time face capture using an iPhone and the Live Link Face app
  • How to use MetaHuman Animator for high-quality offline video-based capture
  • How to generate lip sync from audio using OVR Lip Sync and viseme mapping
  • How to record face capture performances and edit them in Sequencer
  • How to calibrate neutral pose and range of motion for accurate results
  • How to map ARKit blend shapes to custom character morph targets