MetaHuman Animation Pipeline
Advanced · 45 min
Build a complete MetaHuman animation pipeline: import and configure a MetaHuman, retarget third-party animations, set up motion matching, rig it with Control Rig, and capture facial performance.
Prerequisites
This lesson uses 6 skills spanning the Animation & Motion cluster. You should have a MetaHuman downloaded from MetaHuman Creator and imported into your project via Quixel Bridge.
What You'll Build
A MetaHuman character with a full animation pipeline: body animations retargeted from a Mixamo library, responsive locomotion via motion matching, hand and finger adjustments through Control Rig, and facial performance captured from your iPhone. This is the pipeline used for cinematic characters, virtual humans in architectural visualization, and game protagonists.
Step-by-Step Workflow
Step 1: Import and configure the MetaHuman
/skill metahuman-setup Tell Claude:
"I have a MetaHuman imported via Quixel Bridge. Set it up for gameplay use. Configure LOD levels for real-time performance, enable body and face components, and place it in my level. I want LOD 1 for gameplay (not cinematic LOD 0)." What to expect: Claude configures your MetaHuman Blueprint for real-time use, setting appropriate LOD levels for body and face, enabling the Groom (hair) component with performance settings, and ensuring the skeleton is ready for animation. The character appears in your level at playable quality.
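The LOD decision Claude makes here can be sketched as a simple distance check. A minimal illustration, assuming hypothetical distance thresholds (not engine defaults) and a gameplay cap at LOD 1:

```python
# Illustrative LOD selection by camera distance.
# Thresholds are hypothetical, not Unreal Engine defaults.
LOD_THRESHOLDS = [(3.0, 0), (10.0, 1), (25.0, 2)]  # (max distance in meters, LOD index)

def select_lod(distance_m: float, gameplay_cap: int = 1) -> int:
    """Pick a LOD for the MetaHuman, never finer than gameplay_cap."""
    for max_dist, lod in LOD_THRESHOLDS:
        if distance_m <= max_dist:
            return max(lod, gameplay_cap)  # cap at LOD 1 for gameplay
    return 3  # farthest LOD beyond all thresholds

print(select_lod(2.0))   # close-up, still capped at LOD 1 -> 1
print(select_lod(15.0))  # mid distance -> 2
```

The cap is the key idea: even in a close-up during gameplay, the character never pays the cinematic LOD 0 cost.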
Connection to next step: The MetaHuman has a skeleton but no animations. You need motion data.
Step 2: Build a motion library
/skill metahuman-motion-library Tell Claude:
"Import a Mixamo locomotion pack: idle, walk, run, and jump animations. Retarget them onto my MetaHuman using the IK Retargeter. Make sure the feet do not slide and the hand positions look natural after retargeting." What to expect: Claude imports the Mixamo FBX files, creates an IK Rig for the Mixamo skeleton, sets up the IK Retargeter mapping to the MetaHuman skeleton, and batch-retargets all animations. The result is a library of locomotion animations that play correctly on your MetaHuman's proportions.
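At its core, the IK Retargeter encodes a mapping from source bones to target bones. A sketch of that idea, with MetaHuman-side names following the UE5 skeleton convention; treat the exact pairs as illustrative of the mapping, not as the authoritative retarget asset:

```python
# Illustrative Mixamo -> MetaHuman (UE5 skeleton) bone mapping.
# The real mapping lives in the IK Rig / IK Retargeter assets.
MIXAMO_TO_METAHUMAN = {
    "mixamorig:Hips":        "pelvis",
    "mixamorig:Spine":       "spine_01",
    "mixamorig:Neck":        "neck_01",
    "mixamorig:Head":        "head",
    "mixamorig:LeftArm":     "upperarm_l",
    "mixamorig:LeftForeArm": "lowerarm_l",
    "mixamorig:LeftHand":    "hand_l",
    "mixamorig:LeftUpLeg":   "thigh_l",
    "mixamorig:LeftLeg":     "calf_l",
    "mixamorig:LeftFoot":    "foot_l",
}

def missing_targets(source_bones, mapping):
    """Source bones with no retarget target -- these show up as T-posed limbs."""
    return [b for b in source_bones if b not in mapping]

print(missing_targets(["mixamorig:Hips", "mixamorig:RightFoot"], MIXAMO_TO_METAHUMAN))
# -> ['mixamorig:RightFoot']
```

Unmapped chains are the most common cause of broken retargets, which is why verifying the result (feet, hands) before batch-retargeting the whole library matters.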
Step 3: Set up motion matching
/skill metahuman-motion-matching Tell Claude:
"Create a Pose Search database from my retargeted locomotion animations. Configure it for responsive character movement. The MetaHuman should blend seamlessly between idle, walk, and run based on movement speed. Use trajectory matching for directional changes." What to expect: Claude creates a PoseSearchDatabase asset, adds your retargeted animations with appropriate tags, configures pose and trajectory features for matching, and creates the PoseSearchSchema. The result is a motion matching system that automatically selects the best animation pose based on the character's current state and intended movement.
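Conceptually, motion matching is a nearest-neighbor search over feature vectors combining pose features and trajectory samples. A minimal brute-force sketch, assuming an illustrative four-dimensional feature layout (real PoseSearch uses the schema's channels and per-channel weights):

```python
# Minimal brute-force pose search over (pose + trajectory) feature vectors.
# Feature layout is illustrative: [foot_phase, hip_height, vel_x, vel_y].

def cost(query, entry, trajectory_weight=2.0, pose_dims=2):
    """Weighted squared distance; trajectory features are weighted higher
    so directional intent dominates over raw pose similarity."""
    total = 0.0
    for i, (q, e) in enumerate(zip(query, entry)):
        w = trajectory_weight if i >= pose_dims else 1.0
        total += w * (q - e) ** 2
    return total

def best_pose(query, database):
    """Index of the database pose with the lowest match cost."""
    return min(range(len(database)), key=lambda i: cost(query, database[i]))

db = [
    [0.0, 1.00, 0.0, 0.0],  # idle
    [0.5, 0.98, 1.5, 0.0],  # walk forward
    [0.5, 0.95, 4.0, 0.0],  # run forward
]
print(best_pose([0.5, 0.97, 3.5, 0.0], db))  # fast trajectory -> 2 (run)
```

This is why the system feels responsive: there is no state machine to transition through, just the cheapest-cost pose for the current movement intent.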
Step 4: Refine with Control Rig
/skill control-rig Tell Claude:
"Create a Control Rig for the MetaHuman to add procedural adjustments on top of the motion-matched animation. I need: a look-at solver so the head tracks a target, IK foot placement so feet adapt to uneven ground, and hand IK so I can adjust hand positions in Sequencer." What to expect: Claude creates a Control Rig Blueprint with nodes for aim/look-at (head and eyes track a world-space target), foot IK (two-bone IK solvers on each leg with ground traces), and hand IK controls exposed to Sequencer. The rig layers on top of the motion matching output, adding procedural polish.
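The foot-placement solver is built on analytic two-bone IK: given the thigh and calf lengths and the distance to the IK target, the law of cosines gives the knee bend. A sketch of just that core math (Control Rig's IK node adds pole-vector orientation on top):

```python
import math

def two_bone_ik(upper_len, lower_len, target_dist):
    """Analytic two-bone IK via the law of cosines: returns the interior
    knee/elbow angle in radians (pi = fully straight limb)."""
    # Clamp so an out-of-reach target fully straightens the limb.
    d = max(abs(upper_len - lower_len), min(target_dist, upper_len + lower_len))
    cos_bend = (upper_len**2 + lower_len**2 - d**2) / (2 * upper_len * lower_len)
    return math.acos(max(-1.0, min(1.0, cos_bend)))

print(round(two_bone_ik(0.45, 0.45, 0.90), 3))  # target at full reach -> 3.142 (pi)
```

In the foot-IK case, a ground trace shortens `target_dist` on uneven terrain, and this angle bends the knee so the foot plants on the surface instead of floating or clipping.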
Step 5: Set up the Animation Blueprint
/skill animation-playback Tell Claude:
"Create an Animation Blueprint for the MetaHuman that uses the Pose Search node for locomotion, layers the Control Rig on top, and supports Montage playback for one-off actions like waving or picking up objects. Add a slot for upper-body overrides." What to expect: Claude creates an AnimBlueprint with a motion matching node feeding into a Control Rig evaluation node, a Montage slot for full-body actions, and a layered blend per bone for upper-body overrides. The character now moves responsively and can play contextual animations on top.
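The layered blend per bone works by selecting, per bone, which pose wins based on the bone's position in the hierarchy. A toy sketch with a hard blend and a hypothetical miniature skeleton (a real blend interpolates weights smoothly down the branch):

```python
# Sketch of a layered blend per bone: bones under the branch root take
# the montage pose, everything else keeps the locomotion pose.
# Bone names and the branch root ("spine_03") are illustrative.
UPPER_BODY_ROOT = "spine_03"
PARENTS = {
    "pelvis": None, "spine_03": "pelvis", "clavicle_l": "spine_03",
    "hand_l": "clavicle_l", "thigh_l": "pelvis", "foot_l": "thigh_l",
}

def is_upper_body(bone):
    """Walk up the hierarchy looking for the branch root."""
    while bone is not None:
        if bone == UPPER_BODY_ROOT:
            return True
        bone = PARENTS[bone]
    return False

def blend_pose(locomotion, montage):
    """Per-bone select: montage drives the upper body, locomotion the rest."""
    return {b: (montage[b] if is_upper_body(b) else locomotion[b]) for b in locomotion}
```

So a wave montage playing in the upper-body slot moves the arm and spine while the legs keep the motion-matched walk underneath.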
Step 6: Capture facial performance
/skill face-capture Tell Claude:
"Set up Live Link with my iPhone for facial capture. I want to capture a performance for a dialogue scene. Map the ARKit blend shapes to the MetaHuman face, record a 30-second take, and bake it into a Level Sequence." What to expect: Claude configures Live Link Face, maps ARKit's 52 blend shapes to the MetaHuman's facial rig, starts a recording session, and bakes the captured performance into a Sequencer track. Your MetaHuman now performs facial expressions captured from your actual face movements.
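The remapping step is essentially a per-frame dictionary translation from ARKit blend shape names to the MetaHuman's facial curves. A sketch with real ARKit shape names on the source side but placeholder target names (the authoritative mapping ships in the MetaHuman's Live Link remap asset):

```python
# A few of ARKit's 52 blend shapes mapped to hypothetical MetaHuman face
# curve names -- the target names here are placeholders.
ARKIT_TO_METAHUMAN = {
    "jawOpen":        "CTRL_expressions_jawOpen",
    "eyeBlinkLeft":   "CTRL_expressions_eyeBlinkL",
    "browInnerUp":    "CTRL_expressions_browRaiseInner",
    "mouthSmileLeft": "CTRL_expressions_mouthSmileL",
}

def remap_frame(arkit_values, mapping):
    """Translate one captured frame (ARKit name -> 0..1 weight) into
    MetaHuman curve weights, dropping shapes with no mapping."""
    return {mapping[k]: v for k, v in arkit_values.items() if k in mapping}

frame = {"jawOpen": 0.6, "eyeBlinkLeft": 1.0, "tongueOut": 0.2}
print(remap_frame(frame, ARKIT_TO_METAHUMAN))  # tongueOut is dropped (no mapping)
```

Recording simply captures one such frame of weights per tick; baking writes those curves onto a Sequencer track so the take plays back without the phone connected.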
Tips & Best Practices
- Retarget before motion matching. Your Pose Search database needs animations on the correct skeleton. Always retarget first, verify the results look good, then build the database.
- Use LOD 1 for gameplay, LOD 0 for cinematics. Cinematic LOD renders every pore and hair strand but costs too much for real-time. Switch LODs based on context.
- Test motion matching with simple movement first. Walk forward, turn, stop. If these basic transitions feel smooth, the system is working. Add complexity (strafing, jumping) after the basics are solid.
- Capture facial performance in even, frontal lighting. ARKit tracking degrades with uneven lighting or glasses. Remove eyewear, face a window for even light, and keep the phone at arm's length.
- Layer, do not replace. Control Rig adjustments layer on top of motion matching. Do not try to do everything in one system. Let each layer handle its specialty.
Next Steps
Building a Cinematic Scene
Intermediate: Use your MetaHuman in a cinematic with cameras and lighting
Performance Tuning & Shipping
Advanced: Optimize your MetaHuman project for distribution
Skills used in this lesson
Cluster: Animation & Motion