Audio Sequencer
Effects & Audio
Audio in Sequencer: audio tracks, dialogue alignment, music scoring, spatial audio, and MetaSounds/Sound Cue integration.
/skill audio-sequencer
What This Skill Does
Audio Sequencer helps you place and manage audio within UE5's Sequencer for cinematic and interactive use. This skill covers the full audio production workflow in Sequencer, from importing sound assets and placing dialogue tracks aligned to character lip sync, to scoring music across scenes, triggering sound effects at precise frames, and configuring spatial audio for 3D-positioned sounds. You will learn to build layered audio mixes with volume automation, ducking, fading, and submix routing to produce polished soundscapes for your cinematics.
Covers
- Audio tracks in Sequencer: adding, timing, and trimming sound assets
- Dialogue placement aligned to character animation and lip sync
- Music scoring: layering tracks, fading, and cross-fading
- Sound effect triggering via Sequencer event keys and audio sections
- Audio attenuation and spatialization for 3D-positioned sounds
- Volume automation: fade in, fade out, and ducking music under dialogue
- Sound Cue and MetaSound asset integration
- Audio submix routing for organized cinematic mixes
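The crossfading and ducking items above rest on one idea: when two music tracks overlap, their gains should sum in power, not amplitude, or the mix dips or swells at the transition. The sketch below is a generic equal-power crossfade calculation, not Unreal API code; the function name and signature are illustrative.

```python
import math

def equal_power_crossfade(t: float) -> tuple[float, float]:
    """Gains for an equal-power crossfade at progress t in [0, 1].

    Returns (outgoing_gain, incoming_gain). Because cos^2 + sin^2 = 1,
    the combined power stays constant across the whole fade.
    """
    t = min(max(t, 0.0), 1.0)
    angle = t * math.pi / 2
    return math.cos(angle), math.sin(angle)

# At the midpoint both tracks sit at ~0.707 (-3 dB), so the sum of
# their powers is still unity and the transition sounds level.
out_gain, in_gain = equal_power_crossfade(0.5)
```

A linear crossfade (gains `1 - t` and `t`) is simpler but audibly dips in the middle, which is why equal-power curves are the usual choice for music transitions.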
Does Not Cover
- MetaSounds graph authoring → MetaSounds Authoring (future skill)
- Audio middleware (Wwise, FMOD) integration → Audio Middleware (future skill)
- Facial animation and lip sync capture → Face Capture
- Animation playback for syncing body movement to audio → Animation Playback
How to Use
Invoke this skill in Claude Code:
/skill audio-sequencer
This skill is also auto-detected when your prompt mentions audio, sound, music, dialogue, voice, SFX, or soundtrack intent in the context of Sequencer. AgentUX will automatically activate Audio Sequencer when it recognizes relevant context in your request.
Key Unreal Engine Concepts
| Concept | Description |
|---|---|
| UMovieSceneAudioTrack | A Sequencer track type for placing audio sections on a timeline, supporting both scene-level and actor-bound audio. |
| USoundWave | The base audio asset created when importing WAV files, used directly in Sequencer audio sections. |
| USoundCue | A legacy audio graph asset that can apply randomization, mixing, and modulation before playback. |
| FSoundAttenuationSettings | Configuration for how a sound fades with distance, including inner/outer radius and falloff curves. |
| UDialogueWave | A dialogue-specific audio asset that supports localization with per-speaker context information. |
| Sound Submix | An audio routing bus for grouping sounds (dialogue, music, SFX) so they can be mixed independently. |
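The inner/outer radius and falloff idea behind FSoundAttenuationSettings can be illustrated with a tiny distance model. This is a simplified, engine-independent sketch assuming a linear falloff curve only; real attenuation settings also support natural-sound, logarithmic, and custom curves, and the parameter names here are illustrative.

```python
def linear_attenuation(distance: float, inner_radius: float,
                       falloff_distance: float) -> float:
    """Volume multiplier for a linear distance falloff.

    Full volume anywhere inside inner_radius; fades linearly to
    silence over falloff_distance beyond it, mirroring the
    inner/outer radius concept in attenuation settings.
    """
    if distance <= inner_radius:
        return 1.0
    if distance >= inner_radius + falloff_distance:
        return 0.0
    return 1.0 - (distance - inner_radius) / falloff_distance

# Halfway through the falloff band the sound plays at half volume.
print(linear_attenuation(distance=750.0, inner_radius=500.0,
                         falloff_distance=500.0))  # → 0.5
```

In practice you tune the inner radius so dialogue stays at full volume near the camera, then widen the falloff distance for ambient beds that should fade gently rather than cut out.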
Related Skills
sequencer-basics
Timeline fundamentals for placing and timing cinematic content
animation-playback
Play and blend animations for dialogue and performance sync
metahuman-setup
Configure MetaHuman characters for dialogue scenes
face-capture
Facial capture and lip sync for dialogue-driven performances
What You'll Learn
- How to place and time dialogue audio aligned to character lip sync in Sequencer
- How to score music with layered tracks, crossfades, and beat-aligned cuts
- How to use spatial audio attenuation for realistic 3D-positioned sounds
- How to automate volume for fading, ducking, and dynamic mixing
- Best practices for organizing audio tracks and naming conventions
- How to trigger sound effects with frame-accurate precision in cinematic sequences
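Frame-accurate triggering works because Sequencer stores key times as integer ticks at a high internal resolution rather than floating-point seconds. The sketch below is a generic illustration of that conversion, assuming the common default of 24000 ticks per second and a 30 fps display rate; it is not engine code, and the function names are illustrative.

```python
def seconds_to_ticks(seconds: float, tick_resolution: int = 24000) -> int:
    """Convert a time in seconds to integer ticks.

    Storing audio cue times as integer ticks at a high resolution
    avoids floating-point drift, so triggers land on exact frames.
    """
    return round(seconds * tick_resolution)

def snap_to_display_frame(ticks: int, display_rate: int = 30,
                          tick_resolution: int = 24000) -> int:
    """Snap a tick value to the nearest whole display-rate frame."""
    ticks_per_frame = tick_resolution // display_rate  # 800 ticks at 30 fps
    return round(ticks / ticks_per_frame) * ticks_per_frame

# A cue placed at 1.234 s becomes 29616 ticks, which snaps to
# 29600 ticks: exactly frame 37 at 30 fps.
snapped = snap_to_display_frame(seconds_to_ticks(1.234))
```

Snapping SFX keys to display frames this way keeps impacts aligned with the visual hit frame even after the sequence's display rate or playback speed changes.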