

Granular pixel-level motion control for cinematic generative video.
MotionBrush is a sophisticated spatial control feature integrated within the Runway Gen-2 and Gen-3 Alpha architectures. Historically, generative video models suffered from 'global motion' issues where the entire frame would shift unpredictably. MotionBrush solves this by allowing creators to apply a weighted mask to specific regions of a static image, instructing the latent diffusion model to generate temporal variance only within those localized pixel coordinates.

As of 2026, the tool has evolved to support multi-brush layering, allowing for independent motion vectors (directional, proximity, and scale) within a single generation. It utilizes a dedicated optical flow estimation layer that maps user brush strokes to 3D trajectory data, which the Gen-3 model then interprets during the denoising process.

Positioned as a professional-grade VFX tool, it bridges the gap between unpredictable AI generation and traditional keyframe animation, making it a staple in high-end advertising, social media content, and pre-visualization workflows. Its market position is solidified by its deep integration into the Runway Creative Suite, providing an ecosystem where assets move seamlessly from generation to post-production.
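Conceptually, multi-brush layering amounts to blending several weighted masks, each carrying its own horizontal/vertical/proximity vector, into a single per-pixel motion field. A minimal numpy sketch of that idea (the circular-brush helper, function names, and the 3-component vector layout are illustrative assumptions, not Runway's internals):

```python
import numpy as np

def make_brush_mask(h, w, center, radius):
    """Circular brush mask: 1.0 inside the radius, 0.0 outside.
    (Illustrative stand-in for a hand-painted MotionBrush region.)"""
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt((yy - center[0]) ** 2 + (xx - center[1]) ** 2)
    return (dist <= radius).astype(np.float32)

def combine_brushes(shape, brushes):
    """Blend (mask, (dx, dy, dz)) pairs into one per-pixel motion-vector
    field, the way multi-brush layering implies: each brush contributes
    its vector only where its mask is non-zero."""
    h, w = shape
    field = np.zeros((h, w, 3), dtype=np.float32)
    for mask, vec in brushes:
        field += mask[..., None] * np.asarray(vec, dtype=np.float32)
    return field

# Two independent brushes: horizontal drift on the left,
# zoom-in ("proximity") on the right.
m1 = make_brush_mask(64, 64, center=(32, 16), radius=10)
m2 = make_brush_mask(64, 64, center=(32, 48), radius=10)
field = combine_brushes((64, 64), [(m1, (0.5, 0.0, 0.0)),
                                   (m2, (0.0, 0.0, 0.8))])
```

Pixels outside every mask keep a zero vector, which is exactly the "animate only these coordinates" behavior described above.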
Key features:
- Up to 5 independent motion masks, each with distinct X, Y, and Z axis parameters.
- Proximity control governing Z-axis (zoom-in/zoom-out) motion relative to the camera lens metadata.
- Ambient noise injection: Perlin noise added to the latent space to simulate micro-vibrations and natural randomness.
- Adjustable edge softness (feathering) for masks, ensuring smooth transitions between animated and static pixels.
- Strict vector adherence that forces motion along a defined direction, preventing model-based drift.
- Low-fidelity latent preview of motion paths before full generation.
- A post-denoising pass that stabilizes high-frequency jitter in brushed areas.
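The edge-softness control can be pictured as a falloff band around the painted region: full motion weight inside the brush, ramping to zero over the feather width. A small sketch, assuming a circular brush and a linear feather ramp (Runway's actual falloff curve is not public):

```python
import numpy as np

def feathered_mask(h, w, center, radius, feather):
    """Circular mask whose weight ramps linearly from 1.0 (inside the
    radius) down to 0.0 across a `feather`-pixel band, so animated and
    static pixels blend instead of meeting at a hard edge.
    (Illustrative sketch, not Runway's implementation.)"""
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt((yy - center[0]) ** 2 + (xx - center[1]) ** 2)
    return np.clip((radius + feather - dist) / feather, 0.0, 1.0)

mask = feathered_mask(64, 64, center=(32, 32), radius=10, feather=6)
```

Setting `feather` near zero reproduces a hard-edged mask; larger values trade precision for a smoother boundary between moving and still pixels.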
Example workflows:
- Static clothing photos look unappealing on social media feeds: upload a model photo, brush only the fabric of the dress, set horizontal swing with low ambient noise, and generate a 5-second loop for Instagram.
- Stiff renders of buildings lack the 'lived-in' feel: upload the exterior render, brush clouds and water features, set slow horizontal motion for the clouds, and combine with the text prompt 'timelapse sky'.
- Creating complex particle effects for logos takes hours in After Effects: upload a high-contrast logo, brush the edges of the logo, set proximity to 'in' with high ambient noise, and generate an organic-looking reveal.
- Budget constraints prevent hiring a liquid simulation artist: upload a scene of a character by a campfire, brush only the flames, set vertical movement at high speed, and export as ProRes for clean compositing.
- Adding life to AI-generated avatars without ruining facial symmetry: upload the avatar, selectively brush hair and eyes only, set micro-movements on the vertical axis, and produce a 'living portrait' for profile headers.
- Scientific diagrams of weather systems need motion to be understood: upload a diagram of a hurricane, brush the spiral arms, apply 'Proximity' to simulate rotation, and export as an educational video snippet.
- Archival photos feel disconnected from modern audiences: upload a B&W photo of a city street, brush smoke from chimneys and distant pedestrians, apply subtle horizontal motion, then colorize and animate for a 'living history' effect.
How to use MotionBrush:
1. Navigate to the Runway dashboard and select the 'Gen-3 Alpha' or 'Gen-2' video generation tool.
2. Upload a high-resolution base image (16:9 or 9:16 aspect ratio recommended).
3. Click the 'MotionBrush' icon located in the image preview toolbar.
4. Select 'Brush 1' and adjust the brush size based on the target area (e.g., a flowing river or hair).
5. Paint the area of the image you wish to animate; use the eraser tool to refine the mask edges.
6. Use the 'Horizontal', 'Vertical', and 'Proximity' sliders to define the primary movement direction.
7. (Optional) Add 'Brush 2' or 'Brush 3' to apply different motion parameters to other areas of the image.
8. Adjust the 'Ambient Noise' slider to control the intensity of secondary, natural micro-movements.
9. Click 'Save' and enter a text prompt to provide context for the style of motion (e.g., 'gentle flowing').
10. Hit 'Generate' to process the 5-10 second video clip using credits.
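For teams scripting generations rather than clicking through the UI, the steps above map naturally onto a request body: a base image, a guiding prompt, and a list of brushes with their slider values. The sketch below only assembles that payload; the field names, model identifier, and overall shape are hypothetical assumptions, since MotionBrush is a UI feature and no public request schema for it is documented here:

```python
import json

def build_generation_request(image_url, brushes, prompt, duration_s=5):
    """Assemble a hypothetical request body mirroring the UI workflow:
    base image, per-brush masks and motion sliders, ambient noise, and
    a guiding text prompt. All keys below are assumptions for
    illustration, not a documented Runway API."""
    return {
        "model": "gen3a",                 # assumed model identifier
        "promptImage": image_url,
        "promptText": prompt,
        "duration": duration_s,           # 5-10 seconds, per the steps above
        "motionBrushes": [
            {
                "mask": b["mask_png"],    # painted mask, e.g. base64 PNG
                "horizontal": b.get("horizontal", 0.0),
                "vertical": b.get("vertical", 0.0),
                "proximity": b.get("proximity", 0.0),
                "ambientNoise": b.get("ambient", 0.0),
            }
            for b in brushes
        ],
    }

payload = build_generation_request(
    "https://example.com/base.png",
    [{"mask_png": "<mask-1>", "horizontal": 0.6, "ambient": 0.2}],
    prompt="gentle flowing",
)
print(json.dumps(payload, indent=2))
```

Unset sliders default to 0.0, matching the UI behavior where an untouched control contributes no motion.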
Verified feedback from other users.
“Users praise the granular control and professional-grade output, though some note a learning curve for the 'Proximity' slider.”
Official Website
Try MotionBrush (by Runway) directly — explore plans, docs, and get started for free.
Choose the right tool for your workflow
Better for quick, stylized social media animations with a simpler UI.
Superior for 360-degree camera pans and extreme 3D spatial changes.

A creative research lab pioneering high-fidelity video generation through open-weights excellence.

Transforms communication with audiences through intelligent video automation, delivering measurable business results.

Transform podcast audio into viral video content with AI-driven automation and multi-channel distribution.

Hyper-personalized AI video generation for hyper-growth outbound sales.

Turn text prompts into production-ready videos with automated scripting, voiceovers, and media curation.

The ultimate AI creative lab for audio-reactive video generation and motion storytelling.