ComfyUITemplates.com

Discover free ready-made ComfyUI templates for AI workflows.

Wan2.2 Animate: Replace Anyone in a Video

ComfyUI Workflow: Wan2.2 Animate for Video Character Replacement

Wan2.2 Animate is a ComfyUI workflow that enables straightforward character replacement within a video, simplifying the process of swapping one character for another.

How it works

- **Input preparation**: Provide a source video and a reference image.
- **Character identification**: Use the points editor node to place two points on the video, indicating the character to be swapped.
- **Automated replacement**: The workflow processes this information to replace the designated character.
- **Enhanced face swapping**: It integrates ReActor to refine and improve face-swap quality.

Inputs

- Source video
- Reference image

Output

- A video featuring the replaced character.
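The steps above can be driven programmatically once the workflow is exported in API format. Below is a minimal sketch that patches the two input nodes and queues the graph through ComfyUI's HTTP API. The file name `wan22_animate_api.json` and the node IDs `"12"` and `"17"` are assumptions for illustration; export your own graph via ComfyUI's "Save (API Format)" option and adjust the IDs to match.

```python
# Hedged sketch: queue an exported Wan2.2 Animate workflow via ComfyUI's
# HTTP API (POST /prompt on a locally running server). Node IDs below are
# placeholders for this hypothetical export, not fixed values.
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # default local ComfyUI server address


def build_payload(workflow: dict, video_path: str, image_path: str) -> dict:
    """Patch the input nodes of an exported graph, then wrap it for /prompt."""
    graph = json.loads(json.dumps(workflow))  # deep copy; leave the original intact
    # "12" = video loader node, "17" = reference-image loader node
    # in this particular (hypothetical) export.
    graph["12"]["inputs"]["video"] = video_path
    graph["17"]["inputs"]["image"] = image_path
    return {"prompt": graph}


def queue_prompt(payload: dict) -> None:
    """Send the patched graph to the ComfyUI server's /prompt endpoint."""
    req = urllib.request.Request(
        f"{COMFY_URL}/prompt",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)


if __name__ == "__main__":
    with open("wan22_animate_api.json") as f:
        wf = json.load(f)
    queue_prompt(build_payload(wf, "source.mp4", "reference.png"))
```

This is useful for batch runs: loop `build_payload` over a folder of source videos while keeping the same reference image.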


Wan2.2 Animate replaces a character in a video using a reference image and two points, then refines the face swap with ReActor.

Similar listings in category

A ComfyUI workflow for the Infinite Talking model, generating infinite talking videos from an audio input and a video or image input, with improved speed and quality over prior models like Multitalk.

Infinite Talking Model on Multitalk

ComfyUI Workflow: Infinite Talking Model on Multitalk

The Infinite Talking model is a ComfyUI workflow designed for creating talking-head videos. It allows you to generate videos where a character speaks continuously, driven by an audio input.

What this workflow offers

- **Infinite Talking Videos**: Produce videos where a subject appears to talk for an extended duration.
- **Flexible Input**: Use either a video or an image as the visual source, combined with an audio input.
- **Enhanced Performance**: Offers significant improvements in generation speed and output quality compared to previous models like Multitalk.

How it works

- **Audio-driven Synthesis**: An audio track guides the speech and facial movements of the character.
- **Visual Source Integration**: A selected video or image provides the visual identity for the talking character.

Why use this workflow

- **Rapid Content Creation**: Quickly generate engaging talking-head videos for various applications.
- **High Fidelity Output**: Benefit from advanced quality settings for more realistic and stable results.
- **Efficiency**: Experience faster processing times, making iteration and production more streamlined.

Use cases

- **Narrative Content**: Create narrations or presentations with a virtual presenter.
- **Educational Videos**: Generate talking avatars for tutorials or lectures.
- **Animated Storytelling**: Bring static images or short video clips to life with spoken dialogue.
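Since the audio track drives the generation, the video length should cover the audio duration. A small sketch of sizing the frame count to an input clip; the helper names and the 25 fps default are my assumptions, not values from this listing — check the frame rate your node setup actually outputs.

```python
# Hedged sketch: size a talking-head generation to match an audio clip.
# The 25 fps default is an assumption; adjust to your workflow's frame rate.
import math
import wave


def audio_seconds(path: str) -> float:
    """Duration of a WAV file: frame count divided by sample rate."""
    with wave.open(path, "rb") as w:
        return w.getnframes() / w.getframerate()


def frames_for_audio(duration_s: float, fps: int = 25) -> int:
    """Round up so the generated video fully covers the audio track."""
    return math.ceil(duration_s * fps)
```

For example, a 2-second clip at 25 fps needs 50 frames; feed that number into the workflow's video-length input.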


Wan 2.2 Light2X Image-to-Video

ComfyUI Workflow: Wan 2.2 Light2X Image-to-Video

This ComfyUI workflow utilizes Wan 2.2 and the Light2X Lora to efficiently generate video from images. It focuses on rapid production and flexible output, enabling users to create dynamic clips with optimized performance.

What makes this workflow efficient

* **Fast Generation**: Produce videos in as few as 4 steps, significantly reducing processing time.
* **Multi-Dimensional Output**: Supports various video resolutions, including 480p, 720p, and custom sizes.
* **High Frame Rate**: Generates 32 fps videos directly through integrated interpolation using FILM VFI.
* **Optimized Model Loading**: Employs FP8 models to accelerate the generation process, leading directly to sampling.
* **Streamlined Lora Management**: The rgthree Power Lora Loader allows loading multiple loras without requiring numerous nodes.

Generation speed highlights

* A 5-second 480x832 video can be generated in approximately 1 minute 10 seconds after an initial run.
* A 5-second 720x1280 video can be generated in about 3 minutes 30 seconds after an initial run.
* This efficiency supports generating multiple short videos quickly.

How to use this workflow

* **Model Loading**: The workflow uses both high-noise and low-noise Wan 2.2 models. FP8 models are advised for faster processing.
* **Lora Integration**: Use the rgthree Power Lora Loader. Set the high_noise_model loader to strength 2 and the low_noise_model loader to strength 1. For other loras, load the same lora file into both loaders, with the first loader at double strength.
* **VAE and Clip**: Always use the Wan 2.1 VAE. FP8 is recommended for the Clip model as a balance between speed and quality.
* **Image and Prompt**: Load your source image and select the desired output dimensions. Enter your text prompt and specify the video length (e.g., 81 frames for 5 seconds, 160 frames for 10 seconds).
* **Sampling**: Two KSamplers are used, one for each noise model. Light2X enables generation in just 4 steps. Euler/Simple or Euler/Beta are the recommended samplers. Keep CFG at 1 to maintain quality and avoid artifacts.
* **Video Assembly**: The initial video output is at 16 fps, which FILM VFI then interpolates to a final 32 fps.

Lora usage guidance

* Wan 2.2 loras are highly recommended; they require both high- and low-noise versions from their respective Civitai pages.
* Wan 2.1 loras are compatible. Load the same lora file into both loaders, with the first (high-noise) loader set at double the strength of the second.

This workflow offers a streamlined and efficient method for creating high-quality, high-framerate videos from images within ComfyUI.
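The duration and lora-strength rules above can be sketched as two tiny helpers. The function names are mine; the 16 fps base rate and the "first loader at double strength" rule come from the guidance in this listing (e.g., 81 frames ≈ 5 seconds before FILM VFI doubles the frame rate to 32 fps).

```python
# Hedged sketch of the listing's numbers: 16 fps base output, frame counts
# like 81 for ~5 s, and the high-noise loader at double the low-noise strength.


def clip_duration_seconds(frames: int, base_fps: int = 16) -> float:
    """Duration before interpolation; 81 frames at 16 fps is the 5 s example
    (Wan-style frame counts include one extra frame, hence frames - 1)."""
    return (frames - 1) / base_fps


def lora_strengths(low_noise_strength: float) -> tuple[float, float]:
    """Return (high_noise_strength, low_noise_strength): the first loader
    takes double the strength of the second, per the guidance above."""
    return (2 * low_noise_strength, low_noise_strength)
```

For instance, `clip_duration_seconds(81)` gives 5.0 seconds, and a Wan 2.1 lora you would normally run at strength 1 goes in as `(2.0, 1.0)` across the two loaders.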


WAN 2.2 Text-to-Video

ComfyUI Workflow: WAN 2.2 Text-to-Video V2 Fast Workflow

The **WAN 2.2 Text-to-Video V2 Fast Workflow** is an upgraded ComfyUI solution designed to generate **cinematic HD-quality videos** from a simple text prompt. This V2 edition introduces **prompt extension capabilities** and **enhanced color grading** for more vivid, high-contrast visuals. Built for **speed and quality**, it leverages WAN 2.2's advanced motion rendering and smarter prompt handling to interpret creative ideas with greater detail, depth, and visual richness.

What makes WAN 2.2 V2 special

- **Prompt Extension System**: Expands your prompt intelligently for richer, more dynamic scene generation.
- **Better Colors & Brightness**: Delivers improved color grading, contrast, and visual vibrancy for cinematic impact.
- **Fast Rendering**: Features an optimized pipeline for quick turnaround without sacrificing detail.
- **Text-to-Video Simplicity**: Input a prompt to get a smooth, cinematic video in HD.
- **Enhanced Visual Depth**: Offers balanced tones, richer colors, and improved brightness.
- **High-Speed Generation**: Creates professional-quality outputs in minimal time.

How to get started in ComfyUI

- Provide a text prompt describing your desired video.
- The workflow processes the prompt to generate a cinematic HD video.

Recommended settings

- For best results, ULTRA PRO is recommended.
- Adjust shift values if necessary for fine-tuning.
- Test with other style and character loras to explore diverse visual styles.

Why use this workflow

- **Faster creative output**: Quickly generate AI-powered videos from text.
- **Visually rich content**: Achieve more vibrant, high-contrast, and detailed cinematic visuals.
- **Simplified workflow**: Create professional-quality video content with ease.

Use cases

- Rapid prototyping for video concepts.
- Generating stylized or cinematic short clips.
- Creating engaging visual content for various platforms.

Conclusion

The **WAN 2.2 Text-to-Video V2 Fast Workflow** offers a streamlined, high-quality method within ComfyUI for generating faster, richer, and more vibrant AI-generated videos, featuring enhanced visual depth and intelligent prompt handling.