Inpainting

Edit specific image areas (e.g., object removal, detail refinement) while preserving overall composition.


Qwen Edit 2509 - Image Edit with Multi-Image Input and Multi LoRA Loader

ComfyUI Workflow: Qwen Edit 2509 - Image Edit with Multi-Image Input and Multi LoRA Loader

Qwen Edit 2509 is a ComfyUI workflow designed for efficient image editing, allowing you to process multiple images and apply several LoRAs quickly. It supports a seamless editing process from input to comparison, aiming for high-quality results.

What makes Qwen Edit 2509 special
- **Multi-image input**: Process several images simultaneously within the workflow.
- **Rapid generation**: Produce edited outputs in just a few seconds when using Lightning LoRAs.
- **Flexible LoRA application**: Load and use multiple LoRAs, including most Qwen image LoRAs.
- **Integrated comparison**: Review changes with the included image comparison slider.
- **Optimized sampling**: Inputs are automatically scaled to about 1M pixels for better sampling quality.

How to use
- **Load images**: Place your images into the image loader node. They are automatically scaled for optimal sampling.
- **Select LoRAs**: Optionally choose your desired LoRAs. Many Qwen image LoRAs from Civitai are compatible.
- **Input prompt**: Write your prompt. You can save prompts with the Prompt Stasher.
- **Sampler settings**:
  - With Lightning LoRAs, use the default sampler settings.
  - Without Lightning LoRAs, set steps between 20 and 50 and CFG around 2.5.
- **Adjust shift**: Set the shift value, typically between 1.5 and 3.0.
- **Generate and compare**: Run the generation and use the slider node to view the differences.
- **Custom image size**: Connect an empty latent node to the VAE encode for custom dimensions. Dimensions that are multiples of 112 are recommended for Qwen models (see the sizing sketch below).

Recommended settings
- **Image resolution**: Input images are scaled to about 1M pixels for consistent quality.
- **LoRA compatibility**: Most Qwen image LoRAs should work, but some may not be compatible.
- **Sampler steps**: Use 20-50 steps when not using Lightning LoRAs for a balance of speed and quality.
- **CFG scale**: A CFG of approximately 2.5 is suggested when not using Lightning LoRAs.
- **Shift value**: A range of 1.5 to 3.0 generally yields good results for image adjustments.

Why use this workflow
- **Speed and efficiency**: Quickly edit multiple images with fast generation times.
- **High-quality output**: Automatic image scaling and tuned settings improve output quality.
- **Versatile LoRA support**: Experiment with different LoRAs to achieve diverse editing styles.
- **Streamlined process**: From loading images to comparing results, the workflow offers a straightforward editing experience.
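The sizing behavior described above (inputs scaled to roughly 1M pixels, dimensions snapped to multiples of 112) can be sketched as follows. This is an illustrative helper only, not the workflow's actual node code, and the function name is hypothetical.

```python
# Minimal sketch (not the workflow's node code): choose output dimensions that
# preserve aspect ratio, target roughly 1M pixels, and snap each side to a
# multiple of 112, as recommended above for Qwen image models.
import math

def qwen_edit_dims(width: int, height: int,
                   target_pixels: int = 1_000_000,
                   multiple: int = 112) -> tuple[int, int]:
    """Return (new_width, new_height) near target_pixels, rounded to `multiple`."""
    scale = math.sqrt(target_pixels / (width * height))
    new_w = max(multiple, round(width * scale / multiple) * multiple)
    new_h = max(multiple, round(height * scale / multiple) * multiple)
    return new_w, new_h

if __name__ == "__main__":
    # Example: a 1920x1080 input lands near 1M pixels.
    print(qwen_edit_dims(1920, 1080))  # (1344, 784)
```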

Screenshot of the free ComfyUI workflow for OmniGen2. This multimodal AI enables advanced, instruction-based image editing, letting you use complex text commands for precise control over your creations.

OmniGen2: Image Edit

ComfyUI Workflow: OmniGen2 for Unified Multimodal Generation

OmniGen2 is a ComfyUI workflow built on a powerful and efficient unified multimodal generative model. With a total parameter count of about 7B (3B for text, 4B for image), it features an innovative dual-path Transformer architecture with independent text autoregressive and image diffusion models. This design decouples the parameters and allows specialized optimization, supporting a wide range of visual tasks from understanding to generation and editing.

What makes OmniGen2 special
- **Unified multimodal capabilities**: Seamlessly integrates visual understanding, high-fidelity text-to-image generation, and advanced instruction-guided image editing.
- **Advanced image editing**: Performs complex, instruction-based image modifications, achieving strong performance among open-source models.
- **Contextual generation**: Processes and combines diverse inputs, including people, reference objects, and scenes, to produce novel and coherent visual outputs.
- **High visual quality**: Produces visually appealing images with excellent detail preservation.
- **Integrated text generation**: Capable of rendering clear and legible text within images.

How it works
- **Dual-path architecture**: Pairs a Qwen 2.5 VL (3B) text encoder with an independent diffusion Transformer (4B); see the conceptual sketch below.
- **Parameter decoupling**: Text generation and image generation are optimized independently, avoiding negative interactions between the two.
- **Omni-RoPE position encoding**: Supports multi-image spatial positioning and differentiation of identities.
- **Comprehensive understanding**: Enables complex interpretation of both text prompts and existing image content.

Why use this workflow
- **Versatility**: A single unified architecture supports a broad spectrum of image generation and editing tasks.
- **Optimized performance**: Independent model components allow specialized optimization and improved output quality.
- **Precise control**: Offers fine-grained control over image generation and editing through detailed instructions.
- **Leading capabilities**: Delivers state-of-the-art results for instruction-guided image editing among open-source models.

Use cases
- **Creative content creation**: Generate detailed and coherent images from textual descriptions.
- **Advanced visual editing**: Modify images with specific instructions, enabling complex alterations.
- **Scene composition**: Combine various elements to construct new visual scenes and narratives.
- **Graphic design**: Create images that require integrated, legible text elements.
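As a rough illustration of the dual-path design described above, the sketch below shows a text path producing conditioning that a separate image diffusion path consumes, so the two parameter sets stay decoupled. All class and function names are hypothetical placeholders, not OmniGen2's real API or ComfyUI node names, and the bodies are stubs that only mimic the data flow.

```python
# Conceptual sketch only -- illustrative names, not OmniGen2's real code.
# It mirrors the dual-path idea: a text/vision encoder path produces
# conditioning, and an independent image diffusion path consumes it.
from dataclasses import dataclass
from typing import List

@dataclass
class Conditioning:
    """Hidden states handed from the text path to the image path."""
    tokens: List[str]
    hidden: List[float]  # placeholder for encoder hidden states

class TextPath:
    """Stands in for the Qwen 2.5 VL (~3B) text/vision encoder."""
    def encode(self, prompt: str, reference_images: List[str]) -> Conditioning:
        tokens = prompt.split()
        # Real model: multimodal hidden states; here just a dummy vector.
        return Conditioning(tokens=tokens, hidden=[float(len(t)) for t in tokens])

class ImageDiffusionPath:
    """Stands in for the ~4B diffusion Transformer that denoises latents."""
    def generate(self, cond: Conditioning, steps: int = 30) -> str:
        # Real model: iterative denoising conditioned on cond.hidden.
        return f"latent denoised over {steps} steps, cond dim={len(cond.hidden)}"

def edit_image(prompt: str, images: List[str]) -> str:
    text_path = TextPath()              # optimized independently ...
    image_path = ImageDiffusionPath()   # ... from the diffusion path
    return image_path.generate(text_path.encode(prompt, images))

if __name__ == "__main__":
    print(edit_image("replace the background with a beach at sunset", ["input.png"]))
```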


HiDream E1.1: Image Edit

ComfyUI Workflow: HiDream E1.1 for Simple Image Super-Resolution

HiDream E1.1 is a ComfyUI workflow designed for **super-resolution tasks, enhancing image quality and detail** from low-resolution inputs. It offers a straightforward and efficient way to generate high-definition outputs without complex configuration.

What makes HiDream E1.1 special
- **High-definition output**: Directly generates improved images from low-resolution sources.
- **User-friendly**: Simple workflow suitable for all users, requiring minimal setup.
- **Artistic style preservation**: Restores details, reduces noise, and retains the original artistic style, particularly for anime and illustrations.
- **Flexible integration**: Can be combined with other ComfyUI nodes for more complex image-processing workflows.

How it works
- **Load input**: Load the low-resolution image with the "LoadImage" node.
- **Model inference**: The image connects to the "HiDream-E1" model node for super-resolution processing.
- **Save output**: The processed high-definition image is saved via the "SaveImage" node.

Quick start in ComfyUI
- **Inputs**: A low-resolution image to enhance.
- **Load workflow**: Open the HiDream E1.1 ComfyUI graph.
- **Connect nodes**: Link your `LoadImage` node to the `HiDream-E1` model, then connect the model's output to a `SaveImage` node (see the API sketch below the references).
- **Generate**: Run the inference to produce the enhanced, high-definition image.

Why use this workflow?
- **Streamlined enhancement**: A capable image-enhancement pipeline that requires no advanced technical knowledge.
- **Quality restoration**: Ideal for improving clarity and detail, especially in stylized content.
- **Creator support**: A robust tool for creators who need to upscale and refine visual assets.

Use cases
- **Anime and illustration enhancement**: Improve resolution and detail while preserving the artwork's unique characteristics.
- **General image upscaling**: Turn low-resolution photos or graphics into higher-quality versions.
- **Integration into larger pipelines**: Combine with other nodes for advanced creative or production workflows.

References
- [https://github.com/HiDream-ai/HiDream-E1](https://github.com/HiDream-ai/HiDream-E1)
- [https://huggingface.co/HiDream-ai/HiDream-E1-1](https://huggingface.co/HiDream-ai/HiDream-E1-1)
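For reference, here is a minimal sketch of queuing such a three-node graph through ComfyUI's HTTP API. The `HiDream-E1` class_type and its input names are taken from the description above and are assumptions; verify the actual node class and input names in your installation before relying on this.

```python
# A sketch of queuing the LoadImage -> HiDream-E1 -> SaveImage graph via
# ComfyUI's local HTTP API. The "HiDream-E1" node and its inputs are assumed
# from the description above and may differ in practice.
import json
import urllib.request

COMFYUI_URL = "http://127.0.0.1:8188/prompt"  # default local ComfyUI server

graph = {
    "1": {  # load the low-resolution input (file must be in ComfyUI's input folder)
        "class_type": "LoadImage",
        "inputs": {"image": "low_res_input.png"},
    },
    "2": {  # hypothetical HiDream-E1 node; real input names may differ
        "class_type": "HiDream-E1",
        "inputs": {"image": ["1", 0]},
    },
    "3": {  # save the enhanced result
        "class_type": "SaveImage",
        "inputs": {"images": ["2", 0], "filename_prefix": "hidream_e1_1"},
    },
}

req = urllib.request.Request(
    COMFYUI_URL,
    data=json.dumps({"prompt": graph}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # queue confirmation containing a prompt_id
```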


AI Clothes Remover -- Clothing Editor

ComfyUI Workflow: AI Clothes Remover -- Clothing Editor

This ComfyUI workflow is designed to edit images by removing clothing from a person. It can process both uploaded real photos and AI-generated images, providing a structured approach to achieve a clothes-removed effect.

What it does
- **Clothing removal**: Systematically removes apparel from a person in an image.
- **Flexible input**: Works with user-uploaded images or AI-generated subjects.
- **Pose and effect enhancement**: Uses prompt words to refine the final pose and the visual outcome of the clothing removal.

How it works
- **Initial segmentation**: A segmentation model isolates the person from the background and delineates clothing and body areas, creating precise masks.
- **Targeted removal**: Flux and redraw nodes are applied to these generated masks to perform the clothing removal.
- **Prompt conditioning**: Optional prompt inputs help guide and enhance the pose and the overall visual effect of the removal process.

Quick start in ComfyUI
- **Step 1: Load Image**: Begin by uploading the image of the person you wish to edit.
- **Step 2: Declare Modified Part**: Use the BBOX and GDino nodes to specify the exact clothing areas intended for removal. You can declare multiple parts if needed.
- **Step 3: Input Prompt (Optional)**: Provide a prompt only if the results deviate from expectations or require specific artistic control.
- **Step 4: Get Image**: Run the workflow to generate the final image with the clothing removed.

Recommended usage
- **Precise selection**: Carefully define the parts to be removed in the BBOX and GDino nodes for accurate results.
- **Minimal prompting**: Prompts are generally not required unless fine-tuning or correcting unexpected outcomes is necessary.

Why use this workflow
- **Automated process**: Streamlines the complex task of clothing removal using a sequence of AI models.
- **Controlled editing**: Allows specific declaration of areas to be modified, offering precise control over the removal.
- **Refinement capability**: Offers an optional prompt input for further control over the generated image's pose and aesthetic.

Use cases
- **Artistic expression**: Create conceptual or artistic images exploring human form.
- **Character design**: Modify character appearances for various creative projects.
- **Privacy-safe rendering**: Generate altered versions of images for specific visual studies without original attire.
