Blog Post 1: HappyHorse 1.0 Announcement
ComfyUI Adds Native Support for HappyHorse 1.0 Cinematic Video Generation Model
April 27, 2026
ComfyUI has rolled out native support for HappyHorse 1.0, Alibaba’s cinematic video generation model optimized for high-quality storytelling and production-ready creative workflows. Per the official ComfyUI Blog announcement, the integration unlocks multi-modal video creation, cinematic aesthetic controls, and advanced editing tools for ComfyUI users. ComfyUITemplates.com, an independent directory that curates and showcases ComfyUI workflows and templates, notes that this update adds four new pre-built video workflows to ComfyUI’s native Template Library, simplifying setup for video creators.
What ComfyUI Is Announcing
The ComfyUI team has announced native support for HappyHorse 1.0, a cinematic video generation model developed by Alibaba, in the latest ComfyUI release. HappyHorse 1.0 is built for high-quality storytelling video and production-ready creative workflows, with a focus on strong aesthetics, multi-shot sequencing, and combined generation and editing capabilities. The model is positioned as a fit for creators building ads, e-commerce visuals, short-form content, and social marketing videos.
To access the new integration, users can either update to the latest version of ComfyUI or use Comfy Cloud. Pre-built HappyHorse workflows are available in ComfyUI’s native Template Library by searching for “HappyHorse”; users only need to update prompts or input images, then run the workflow. Four ready-to-use workflows are included: Image to Video, Text to Video, Video Edit, and Reference to Video.
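The “update prompts or input images, then run” step can also be done programmatically by editing a template’s API-format JSON (exported via ComfyUI’s “Save (API Format)” option in dev mode). The sketch below is a minimal illustration, not the actual HappyHorse template schema: the node ID, class name, and the assumption that prompt text lives under `inputs["text"]` are hypothetical stand-ins.

```python
import json

def set_prompt_text(workflow: dict, node_id: str, new_text: str) -> dict:
    """Return a copy of an API-format workflow with one node's text input replaced.

    Assumes the node stores its prompt under inputs["text"], which is common
    for text-encoder nodes but is not guaranteed for every template.
    """
    updated = json.loads(json.dumps(workflow))  # deep copy via JSON round-trip
    updated[node_id]["inputs"]["text"] = new_text
    return updated

# Hypothetical fragment of an exported "HappyHorse Text to Video" template.
template = {
    "6": {"class_type": "CLIPTextEncode", "inputs": {"text": "placeholder prompt"}},
}
customized = set_prompt_text(
    template, "6", "a horse galloping through fog, 35mm, shallow depth of field"
)
```

A batch tool could apply this same edit across many prompts to render variants of one template without touching the graph in the UI.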
Key Features in This Release
The HappyHorse 1.0 integration brings four core capability groups to ComfyUI, as outlined in the ComfyUI Blog post:
- Multi-modal creation paths: Supports Text-to-Video (T2V), Image-to-Video (I2V), and Subject-to-Video (S2V) workflows, letting creators start video generation from scratch, existing images, or specific subjects.
- Cinematic aesthetics: Built-in tools for wide-aperture framing, shallow depth of field, refined texture, and atmospheric mood, reducing the need for post-processing to achieve professional video looks.
- Multi-shot output: Generates clips up to 15 seconds long at 1080p resolution, with a focus on maintaining consistency across cuts in multi-shot sequences.
- Editing workflows: Includes Video-to-Video (V2V) and Subject Video-to-Video (SV2V) tools for transforming existing footage or replacing/inserting subjects while preserving original motion and composition.
Why This Matters for Workflow Creators
For ComfyUI users building video-focused workflows, the HappyHorse 1.0 integration removes the need to manually configure third-party model integrations, as native support and pre-built templates are included out of the box. Creators working on ads, e-commerce content, or social marketing videos gain access to production-ready tools for cinematic storytelling without specialized video editing expertise.
Template authors can now build and share HappyHorse 1.0-based workflows on directories like ComfyUITemplates.com, which helps other users discover pre-optimized setups for specific use cases (e.g., short-form social content, product demos). The pre-built workflows in ComfyUI’s Template Library also lower the barrier to entry for new ComfyUI users testing video generation for the first time.
How This Affects ComfyUI Templates and Apps
ComfyUI’s native Template Library now includes four official HappyHorse 1.0 workflows, which ComfyUITemplates.com catalogs alongside community-created templates for easy discovery. Template authors can iterate on these official workflows to build specialized variants (e.g., workflows optimized for 9:16 social media clips, or product demo sequences) and list them on ComfyUITemplates.com to reach more users.
For ComfyUI app builders, the native integration means HappyHorse 1.0’s capabilities can be embedded directly into custom ComfyUI-based tools without additional model setup, streamlining development of video generation apps for niche use cases.
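One common pattern for app builders is to drive a locally running ComfyUI instance through its HTTP API: POST an API-format workflow to the `/prompt` endpoint, then poll `/history/<prompt_id>` for outputs. The sketch below assumes ComfyUI’s default local address (`127.0.0.1:8188`) and is a generic illustration, not a HappyHorse-specific integration; only the payload construction runs here, since queuing requires a live server.

```python
import json
import urllib.request
import uuid

COMFY_URL = "http://127.0.0.1:8188"  # default local ComfyUI address (assumption)

def build_payload(workflow: dict, client_id: str) -> bytes:
    """Wrap an API-format workflow in the JSON body the /prompt endpoint expects."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_workflow(workflow: dict) -> dict:
    """POST a workflow to a running ComfyUI server; the response includes a
    prompt_id that can be used to poll /history for generated outputs."""
    req = urllib.request.Request(
        f"{COMFY_URL}/prompt",
        data=build_payload(workflow, uuid.uuid4().hex),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# No server needed for this part: just inspect the request body we would send.
payload = build_payload({"1": {"class_type": "ExampleNode", "inputs": {}}}, "demo-client")
```

Calling `queue_workflow(...)` with an exported HappyHorse template (and the server running) would queue the job; results then appear under the returned `prompt_id` in `/history`.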
Blog Post 2: Wan2.1 Video Model Announcement
ComfyUI Adds Native Support for Wan2.1 Video Model With Consumer GPU Compatibility
February 26, 2025
ComfyUI now supports the Wan2.1 video generation model natively, per a February 2025 announcement from the ComfyUI team. The integration includes 14B and 1.3B parameter workflow options, both optimized to run on consumer-grade GPUs. ComfyUITemplates.com, a directory for curated ComfyUI workflows, notes that this update makes high-quality video generation accessible to creators without access to enterprise-grade hardware.
What ComfyUI Is Announcing
The ComfyUI team has announced native support for the Wan2.1 video generation model, with the update shared via the official ComfyUI Blog on February 26, 2025. The integration includes two ready-to-use workflow options: 14B and 1.3B parameter versions of the Wan2.1 model. Per the announcement, both workflows are optimized to run on consumer-grade GPUs, lowering hardware barriers for video generation.
Key Features in This Release
The Wan2.1 integration brings the following to ComfyUI users, per the announcement:
- Two pre-built video generation workflows: 14B (high-capacity) and 1.3B (lightweight) parameter versions
- Optimized for consumer GPUs, making high-quality video generation accessible to users without enterprise hardware
- Native support with no third-party configuration required
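Choosing between the two variants typically comes down to available VRAM. The helper below is a rough sketch, not official guidance: the announcement does not publish exact memory requirements, so the thresholds here are illustrative assumptions.

```python
def pick_wan21_variant(vram_gb: float) -> str:
    """Suggest a Wan2.1 workflow variant for a given amount of GPU memory.

    The 14B variant is the high-capacity option; the 1.3B variant is the
    lightweight one aimed at consumer cards. The GB cutoffs below are
    illustrative guesses, not published requirements.
    """
    if vram_gb >= 16:
        return "Wan2.1 14B"
    if vram_gb >= 6:
        return "Wan2.1 1.3B"
    return "insufficient VRAM for either workflow (consider cloud execution)"

print(pick_wan21_variant(24))  # "Wan2.1 14B" on a typical 24 GB card
```

A template author shipping both variants could use a check like this to default new users to the workflow their hardware can actually run.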
Why This Matters for Workflow Creators
For ComfyUI users with consumer-grade GPUs, the Wan2.1 integration removes the hardware barrier to high-quality video generation, as both included workflows are optimized to run on non-enterprise hardware. Creators who previously could not test video generation due to hardware limitations can now access native Wan2.1 workflows immediately.
Template authors can build custom Wan2.1 workflows for niche use cases (e.g., low-light video generation, short-form content) and list them on ComfyUITemplates.com to reach other consumer GPU users.
How This Affects ComfyUI Templates and Apps
The 14B and 1.3B Wan2.1 workflows are now available for ComfyUI users to download and customize, with many expected to be shared on ComfyUITemplates.com’s directory of community-created workflows. Template authors can iterate on the official workflows to create specialized variants for specific hardware setups or use cases, then list them on the directory for discovery.
For app builders, native Wan2.1 support simplifies embedding video generation capabilities into custom ComfyUI-based tools for consumer hardware users.
Blog Post 3: WAN2.2 Animate & Qwen-Image-Edit 2509 Announcement
ComfyUI Adds Native Support for WAN2.2 Animate, Qwen-Image-Edit 2509 for Advanced Pose Reference and Image Editing
September 23, 2025
ComfyUI now supports WAN2.2 Animate and Qwen-Image-Edit 2509 natively, per a September 2025 announcement from the ComfyUI team. The update brings upgraded pose reference capabilities and image editing tools to ComfyUI users. ComfyUITemplates.com, an independent directory of ComfyUI workflows and templates, notes that these integrations expand options for creators working on character animation and detailed image editing projects.
What ComfyUI Is Announcing
The ComfyUI team announced native support for two new models in September 2025: WAN2.2 Animate and Qwen-Image-Edit 2509. Per the official blog post, the update brings upgraded pose reference functionality and advanced image editing capabilities to the ComfyUI platform.
Key Features in This Release
The dual integration includes the following core capabilities:
- WAN2.2 Animate: Advanced pose reference tools for character animation workflows
- Qwen-Image-Edit 2509: Upgraded image editing functionality for detailed, production-ready image adjustments
- Native support for both models, with no manual third-party configuration required
Why This Matters for Workflow Creators
For creators working on character animation, WAN2.2 Animate’s improved pose reference tools reduce the time spent aligning character movements across frames. Qwen-Image-Edit 2509’s advanced editing capabilities benefit users creating detailed product images, concept art, or marketing visuals directly in ComfyUI.
Template authors can build specialized workflows for pose-based animation or detailed image editing and list them on ComfyUITemplates.com to reach users seeking these niche tools.
How This Affects ComfyUI Templates and Apps
New WAN2.2 Animate and Qwen-Image-Edit 2509 workflows are expected to be added to ComfyUI’s ecosystem, with many community-created variants cataloged on ComfyUITemplates.com. Template authors can create workflows optimized for specific use cases (e.g., character rigging, product image retouching) and share them via the directory.
App builders can embed these advanced pose reference and image editing tools into custom ComfyUI-based apps for creative teams.
Blog Post 4: Native Partner Nodes & New Brand Announcement
ComfyUI Launches Native Partner Nodes Program, Unveils New Brand Identity With 11 Models and 65 Nodes
May 6, 2025
The ComfyUI team has announced the launch of ComfyUI Native Partner Nodes, alongside a new brand identity for the platform, per a May 2025 announcement. The update bundles 11 models and 65 partner nodes into the ComfyUI platform in a single release. ComfyUITemplates.com, a directory for ComfyUI workflows and templates, notes that the new partner nodes expand the range of pre-built tools available to creators without manual configuration.
What ComfyUI Is Announcing
The ComfyUI team shared two major updates on May 6, 2025: the launch of ComfyUI Native Partner Nodes, and a new brand identity for the ComfyUI platform. The Native Partner Nodes release includes 11 models and 65 nodes from partner organizations, all integrated directly into ComfyUI at once. Per the announcement, the new brand identity will roll out across all official ComfyUI channels.
Key Features in This Release
The update includes the following core additions:
- ComfyUI Native Partner Nodes: 65 nodes from partner organizations covering 11 models, all natively integrated
- New ComfyUI brand identity, unified across official platforms and tools
- One-click access to all partner nodes and models, no third-party setup required
Why This Matters for Workflow Creators
For ComfyUI users, the Native Partner Nodes eliminate the need to manually install and configure third-party nodes from partner organizations, as 65 nodes across 11 models are now built into the platform. This reduces setup time for creators working with partner tools, and ensures compatibility with the latest ComfyUI releases.
Template authors can build workflows using the new partner nodes and list them on ComfyUITemplates.com, helping other users discover optimized setups for partner models.
How This Affects ComfyUI Templates and Apps
The 65 new partner nodes add a wide range of pre-built tools to the ComfyUI ecosystem, with many new workflows expected to be shared on ComfyUITemplates.com’s directory. Template authors can create workflows for specific partner models (e.g., specialized image generation, video editing nodes) and list them on the directory for discovery.
App builders can leverage the native partner nodes to embed a wider range of tools into custom ComfyUI-based apps without additional integration work.
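An app builder who wants to confirm which partner nodes a given ComfyUI instance exposes can query the server’s standard `/object_info` endpoint, which lists every registered node class, and filter the result. The filtering helper and the sample registry below are illustrative; real `/object_info` entries carry far more metadata, and `PartnerVideoNode` is a hypothetical name.

```python
import json
import urllib.request

def find_nodes(object_info: dict, keyword: str) -> list:
    """Return node class names whose name or category mentions a keyword
    (case-insensitive)."""
    kw = keyword.lower()
    return sorted(
        name
        for name, meta in object_info.items()
        if kw in name.lower() or kw in str(meta.get("category", "")).lower()
    )

def fetch_object_info(base_url: str = "http://127.0.0.1:8188") -> dict:
    """Fetch the node registry from a running ComfyUI server."""
    with urllib.request.urlopen(f"{base_url}/object_info") as resp:
        return json.loads(resp.read())

# Illustrative sample of the registry shape returned by /object_info.
sample = {
    "KSampler": {"category": "sampling"},
    "PartnerVideoNode": {"category": "partner/video"},  # hypothetical partner node
}
print(find_nodes(sample, "partner"))  # ["PartnerVideoNode"]
```

In a real app, `find_nodes(fetch_object_info(), "partner")` (against a live server) would let the tool degrade gracefully when an expected partner node is absent.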