    AI News

    Video to Video Editing with Genmo Mochi via ComfyUI

By AI Ninja · November 5 · 1 Min Read

Genmo Mochi 1 is another tool you can use to generate AI videos from text prompts. As it turns out, you can now also use it for video-to-video editing. As logtd explains, a set of ComfyUI nodes has now been published that uses Mochi to edit videos. Here is how it works:

    the input video is inverted into noise and then this noise is used to resample the video with the target prompt.

These nodes are built to work with the ComfyUI-MochiWrapper nodes. Install the wrapper, clone this repository into your custom_nodes directory, and you are set. The Mochi Unsampler and Resampler nodes handle the conversion: the Unsampler turns the video into noise, and the Resampler turns that noise back into video guided by the target prompt.
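The repository's node internals aren't spelled out in the post, so here is just a minimal sketch of the rf-inversion idea behind them, assuming a rectified-flow velocity model: walk the flow ODE backwards to turn the source video's latents into noise, then walk it forwards again under the target prompt. The `unsample`, `resample`, and `velocity_model` names and the plain Euler stepping are illustrative assumptions, not the actual ComfyUI node API.

```python
import torch

def unsample(latents, velocity_model, source_cond, steps=30):
    """Invert clean video latents into noise by running the rectified-flow
    ODE in reverse with Euler steps (t = 0 is clean video, t = 1 is noise).
    Conceptual sketch only; the real nodes wrap the MochiWrapper sampler."""
    x = latents
    for i in range(steps):
        t = torch.full((x.shape[0],), i / steps)   # current time for each sample
        v = velocity_model(x, t, source_cond)      # predicted velocity dx/dt
        x = x + v / steps                          # step toward t = 1 (noise)
    return x

def resample(noise, velocity_model, target_cond, steps=30):
    """Integrate the same ODE forward from the recovered noise, but
    conditioned on the *target* prompt, yielding the edited video latents."""
    x = noise
    for i in range(steps):
        t = torch.full((x.shape[0],), 1.0 - i / steps)
        v = velocity_model(x, t, target_cond)
        x = x - v / steps                          # step back toward t = 0 (clean)
    return x

# Hypothetical usage: invert with the source prompt, regenerate with a new one.
# edited = resample(unsample(video_latents, model, source_prompt), model, target_prompt)
```

Because the noise recovered by inversion still encodes the source video's structure, resampling it with a new prompt changes appearance while keeping the motion and layout roughly intact.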

Just published a set of ComfyUI nodes to use Genmo’s Mochi to edit videos. https://t.co/vz3rjMeSgA

    It uses rf-inversion, the gift that keeps on giving. pic.twitter.com/S0voGRCXZL

    — logtd (@logtdx) November 2, 2024

    [HT]

    video to video