WAN 2.2 ComfyUI: Infinite Video on Low VRAM (GGUF Q5)

I built a simple ComfyUI Subgraph that does one thing really well. It keeps a shot running and lets you push the length as far as you want. No looping. No weird morphs. Mid-clip you can change the motion with a single line, and it stays clean. I run a Q5 GGUF model, so even a low-VRAM PC can handle it.

The short version

  • Unlimited length by stacking parts. Each part is a Subgraph block.
  • Clean cutover. Change motion halfway with one line, no jitter.
  • Low VRAM friendly. Use WAN 2.2 GGUF Q5.
  • Fixed cadence. 81 frames at 16 fps for every part.
  • Three samplers per part. 1 step without LoRA, then 3 and 3 with LoRA.
  • Per-part seeds. If only Part 4 looks off, randomize only that seed and re-run that block. Others stay exact.
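
The fixed cadence makes run time easy to plan: each Part is 81 frames at 16 fps, so 81 / 16 ≈ 5.1 seconds per Part, and a four-Part chain lands around 20 seconds.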

What changed for me

Earlier my graph was massive. Making a 40-second clip meant new groups with a pile of nodes. Subgraph fixed that. I packed each chunk as a clean block. Now I just duplicate a Part, hook it up, and move on.

Files you need

For the WAN 2.2 unlimited video workflow

Put these files in the right ComfyUI folders. A quick path-check sketch follows the list.

1. High-Noise GGUF model
Wan2.2-I2V-A14B-HighNoise-Q5_K_M.gguf
Folder: ComfyUI/models/diffusion_models
(you load it with the UNET GGUF Loader node)

2. Low-Noise GGUF model
Wan2.2-I2V-A14B-LowNoise-Q5_K_M.gguf
Folder: ComfyUI/models/diffusion_models

3. LoRA for Low Noise
wan2.2_i2v_lightx2v_4steps_lora_v1_low_noise.safetensors
Folder: ComfyUI/models/loras

4. LoRA for High Noise
wan2.2_i2v_lightx2v_4steps_lora_v1_high_noise.safetensors
Folder: ComfyUI/models/loras

Optional: stronger motion LoRA
PUSA WAN 2.2 V1 for 4-step motion
Folder: ComfyUI/models/loras
Use later if you want more motion. Keep it bypassed at first.
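
If you want to confirm everything landed in the right place before loading the workflow, here is a minimal sketch that only checks file paths. The folder layout and the COMFYUI_ROOT location are assumptions based on a default install, so adjust them for your setup.

```python
# Minimal path check for the files listed above.
# Assumption: a default ComfyUI install; some setups use models/unet
# instead of models/diffusion_models for GGUF UNET files.
from pathlib import Path

COMFYUI_ROOT = Path("ComfyUI")  # adjust to your install location

EXPECTED_FILES = {
    "models/diffusion_models": [
        "Wan2.2-I2V-A14B-HighNoise-Q5_K_M.gguf",
        "Wan2.2-I2V-A14B-LowNoise-Q5_K_M.gguf",
    ],
    "models/loras": [
        "wan2.2_i2v_lightx2v_4steps_lora_v1_low_noise.safetensors",
        "wan2.2_i2v_lightx2v_4steps_lora_v1_high_noise.safetensors",
    ],
}

for folder, filenames in EXPECTED_FILES.items():
    for name in filenames:
        path = COMFYUI_ROOT / folder / name
        status = "ok" if path.exists() else "MISSING"
        print(f"[{status}] {path}")
```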

Setup that works on a low-VRAM PC

  1. Load the workflow in ComfyUI and upload one source image.
  2. Pick models inside each Part group
    • High-Noise WAN model
    • Low-Noise WAN 2.2 model in GGUF Q5
    • Text encoder
    • VAE
    • There are two LoRA slots, bypassed by default. Add a motion LoRA later if you want extra movement.
    • GGUF Q5 keeps memory in check while holding quality. It is a common sweet spot for low VRAM systems.
  3. Choose a safe resolution with Resolution Master
    • Preset WAN
    • Start at 720×720
    • Autofit on
    • If 1024×1024 gives an allocation error, stay at 720×720. Heavy source images might need sizes like 624×832.
  4. Set these two things once and do not change them
    • Frames: 81
    • FPS: 16
  5. Sampler steps
    • Sampler 1 without LoRA: 1 step
    • Sampler 2 with LoRA: 3 steps
    • Sampler 3 with LoRA: 3 steps
    • You can push the first one to 2 or 3 if your scene needs more stability. Same for the others if you want a little extra detail.
  6. Seeds
    • Keep seeds Fixed for Parts that look right
    • Set only the broken Part to Random and re-run just that group (the sketch after this list pulls these settings together)
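
Here is a minimal sketch of the settings above, gathered in one place. The names, structure, and seed values are my own illustration, not nodes or fields from the workflow; the point is that resolution, frames, fps, and steps are shared by every Part, while each Part keeps its own seed.

```python
# Illustration only: shared settings every Part uses, plus per-Part seeds.
SHARED = {
    "width": 720,        # start safe; go bigger only if VRAM allows
    "height": 720,
    "frames": 81,        # fixed for every Part
    "fps": 16,           # fixed for every Part
    "steps": (1, 3, 3),  # sampler 1 without LoRA, samplers 2 and 3 with LoRA
}

# Keep good Parts fixed, randomize only the one that looks off.
parts = {
    "Part 1": {"seed": 101, "mode": "fixed"},
    "Part 2": {"seed": 202, "mode": "fixed"},
    "Part 3": {"seed": 303, "mode": "fixed"},
    "Part 4": {"seed": 404, "mode": "randomize"},  # only this Part gets re-rolled
}

for name, cfg in parts.items():
    print(name, cfg["mode"], cfg["seed"], SHARED["steps"])
```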

Prompts that keep motion steady

Video 1 Prompt
Winter village. Soft snow. Camera shows the boy from head to boots. He smiles. He holds a small axe and points it.

Video 2 Prompt
He starts running. Small puffs of snow. Axe down by his side. Cozy houses pass in the background.

Video 3 Prompt
He runs faster. Gentle camera move for a cinematic feel.

Video 4 Prompt
A small wooden stump appears and he sets the axe on the stump safely.

The handoff between parts is smooth because the timing and steps stay the same. I just change one line for the new motion.

How to extend to unlimited time

  • Copy the whole Part group.
  • Connect the matching hooks
    Setup to Setup
    No-LoRA to No-LoRA
    Width to Width
    Height to Height
    Num Frames to Num Frames
    First Image to Start Image
    Video to Video
  • Add the new Part to Combine Video.
  • Copy the Global Seed if you want the same look, or switch it to Random.
  • Add one line to the prompt for the new beat. For example, he picks up the axe and walks again. Run it (see the sketch after this list).
  • As clips get very long in a single pass you might see small color drift. I keep one run to about 20 to 25 seconds, then add more Parts in the next pass.
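
A minimal sketch of the stacking idea under the same cadence: each new Part is one more prompt line, and total length just grows with the Part count. The prompt list and the duration math are illustrations of the workflow above, not the Subgraph itself.

```python
# Illustration: one prompt line per Part, fixed cadence, total length grows
# with the Part count. Adding a Part is appending one line and wiring hooks.
FRAMES, FPS = 81, 16

prompts = [
    "Winter village. Soft snow. The boy smiles and points a small axe.",
    "He starts running. Small puffs of snow. Axe down by his side.",
    "He runs faster. Gentle camera move for a cinematic feel.",
    "A small wooden stump appears and he sets the axe on the stump safely.",
    "He picks up the axe and walks again.",  # the new beat for the next Part
]

total_seconds = len(prompts) * FRAMES / FPS
print(f"{len(prompts)} Parts ≈ {total_seconds:.1f} seconds of video")
```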

Second test that taught me a quick fix

The scene was a sci-fi desert with two soldiers. My source image was 1176×1164 and blew past my card's VRAM. I dropped to a safe size: first 720×1080, then 624×832 to be extra safe. Kept steps at 1, 3, 3. Wrote short action lines and a small camera move in each Part. Looked cinematic. Only Part 4 felt off, so I changed its prompt and seed. Re-ran only that Part. Done.

FAQ

Can I really keep adding Parts forever?
Yes. You can stack Parts to any length. Keep the same 81-frame cadence and connect the hooks correctly. If drift shows up after a long run, extend in new passes with fresh Parts.

Why Q5 GGUF?
Lower memory, decent quality. On small GPUs and even CPU-heavy runs, Q5 is a practical choice. Try other quants if your hardware is different.

Do I need both High-Noise and Low-Noise models?
For this workflow, yes. That pairing keeps the denoise path consistent across Parts.

Free Download

Verify your email once, then enter it to download the workflow instantly in one click. If your email isn't verified yet, you'll receive a verification email the first time you click.

By Esha

Studied Computer Science. Passionate about AI, ComfyUI workflows, and hands-on learning through trial and error. Creator of AIStudyNow — sharing tested workflows, tutorials, and real-world experiments.