Flux 2 is finally out. When the Flux 2 release date landed, the first thing I wanted was a clean Flux2 AI ComfyUI workflow that I could run every day without touching the graph again.
So I built four simple workflows instead of one monster graph:
- Text to image with optional auto prompt
- Single image edit
- Two image combine
- Multi reference graph that can use up to ten images as input
In this article I will walk through the whole Flux 2 ComfyUI workflow setup the way I actually use it:
- Which Flux 2 Dev model files I downloaded
- Where I placed them in ComfyUI
- How I switch between FP8 and Q4 GGUF
- How I use auto prompt from an image so I barely type prompts now
- How I mix up to ten references into one final Fusion image
If you have ever searched for things like Flux 2 dev, Flux 2 model download, Flux 2 open source or even weird stuff like flux2, fluxus, flux 250, flux 20mg, flux 2382, flux 2d, flux 2132 and ended up here, you are in the right place. This page is about Flux2 AI image generation inside ComfyUI, not medicine, not audio plugins, and not a FLUX 2 Smart trainer bike stand.
Files you need to download for this Flux 2 ComfyUI workflow
Before you open any of my graphs, you need three main model files and one optional GGUF model if your GPU VRAM is small. I will keep it short and practical so you can double check your own setup.
1. Flux 2 Dev main model
Flux 2 Dev is heavy. There are a couple of builds.
When I grabbed the full Flux 2 Dev model, the sizes looked roughly like this:
- flux2-dev.safetensors - full build, around 64 GB
- flux2_dev_fp8mixed.safetensors - lighter FP8 mixed build, around 34 GB
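If you prefer to script the download instead of clicking through a browser, here is a minimal sketch using huggingface_hub. The repo id below is my assumption for where the official weights live (the repo is gated, so log in first), and the FP8 mixed build may sit in a different repo, so adjust both before running.

```python
# Minimal download sketch with huggingface_hub (pip install huggingface_hub).
# Repo id is an assumption for the official gated repo; log in with
# `huggingface-cli login` first and swap in whichever source you actually use.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="black-forest-labs/FLUX.2-dev",       # assumed official repo, gated
    filename="flux2-dev.safetensors",             # full ~64 GB build from the list above
    local_dir="ComfyUI/models/diffusion_models",  # where the workflows expect the model
)
```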
If I load the full Flux 2 Dev model plus the text encoder on my RTX 5090 with 32 GB VRAM, a single 2K image can take around two and a half minutes. It works, but it feels more like a benchmark than something I want to use every day.
So I moved to the FP8 and GGUF options.
I keep both FP8 and Q4 at hand, so I can switch based on what I am doing that day.
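If you want that switch as a quick rule of thumb in code, here is a small sketch that reads total VRAM through PyTorch and picks a build. The 24 GB cut-off and the Q4 GGUF filename are my own assumptions, not official guidance, so tune both for your card.

```python
# Rough heuristic for picking a Flux 2 Dev build based on total VRAM.
# The 24 GB threshold and the Q4 filename are assumptions, not official numbers.
import torch

def pick_flux2_build() -> str:
    if not torch.cuda.is_available():
        return "flux2-dev-Q4_K_M.gguf"                  # assumed Q4 name, safest fallback
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    if vram_gb >= 24:
        return "flux2_dev_fp8mixed.safetensors"         # FP8 daily driver on bigger cards
    return "flux2-dev-Q4_K_M.gguf"                      # Q4 GGUF for smaller cards

print(pick_flux2_build())
```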
FP8 Flux 2 Dev
I use this as my daily driver now. It is much more friendly for consumer cards and still gives clean images.

GGUF quantized Flux 2 Dev

https://huggingface.co/orabazes/FLUX.2-dev-GGUF/tree/main
There are GGUF versions in Q3, Q4, Q5, Q6, Q8.
From my tests, Q4 is the sweet spot: Q3 drops quality too much, while Q5 and above start to eat more memory again.
With Q4 GGUF I get quality that is very close to FP8 but with much lower VRAM use. So if your GPU is small, start with Flux 2 Dev Q4 GGUF.
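If you would rather script that download too, something like the sketch below works. The exact Q4 filename inside orabazes/FLUX.2-dev-GGUF is an assumption on my part, so copy the real name from the repo's file list before running it.

```python
# Sketch for pulling the Q4 GGUF build from the repo linked above.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="orabazes/FLUX.2-dev-GGUF",
    filename="flux2-dev-Q4_K_M.gguf",             # assumed Q4 name, check the repo file list
    local_dir="ComfyUI/models/diffusion_models",  # same folder as the safetensors builds
)
```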
2. Flux 2 text encoder
Flux 2 does not use the old CLIP or T5 style text encoders. It comes with a Mistral based text encoder, and you will see two files for it:
- mistral_3_small_flux2_bf16.safetensors - BF16 build
- mistral_3_small_flux2_fp8.safetensors - FP8 build
How I use them:
- If you are on low VRAM, use the FP8 text encoder
- If you have a strong GPU, you can try the BF16 text encoder as well
- I keep both files in place and just switch the node inside ComfyUI when I test
3. Flux 2 VAE file
You also need the Flux 2 VAE.
Look for a file named something close to flux2-vae.safetensors. That is the one I am using, and it keeps the final colors stable across all four workflows.
4. Folder locations in ComfyUI
Here is exactly where I drop my files inside the ComfyUI install:
- The Flux 2 main model and GGUF models go into ComfyUI/models/diffusion_models
- The Flux 2 VAE goes into ComfyUI/models/vae
- The Flux 2 text encoder goes into ComfyUI/models/text_encoders
Once these three folders are correct, all of my flux2 comfyui workflow graphs open without any red error nodes.
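If you want to sanity check that layout from a terminal before opening the graphs, here is a tiny sketch that reports which files are missing. The filenames mirror the ones mentioned in this article; swap in whichever builds you actually downloaded.

```python
# Quick check that the Flux 2 files sit in the folders the workflows expect.
from pathlib import Path

root = Path("ComfyUI/models")
expected = {
    "diffusion_models": "flux2_dev_fp8mixed.safetensors",   # or the full / GGUF build
    "text_encoders": "mistral_3_small_flux2_fp8.safetensors",
    "vae": "flux2-vae.safetensors",
}

for folder, name in expected.items():
    path = root / folder / name
    print(f"{'OK' if path.exists() else 'MISSING'}  {path}")
```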
If you came here by searching “Flux 2 model download” or “Flux 2 dev FP8 GGUF” and saw random stuff like Flux 2 ghost or Flux 2 Portronics in the results, just ignore those. Those usually belong to other products. Here we only care about the image model and how it runs inside ComfyUI.


