The LoRA works well at strengths between 0.4 and 1.2.
I used a strength of 1.0 with Flux Dev GGUF Q4 for the showcase images.
If you're running on a low-end machine with 6-8 GB of VRAM, check out my guide:
https://civitai.com/articles/6846/running-flux-on-68-gb-vram-using-comfyui
All images in the Showcase Gallery were created using this workflow and the Flux Dev Q4 model on a machine with 8 GB of VRAM:
https://civitai.com/models/658639/super-simple-gguf-quantized-flux-lora-workflow
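If you'd rather script it outside ComfyUI, here is a rough diffusers sketch of the same idea: load a GGUF-quantized Flux Dev transformer, attach the LoRA at strength 1.0, and offload to keep VRAM low. The GGUF URL, LoRA path, and sampling settings below are illustrative placeholders, not part of the workflow linked above.

```python
# Rough diffusers sketch (not the ComfyUI workflow above): GGUF-quantized
# Flux Dev transformer + a LoRA applied at strength 1.0.
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

# Example Q4 GGUF checkpoint; swap in whichever quant you actually use.
gguf_url = "https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q4_K_S.gguf"

# Load only the transformer from the quantized GGUF file to keep VRAM low.
transformer = FluxTransformer2DModel.from_single_file(
    gguf_url,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # helps on 6-8 GB cards

# Placeholder LoRA path; 1.0 matches the strength used for the showcase images.
pipe.load_lora_weights("path/to/this_lora.safetensors", adapter_name="style")
pipe.set_adapters("style", adapter_weights=1.0)

image = pipe(
    "your prompt here",
    num_inference_steps=20,
    guidance_scale=3.5,
).images[0]
image.save("output.png")
```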