
Are there any projects that allow for easy setup and hosting of Flux locally? Something similar to SD projects like InvokeAI or A1111?


The answer is it really depends on your hardware, but the nice thing is that you can split out the text encoder when using ComfyUI. On a 24 GB VRAM card I can run the Q8_0 GGUF version of flux-dev with the T5 FP16 text encoder. The Q8_0 quantization in particular has very little visual difference from the original FP16 weights. A 1024x1024 image takes about 15 seconds to generate.
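For reference, a roughly equivalent setup outside ComfyUI is only a few lines with the diffusers library (a sketch, not my exact setup; the checkpoint URL assumes city96's community GGUF quants on HuggingFace):

    import torch
    from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

    # Load only the transformer from a Q8_0 GGUF file; the T5 text encoder
    # stays at full precision and is pulled in by the full pipeline below.
    transformer = FluxTransformer2DModel.from_single_file(
        "https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q8_0.gguf",
        quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
        torch_dtype=torch.bfloat16,
    )
    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev",
        transformer=transformer,
        torch_dtype=torch.bfloat16,
    )
    pipe.enable_model_cpu_offload()  # keeps peak VRAM under control

    image = pipe("a red fox in the snow", height=1024, width=1024).images[0]
    image.save("flux-q8.png")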




Invoke is model agnostic, and supports Flux, including quantized versions.


Flux is weirder than older SD projects since it's extremely resource dependent and won't run on most hardware.


Doesn't take a lot of effort to get Flux dev/schnell to run on 3090s unquantized, but I agree that 24 GB is the consumer GPU memory limit and many cards have less than that. Flux runs great on modern Mac hardware as well, if you have at least 32 GB of unified memory.


I'm running Flux dev fine on a 3080 10 GB, unquantised; on Windows the NVIDIA drivers have a fallback that lets VRAM spill over into system RAM. It runs a little slower, but it's not a deal-breaker, unlike NVIDIA's pricing and power requirements at the moment.


What are you using to run it? When I run Flux Dev on Windows using ComfyUI on a 4090 (24 GB), it sometimes crashes because it runs out of VRAM when I'm doing too much else at the same time.


Not a good reference for Windows -- I use HuggingFace APIs on cog/Docker deployments in Linux. I needed to set the `PYTORCH_NO_CUDA_MEMORY_CACHING=1` and `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` env vars to eliminate memory errors on the 3090s. When I run on the Mac there is enough memory not to require shenanigans. It runs approximately as fast as the 3090s, but the 3090s heat my basement and the Mac heats my face.
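If it helps, those can also be set from inside the script rather than on the docker command line, as long as it happens before torch initializes CUDA (a minimal sketch of the idea):

    import os

    # Must be set before the torch import / CUDA init to take effect;
    # same as passing them with -e on the docker command line.
    os.environ["PYTORCH_NO_CUDA_MEMORY_CACHING"] = "1"
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

    import torch
    print(torch.cuda.is_available())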


Really? I tried using it in ComfyUI on my Mac Studio, failed, went searching for answers and all I could find said that something something fp8 can't run on a Mac, so I moved on.


If you're looking for a prebuilt "no tinkering" solution, https://diffusionbee.com/ is an open-source app (GitHub link at the bottom of the page if you want to see the code) with a built-in button to import Flux models at the bottom of the home screen.


I usually don't want to comment on these, but: DiffusionBee's repo https://github.com/divamgupta/diffusionbee-stable-diffusion-... hasn't had any updates for 9 months, apart from regular binary releases. There is no source code available for their recent builds. I think it's a bit unfair to call it an open-source app at this point, given you're using a binary that's probably quite different from the repo.


Thanks, I'll take a look.


I should have qualified that I run Flux.1 dev and schnell on a Mac via HuggingFace and PyTorch, and am not knowledgeable about ComfyUI support for these models. The code required is pretty tiny though.
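Something along these lines (a sketch using diffusers on Apple Silicon; the prompt and filename are placeholders):

    import torch
    from diffusers import FluxPipeline

    # Flux.1 schnell via diffusers on the Apple "mps" backend.
    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-schnell",
        torch_dtype=torch.bfloat16,
    ).to("mps")

    image = pipe(
        "a watercolor of a lighthouse at dawn",
        num_inference_steps=4,  # schnell is distilled for ~4 steps
        guidance_scale=0.0,     # schnell doesn't use guidance
        height=1024,
        width=1024,
    ).images[0]
    image.save("flux-schnell-mac.png")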


People have Flux running on pretty much everything at this point, assuming you are comfortable waiting 3+ minutes for a 512x512 image.

I managed to get it running on an old computer with a 2060 Super, taking ~1.5 minutes per image. People are generating on 1080s too.


The GGUF quantisations do run on most recent hardware, albeit with increasingly concerning quality tradeoffs at the smaller sizes.


I haven't noticed any quality degradation with the 8-bit GGUF for Flux Dev, but I'm sure the smaller quantizations perform worse.


Using ComfyUI with the official Flux workflow is easy and works nicely. ComfyUI can also be driven via its API.
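For the API side, the gist is to export the workflow in API format and POST it to the server (a sketch; assumes ComfyUI running on its default port and a workflow saved as flux_workflow_api.json via "Save (API Format)" in the UI):

    import json
    import urllib.request

    with open("flux_workflow_api.json") as f:
        workflow = json.load(f)

    # Queue the workflow; the response includes a prompt_id you can
    # then look up via the /history endpoint to fetch finished images.
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=json.dumps({"prompt": workflow}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))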


I use InvokeAI to run flux.dev and flux.schnell.


Draw Things on Mac



