FLUX.1-schnell GGUF quantized files
The license of the quantized files follows the license of the original model:
- FLUX.1-schnell: apache-2.0
These files are converted using https://github.com/leejet/stable-diffusion.cpp
You can run FLUX with stable-diffusion.cpp on a GPU with as little as 6 GB, or even 4 GB, of VRAM; see https://github.com/leejet/stable-diffusion.cpp/blob/master/docs/flux.md
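As a rough sketch of the invocation described in the linked flux.md, the `sd` CLI built from stable-diffusion.cpp can load a quantized diffusion model directly. The file names below are placeholders; the VAE, clip_l, and t5xxl weights are not part of this repository and must be obtained separately (see the linked docs for details):

```shell
# Hypothetical example: generate an image with a quantized FLUX.1-schnell model.
# Paths and file names are placeholders; adjust them to where your weights live.
./sd --diffusion-model flux1-schnell-q4_0.gguf \
     --vae ae.safetensors \
     --clip_l clip_l.safetensors \
     --t5xxl t5xxl_fp16.safetensors \
     -p "a lovely cat holding a sign that says 'flux.cpp'" \
     --cfg-scale 1.0 \
     --sampling-method euler \
     --steps 4 \
     -v
```

FLUX.1-schnell is distilled for few-step sampling, which is why a low step count such as 4 and a cfg-scale of 1.0 are typical; lower-bit quantizations trade image quality for reduced VRAM use.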
Available quantization levels: 2-bit, 4-bit, 8-bit
Base model: black-forest-labs/FLUX.1-schnell