
SD2.1-Nitro-GGUF

This repository contains GGUF conversions of the amd/SD2.1-Nitro model (roughly 1B parameters).

Model Information

These conversions were made with sdcpp and are provided in the following quantizations:

  • sd2.1_nitro_fp16.gguf: FP16 precision
  • sd2.1_nitro_q8_0.gguf: Q8_0 quantization
  • sd2.1_nitro_q4_K.gguf: Q4_K quantization
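
As a rough guide to the on-disk footprint of the formats above, file size can be estimated from bits-per-weight for a ~1B-parameter model. The bits-per-weight figures below are approximate GGUF block-format averages (Q8_0 stores one fp16 scale per 32-weight block, about 8.5 bpw; Q4_K averages about 4.5 bpw with its super-block scales), not measurements of these particular files:

```python
# Rough size estimates for a ~1B-parameter model in each quantization.
# Bits-per-weight values are approximate GGUF averages, not exact.
PARAMS = 1_000_000_000

def approx_size_gb(bits_per_weight: float, params: int = PARAMS) -> float:
    """Approximate file size in GiB, ignoring GGUF metadata overhead."""
    return params * bits_per_weight / 8 / 1024**3

for name, bpw in [("fp16", 16.0), ("q8_0", 8.5), ("q4_K", 4.5)]:
    print(f"{name}: ~{approx_size_gb(bpw):.2f} GiB")
```

Actual file sizes will differ somewhat, since not all tensors are quantized to the same type and GGUF stores metadata alongside the weights.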

Usage

These GGUF files can be loaded by inference engines that support the GGUF format for Stable Diffusion models, such as stable-diffusion.cpp.
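
A minimal command-line sketch, assuming a built stable-diffusion.cpp `sd` binary and a downloaded Q8_0 file; the prompt, output path, and resolution are illustrative placeholders, and sampler settings should follow the base model's recommendations:

```shell
# Example invocation with the Q8_0 quantization (paths are examples)
./sd -m sd2.1_nitro_q8_0.gguf \
     -p "a photo of an astronaut riding a horse on mars" \
     -o output.png \
     -W 512 -H 512
```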
