EXL2 quants of Qwen2.5-1.5B-Instruct

Warning! This quantization seems to be broken and the output is pure garbage.

Available quantizations:

| Quant   | Size    |
|---------|---------|
| 4.5 bpw | 1.39 GB |
| 6.0 bpw | 1.64 GB |
| 8.0 bpw | 2.01 GB |
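As a rough sanity check on the table above, an EXL2 file's size tracks parameter count times bits per weight. The sketch below assumes ~1.54B parameters for Qwen2.5-1.5B (an assumption, not from this card); the estimate is a lower bound, since embeddings and metadata are stored at higher precision than the quantized weights.

```python
# Rough lower-bound size estimate for an EXL2 quant:
#   bytes ≈ params * bpw / 8
# PARAMS is an assumed value (~1.54B for Qwen2.5-1.5B); actual files
# on disk are larger due to embeddings, head layers, and metadata.
PARAMS = 1.54e9

def est_gb(bpw: float) -> float:
    """Estimated weight size in decimal GB for a given bits-per-weight."""
    return PARAMS * bpw / 8 / 1e9

for bpw in (4.5, 6.0, 8.0):
    print(f"{bpw} bpw: ~{est_gb(bpw):.2f} GB (lower bound)")
```

For example, 8.0 bpw gives ~1.54 GB of raw quantized weights, versus 2.01 GB on disk, with the gap accounted for by the higher-precision tensors.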

Model tree for adriabama06/Qwen2.5-1.5B-Instruct-exl2

Base model: Qwen/Qwen2.5-1.5B (finetuned as Qwen2.5-1.5B-Instruct, quantized here)