Note: I made a mistake with the initial upload and some tensors were not compressed, so I temporarily made the model private while I regenerated and uploaded the fixed version. If you downloaded the earlier version of the weights, please redownload them; the fixed version is ~200 MB smaller than before.

Update: Found and fixed another minor error, which saves a further ~0.16 MB of disk space.

For more information (including how to compress models yourself), check out https://huggingface.co/DFloat11 and https://github.com/LeanModels/DFloat11

Feel free to request other models for compression as well (whether for the diffusers library, ComfyUI, or anything else), although models with architectures that are unfamiliar to me might be more difficult.

How to Use

transformers

  1. Install the DFloat11 pip package (this installs the CUDA kernel automatically; it requires a CUDA-compatible GPU and an existing PyTorch installation). If you are unsure which CUDA extra you need, see the environment check sketch after these steps:

    pip install dfloat11[cuda12]
    # or if you have CUDA version 11:
    # pip install dfloat11[cuda11]
    pip install transformers==4.46.3
    
  2. To use the DFloat11 model, run the following example code in Python (a small helper for switching between the resolution presets follows the example):

    from transformers import AutoModel, AutoTokenizer
    from dfloat11 import DFloat11Model
    import torch
    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = '0'
    model_name = 'deepseek-ai/DeepSeek-OCR'
    
    tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
    model = AutoModel.from_pretrained(model_name, _attn_implementation='flash_attention_2', trust_remote_code=True, use_safetensors=True)
    model = model.eval().to(torch.bfloat16) # Casting the model to BF16 before calling `.eval()` appears to cause slightly different bounding box results
    DFloat11Model.from_pretrained("mingyi456/DeepSeek-OCR-DF11", device="cpu", bfloat16_model=model) # Load the DF11-compressed weights into `model`
    model = model.cuda()
    
    # prompt = "<image>\nFree OCR. "
    prompt = "<image>\n<|grounding|>Convert the document to markdown. "
    image_file = 'your_image.jpg'
    output_path = 'your/output/dir'
    
    # infer(self, tokenizer, prompt='', image_file='', output_path = ' ', base_size = 1024, image_size = 640, crop_mode = True, test_compress = False, save_results = False):
    
    # Tiny: base_size = 512, image_size = 512, crop_mode = False
    # Small: base_size = 640, image_size = 640, crop_mode = False
    # Base: base_size = 1024, image_size = 1024, crop_mode = False
    # Large: base_size = 1280, image_size = 1280, crop_mode = False
    
    # Gundam: base_size = 1024, image_size = 640, crop_mode = True
    
    res = model.infer(tokenizer, prompt=prompt, image_file=image_file, output_path=output_path, base_size=1024, image_size=640, crop_mode=True, save_results=True, test_compress=True)
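
The presets listed in the comments above differ only in base_size, image_size and crop_mode, so it can be convenient to keep them in a small dict and select one by name. The following is a minimal sketch built around the same infer call; the preset values are copied from the comments above and the mode names are informal:

    # Resolution presets from the comments above, keyed by informal mode names
    presets = {
        "tiny":   dict(base_size=512,  image_size=512,  crop_mode=False),
        "small":  dict(base_size=640,  image_size=640,  crop_mode=False),
        "base":   dict(base_size=1024, image_size=1024, crop_mode=False),
        "large":  dict(base_size=1280, image_size=1280, crop_mode=False),
        "gundam": dict(base_size=1024, image_size=640,  crop_mode=True),
    }

    mode = "gundam"
    res = model.infer(tokenizer, prompt=prompt, image_file=image_file, output_path=output_path, save_results=True, **presets[mode])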
    
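A note on step 1: if you are unsure whether to install the cuda11 or cuda12 extra, you can check which CUDA version your local PyTorch build was compiled against. This is just a convenience check (it assumes PyTorch is already installed), not part of the DFloat11 package:

    import torch

    # Confirm that a CUDA-capable GPU is visible to PyTorch
    print("CUDA available:", torch.cuda.is_available())

    # The CUDA version PyTorch was built with, e.g. '12.1' or '11.8' (None for CPU-only builds);
    # pick dfloat11[cuda12] for 12.x builds and dfloat11[cuda11] for 11.x builds
    print("CUDA build version:", torch.version.cuda)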

Compression Details

This is the pattern_dict for compression:

pattern_dict={
    r"lm_head": [],
    r"model\.embed_tokens": [],
    r"model\.layers\.0": [
        "self_attn.q_proj",
        "self_attn.k_proj",
        "self_attn.v_proj",
        "self_attn.o_proj",
        "mlp.gate_proj",
        "mlp.up_proj",
        "mlp.down_proj"
    ],
    
    r"model\.layers\.[1-9]\d*": [
        "self_attn.q_proj",
        "self_attn.k_proj",
        "self_attn.v_proj",
        "self_attn.o_proj",

        "mlp.experts.0.gate_proj",
        "mlp.experts.0.up_proj",
        "mlp.experts.0.down_proj",
        "mlp.experts.1.gate_proj",
        "mlp.experts.1.up_proj",
        "mlp.experts.1.down_proj",
        "mlp.experts.2.gate_proj",
        "mlp.experts.2.up_proj",
        "mlp.experts.2.down_proj",
        "mlp.experts.3.gate_proj",
        "mlp.experts.3.up_proj",
        "mlp.experts.3.down_proj",
        "mlp.experts.4.gate_proj",
        "mlp.experts.4.up_proj",
        "mlp.experts.4.down_proj",
        "mlp.experts.5.gate_proj",
        "mlp.experts.5.up_proj",
        "mlp.experts.5.down_proj",
        "mlp.experts.6.gate_proj",
        "mlp.experts.6.up_proj",
        "mlp.experts.6.down_proj",
        "mlp.experts.7.gate_proj",
        "mlp.experts.7.up_proj",
        "mlp.experts.7.down_proj",
        "mlp.experts.8.gate_proj",
        "mlp.experts.8.up_proj",
        "mlp.experts.8.down_proj",
        "mlp.experts.9.gate_proj",
        "mlp.experts.9.up_proj",
        "mlp.experts.9.down_proj",
        "mlp.experts.10.gate_proj",
        "mlp.experts.10.up_proj",
        "mlp.experts.10.down_proj",
        "mlp.experts.11.gate_proj",
        "mlp.experts.11.up_proj",
        "mlp.experts.11.down_proj",
        "mlp.experts.12.gate_proj",
        "mlp.experts.12.up_proj",
        "mlp.experts.12.down_proj",
        "mlp.experts.13.gate_proj",
        "mlp.experts.13.up_proj",
        "mlp.experts.13.down_proj",
        "mlp.experts.14.gate_proj",
        "mlp.experts.14.up_proj",
        "mlp.experts.14.down_proj",
        "mlp.experts.15.gate_proj",
        "mlp.experts.15.up_proj",
        "mlp.experts.15.down_proj",
        "mlp.experts.16.gate_proj",
        "mlp.experts.16.up_proj",
        "mlp.experts.16.down_proj",
        "mlp.experts.17.gate_proj",
        "mlp.experts.17.up_proj",
        "mlp.experts.17.down_proj",
        "mlp.experts.18.gate_proj",
        "mlp.experts.18.up_proj",
        "mlp.experts.18.down_proj",
        "mlp.experts.19.gate_proj",
        "mlp.experts.19.up_proj",
        "mlp.experts.19.down_proj",
        "mlp.experts.20.gate_proj",
        "mlp.experts.20.up_proj",
        "mlp.experts.20.down_proj",
        "mlp.experts.21.gate_proj",
        "mlp.experts.21.up_proj",
        "mlp.experts.21.down_proj",
        "mlp.experts.22.gate_proj",
        "mlp.experts.22.up_proj",
        "mlp.experts.22.down_proj",
        "mlp.experts.23.gate_proj",
        "mlp.experts.23.up_proj",
        "mlp.experts.23.down_proj",
        "mlp.experts.24.gate_proj",
        "mlp.experts.24.up_proj",
        "mlp.experts.24.down_proj",
        "mlp.experts.25.gate_proj",
        "mlp.experts.25.up_proj",
        "mlp.experts.25.down_proj",
        "mlp.experts.26.gate_proj",
        "mlp.experts.26.up_proj",
        "mlp.experts.26.down_proj",
        "mlp.experts.27.gate_proj",
        "mlp.experts.27.up_proj",
        "mlp.experts.27.down_proj",
        "mlp.experts.28.gate_proj",
        "mlp.experts.28.up_proj",
        "mlp.experts.28.down_proj",
        "mlp.experts.29.gate_proj",
        "mlp.experts.29.up_proj",
        "mlp.experts.29.down_proj",
        "mlp.experts.30.gate_proj",
        "mlp.experts.30.up_proj",
        "mlp.experts.30.down_proj",
        "mlp.experts.31.gate_proj",
        "mlp.experts.31.up_proj",
        "mlp.experts.31.down_proj",
        "mlp.experts.32.gate_proj",
        "mlp.experts.32.up_proj",
        "mlp.experts.32.down_proj",
        "mlp.experts.33.gate_proj",
        "mlp.experts.33.up_proj",
        "mlp.experts.33.down_proj",
        "mlp.experts.34.gate_proj",
        "mlp.experts.34.up_proj",
        "mlp.experts.34.down_proj",
        "mlp.experts.35.gate_proj",
        "mlp.experts.35.up_proj",
        "mlp.experts.35.down_proj",
        "mlp.experts.36.gate_proj",
        "mlp.experts.36.up_proj",
        "mlp.experts.36.down_proj",
        "mlp.experts.37.gate_proj",
        "mlp.experts.37.up_proj",
        "mlp.experts.37.down_proj",
        "mlp.experts.38.gate_proj",
        "mlp.experts.38.up_proj",
        "mlp.experts.38.down_proj",
        "mlp.experts.39.gate_proj",
        "mlp.experts.39.up_proj",
        "mlp.experts.39.down_proj",
        "mlp.experts.40.gate_proj",
        "mlp.experts.40.up_proj",
        "mlp.experts.40.down_proj",
        "mlp.experts.41.gate_proj",
        "mlp.experts.41.up_proj",
        "mlp.experts.41.down_proj",
        "mlp.experts.42.gate_proj",
        "mlp.experts.42.up_proj",
        "mlp.experts.42.down_proj",
        "mlp.experts.43.gate_proj",
        "mlp.experts.43.up_proj",
        "mlp.experts.43.down_proj",
        "mlp.experts.44.gate_proj",
        "mlp.experts.44.up_proj",
        "mlp.experts.44.down_proj",
        "mlp.experts.45.gate_proj",
        "mlp.experts.45.up_proj",
        "mlp.experts.45.down_proj",
        "mlp.experts.46.gate_proj",
        "mlp.experts.46.up_proj",
        "mlp.experts.46.down_proj",
        "mlp.experts.47.gate_proj",
        "mlp.experts.47.up_proj",
        "mlp.experts.47.down_proj",
        "mlp.experts.48.gate_proj",
        "mlp.experts.48.up_proj",
        "mlp.experts.48.down_proj",
        "mlp.experts.49.gate_proj",
        "mlp.experts.49.up_proj",
        "mlp.experts.49.down_proj",
        "mlp.experts.50.gate_proj",
        "mlp.experts.50.up_proj",
        "mlp.experts.50.down_proj",
        "mlp.experts.51.gate_proj",
        "mlp.experts.51.up_proj",
        "mlp.experts.51.down_proj",
        "mlp.experts.52.gate_proj",
        "mlp.experts.52.up_proj",
        "mlp.experts.52.down_proj",
        "mlp.experts.53.gate_proj",
        "mlp.experts.53.up_proj",
        "mlp.experts.53.down_proj",
        "mlp.experts.54.gate_proj",
        "mlp.experts.54.up_proj",
        "mlp.experts.54.down_proj",
        "mlp.experts.55.gate_proj",
        "mlp.experts.55.up_proj",
        "mlp.experts.55.down_proj",
        "mlp.experts.56.gate_proj",
        "mlp.experts.56.up_proj",
        "mlp.experts.56.down_proj",
        "mlp.experts.57.gate_proj",
        "mlp.experts.57.up_proj",
        "mlp.experts.57.down_proj",
        "mlp.experts.58.gate_proj",
        "mlp.experts.58.up_proj",
        "mlp.experts.58.down_proj",
        "mlp.experts.59.gate_proj",
        "mlp.experts.59.up_proj",
        "mlp.experts.59.down_proj",
        "mlp.experts.60.gate_proj",
        "mlp.experts.60.up_proj",
        "mlp.experts.60.down_proj",
        "mlp.experts.61.gate_proj",
        "mlp.experts.61.up_proj",
        "mlp.experts.61.down_proj",
        "mlp.experts.62.gate_proj",
        "mlp.experts.62.up_proj",
        "mlp.experts.62.down_proj",
        "mlp.experts.63.gate_proj",
        "mlp.experts.63.up_proj",
        "mlp.experts.63.down_proj",

        "mlp.shared_experts.gate_proj",
        "mlp.shared_experts.up_proj",
        "mlp.shared_experts.down_proj",
    ],
    
    r"model\.sam_model\.blocks\.\d+": (
        "attn.qkv",
        "attn.proj",
        "mlp.lin1",
        "mlp.lin2",
    ),
    
    r"model\.vision_model\.embeddings\.position_embedding": [],
    
    r"model\.vision_model\.transformer\.layers\.\d+": (
        "self_attn.qkv_proj",
        "self_attn.out_proj",
        "mlp.fc1",
        "mlp.fc2",
    ),
    
    r"model\.projector\.layers": []
}
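
Each key in the pattern_dict above is a regular expression matched against module names, and each value lists the submodules under a matching module whose weights get compressed (an empty list appears to mean the matched module itself, as with lm_head and the embeddings). Layer 0 is listed separately because it uses a dense MLP, while layers 1 and above route through the MoE experts. As a rough, standalone illustration of which modules each pattern picks out (using re.fullmatch purely for demonstration; the library's own matching logic may differ), consider the following sketch with made-up module names:

import re

# Hypothetical module names, for illustration only
names = [
    "model.layers.0",                            # dense-MLP decoder layer
    "model.layers.5",                            # MoE decoder layer
    "model.sam_model.blocks.3",                  # vision (SAM) block
    "model.vision_model.transformer.layers.10",  # vision transformer layer
]

# The regex keys from the pattern_dict above
patterns = [
    r"model\.layers\.0",
    r"model\.layers\.[1-9]\d*",
    r"model\.sam_model\.blocks\.\d+",
    r"model\.vision_model\.transformer\.layers\.\d+",
]

for name in names:
    matched = [p for p in patterns if re.fullmatch(p, name)]
    print(f"{name} -> {matched}")

Note that model.layers.0 matches only the first pattern, so its dense gate_proj/up_proj/down_proj weights are grouped directly, while every later decoder layer falls under the per-expert grouping.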