Instructions to use DeepBeepMeep/Wan2.2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use DeepBeepMeep/Wan2.2 with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "DeepBeepMeep/Wan2.2",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
- Diffusion Single File
How to use DeepBeepMeep/Wan2.2 with Diffusion Single File:
```python
# No code snippets available yet for this library.
# To use this model, check the repository files and the library's documentation.
# Want to help? PRs adding snippets are welcome at:
# https://github.com/huggingface/huggingface.js
```
- Notebooks
- Google Colab
- Kaggle
Support for Wan2.2 I2V NVFP4 models.
#2 · by atyadhbhut · opened
I was able to convert your Wan2.2 I2V Enhanced Lightning models to NVFP4 using a CPU-based script provided by Gemini, but when I use them in Wan2GP, the application throws a size-mismatch error and won't load the models, since NVFP4 is not yet supported. Besides my own conversion, I have found Redditors using this model: https://huggingface.co/GitMylo/Wan_2.2_nvfp4/tree/main , which is also an NVFP4 model. Please consider adding Wan2.2 I2V NVFP4 model support.