
prithivMLmods posted an update 2 days ago
FireRed-Image-Edit-1.0 (Rapid) Fast Experimental Demo is Out! 🚀🤗

Demo: prithivMLmods/FireRed-Image-Edit-1.0-Fast

-> Paired the EditPlusPipeline with the Diffusers-compatible transformer weights of Rapid AIO from Qwen-Image-Edit. (experimental)
-> This fusion delivers more accurate instruction following, higher image quality, and consistent visual coherence at 4-step fast inference.
-> Better maintains text styles with high fidelity, along with high-quality old-photo restoration, enhancement, and best-in-class virtual try-on.
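Pairing an edit pipeline with swapped-in transformer weights can be sketched with diffusers. Both repo ids below are hypothetical placeholders, not the demo's actual weights; `QwenImageEditPlusPipeline` is assumed to be the EditPlusPipeline in question.

```python
# Sketch: swapping a pipeline's transformer for fused few-step weights.
# Both repo ids are hypothetical placeholders, not the demo's weights.
BASE_PIPELINE_REPO = "Qwen/Qwen-Image-Edit-2509"              # assumed base edit pipeline
FUSED_TRANSFORMER_REPO = "example-org/rapid-aio-transformer"  # hypothetical fused weights
NUM_INFERENCE_STEPS = 4  # few-step inference enabled by the fused transformer

def build_edit_pipeline(device: str = "cuda"):
    # Heavy imports kept inside the function so the module stays light.
    import torch
    from diffusers import QwenImageEditPlusPipeline, QwenImageTransformer2DModel

    # Load the swapped-in transformer first, then hand it to the pipeline.
    transformer = QwenImageTransformer2DModel.from_pretrained(
        FUSED_TRANSFORMER_REPO, torch_dtype=torch.bfloat16
    )
    pipe = QwenImageEditPlusPipeline.from_pretrained(
        BASE_PIPELINE_REPO, transformer=transformer, torch_dtype=torch.bfloat16
    )
    return pipe.to(device)
```

At generation time the only change from a normal run is passing `num_inference_steps=NUM_INFERENCE_STEPS` to the pipeline call.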

prithivMLmods posted an update 7 days ago

prithivMLmods posted an update 11 days ago
prithivMLmods posted an update 18 days ago
Introducing FLUX.2-Klein-LoRA-Studio, a demo for image editing using specialized LoRA adapters built for the FLUX.2-Klein-Distilled model. It features an edit-style gallery for multi-style image editing, including de-light, face swap, mannequin, and more. Try the demo below.
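A LoRA edit-style gallery like this typically keeps one base pipeline and swaps adapters per request. Below is a minimal sketch of that pattern using diffusers' `load_lora_weights`/`set_adapters` API; the repo ids and style names are illustrative placeholders, not the Studio's actual adapters.

```python
# Sketch: one base pipeline, many switchable edit-style LoRAs.
# Repo ids and style names are placeholders for illustration.
STYLE_ADAPTERS = {
    "de-light": "example-org/klein-delight-lora",
    "face-swap": "example-org/klein-faceswap-lora",
    "mannequin": "example-org/klein-mannequin-lora",
}

def apply_style(pipe, style: str):
    """Load the chosen LoRA once, then make it the only active adapter."""
    if style not in STYLE_ADAPTERS:
        raise KeyError(f"unknown edit style: {style!r}")
    loaded = getattr(pipe, "_loaded_styles", set())
    if style not in loaded:
        # diffusers registers the adapter under `adapter_name`,
        # so a second call with the same style skips the download.
        pipe.load_lora_weights(STYLE_ADAPTERS[style], adapter_name=style)
        pipe._loaded_styles = loaded | {style}
    pipe.set_adapters([style])
    return pipe
```

Because adapters are cached by name, switching styles in the gallery costs one load per adapter per session rather than one per request.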

🤗 Demo: prithivMLmods/FLUX.2-Klein-LoRA-Studio
🤗 Collection: https://huggingface.co/collections/prithivMLmods/image-generation-apps-collection
🤗 GitHub: https://github.com/PRITHIVSAKTHIUR/FLUX.2-Klein-LoRA-Studio

To learn more, visit the app page or the respective model pages.
prithivMLmods posted an update 22 days ago
GLM OCR is a multimodal OCR model for complex document understanding, built on the GLM-V encoder–decoder architecture. It delivers high accuracy and strong generalization with a blazing-fast inference pipeline. The demo is live. Try it now. 🤗🚀
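For readers who want to run an OCR model like this outside the demo, most transformers VLMs share the same chat-template call. A sketch under assumptions: the model id is left as a parameter, the instruction wording is illustrative, and the generic image-text-to-text processor API is assumed to apply.

```python
def build_ocr_messages(image_path: str,
                       instruction: str = "Extract all text from the document as Markdown."):
    """Standard transformers chat-template message: one image + one instruction."""
    return [{
        "role": "user",
        "content": [
            {"type": "image", "url": image_path},
            {"type": "text", "text": instruction},
        ],
    }]

def run_ocr(image_path: str, model_id: str):
    # `model_id` would be the GLM OCR checkpoint; heavy deps imported lazily.
    import torch
    from transformers import AutoModelForImageTextToText, AutoProcessor

    processor = AutoProcessor.from_pretrained(model_id)
    model = AutoModelForImageTextToText.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    inputs = processor.apply_chat_template(
        build_ocr_messages(image_path),
        add_generation_prompt=True, tokenize=True,
        return_dict=True, return_tensors="pt",
    ).to(model.device)
    generated = model.generate(**inputs, max_new_tokens=2048)
    # Strip the prompt tokens; keep only the model's answer.
    answer = generated[0][inputs["input_ids"].shape[-1]:]
    return processor.decode(answer, skip_special_tokens=True)
```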

✨ Demo: prithivMLmods/GLM-OCR-Demo
✨ Multimodal Implementations: https://huggingface.co/collections/prithivMLmods/multimodal-implementations
✨ GitHub: https://github.com/PRITHIVSAKTHIUR/GLM-OCR-Demo
prithivMLmods posted an update 23 days ago
Introducing the Qwen-Image-Edit-3D-Lighting-Control app, featuring 8× horizontal and 3× elevational lighting positions for precise 3D lighting control. It enables studio-level lighting using Qwen-Image-Edit fast inference, paired with Multi-Angle-Lighting adapters. 🔦
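The 8 × 3 position grid maps naturally onto an edit-instruction template. The direction names and prompt wording below are illustrative guesses at how such positions can be phrased, not the app's exact prompts.

```python
# Sketch: mapping the 8 horizontal x 3 elevational lighting grid to
# edit instructions. Names and wording are illustrative, not the app's.
HORIZONTAL = ["front", "front-right", "right", "back-right",
              "back", "back-left", "left", "front-left"]  # 8 positions, 45 degrees apart
ELEVATION = ["low", "eye-level", "overhead"]              # 3 elevations

def lighting_prompt(h_idx: int, e_idx: int) -> str:
    """Build the relighting instruction for one grid cell."""
    if not (0 <= h_idx < len(HORIZONTAL) and 0 <= e_idx < len(ELEVATION)):
        raise ValueError("h_idx must be 0-7 and e_idx must be 0-2")
    return (f"Relight the subject with a key light from the "
            f"{HORIZONTAL[h_idx]} at {ELEVATION[e_idx]} height.")
```

The resulting string is then passed as the edit prompt to the Qwen-Image-Edit pipeline with the lighting adapter active.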

🔥 Space: prithivMLmods/Qwen-Image-Edit-3D-Lighting-Control
✅ Collection: https://huggingface.co/collections/prithivMLmods/image-generation-apps-collection
📂 GitHub: https://github.com/PRITHIVSAKTHIUR/Qwen-Image-Edit-3D-Lighting-Control
prithivMLmods posted an update 29 days ago
Daggr UI version of the Qwen3-TTS demo 🔥, with custom voice, voice design, Qwen3-ASR, and voice cloning nodes. No remote Spaces are used for API inference; all functions run in-app. Powered by t4-m and built with daggr@0.5.2 and gradio@6.

👉 Demo: prithivMLmods/Qwen3-TTS-Daggr-UI
⭐ GitHub: https://github.com/PRITHIVSAKTHIUR/Qwen3-TTS-Daggr-UI
prithivMLmods posted an update about 1 month ago
Qwen-Image-Edit-Object-Manipulator Space is now featured as a Hugging Face Space of the Week. It enables object manipulation such as extracting objects, adding designs, and removing objects or designs from the red-highlighted area using specialized adapters.

🔥 Do enjoy the demo! ~ prithivMLmods/Qwen-Image-Edit-Object-Manipulator
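Marking the edit region programmatically is straightforward with Pillow. The red-rectangle convention is what the Space's description implies; the exact color and stroke width the adapters expect are assumptions here.

```python
from PIL import Image, ImageDraw

def highlight_region(img: Image.Image, box: tuple,
                     width: int = 6) -> Image.Image:
    """Return a copy of `img` with a red rectangle marking the edit region.

    `box` is (left, top, right, bottom) in pixels. The pure-red outline
    and width=6 are assumed defaults, not the adapters' documented spec.
    """
    out = img.copy()  # never mutate the caller's image
    ImageDraw.Draw(out).rectangle(box, outline=(255, 0, 0), width=width)
    return out
```

The marked copy is then sent to the manipulator demo (or pipeline) together with the edit instruction.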

Collections:
🧨 Adapters-1: https://huggingface.co/collections/prithivMLmods/qwen-image-edit-exps
🧨 Adapters-2: https://huggingface.co/collections/prithivMLmods/qie-jan-23-26
🧨 Adapters-3: https://huggingface.co/collections/prithivMLmods/qwen-image-edit-object-manipulator

⭐ GitHub: https://github.com/PRITHIVSAKTHIUR/Qwen-Image-Edit-Object-Manipulator

To learn more, visit the app page or the respective model pages.
prithivMLmods posted an update about 1 month ago
Introducing QIE-2511-Zoom-Master for highlight-guided area zoom-in, enabling lossless zooming within a drawn square area, and QIE-2511-Object-Remover-v2 for precise object or highlight-guided area cleanup. These experimental adapters are trained on top of QIE-2511. Find the adapters below.

๐Ÿ•น๏ธQIE-2511-Zoom-Master : prithivMLmods/QIE-2511-Zoom-Master
๐Ÿ•น๏ธQIE-2511-Object-Remover-v2: prithivMLmods/QIE-2511-Object-Remover-v2

๐Ÿค—Demo: prithivMLmods/Qwen-Image-Edit-Object-Manipulator

๐Ÿ“‚Collection: https://huggingface.co/collections/prithivMLmods/qwen-image-edit-exps

To learn more, visit the app page or the respective model pages.
prithivMLmods posted an update about 2 months ago
LTX-2 Camera-Control LoRA demo with dolly-in/out and dolly-left/right is now available on Hugging Face, paired with ltx-2-19b-distilled-lora for fast inference. It also includes dynamic GPU duration adjustments for long video generations. Click the related Space links below.

🤗 Try it now on: prithivMLmods/LTX-2-LoRAs-Camera-Control-Dolly
⭐ GitHub: https://github.com/PRITHIVSAKTHIUR/LTX-2-LoRAs-Camera-Control-Dolly
🕹️ Collection: https://huggingface.co/collections/prithivMLmods/image-generation-apps-collection

To learn more, visit the app page or the respective model pages.
prithivMLmods posted an update about 2 months ago
Dropping Image Edit (Object Manipulator): Add or remove specified objects/designs, with flexible support for both single-image and multi-image modes.

🤗 Demo: prithivMLmods/Qwen-Image-Edit-Object-Manipulator

Qwen-Image-Edit-2511-Object-Remover is an adapter (LoRA) developed for Qwen's Qwen-Image-Edit-2511 image-to-image model. It is specifically designed for precise object removal from images.

⭐ Model: prithivMLmods/Qwen-Image-Edit-2511-Object-Remover

Qwen-Image-Edit-2511-Object-Adder is an adapter (LoRA) developed for Qwen's Qwen-Image-Edit-2511 image-to-image model. It is specifically designed for precise object addition to images.

⭐ Model: prithivMLmods/Qwen-Image-Edit-2511-Object-Adder

🕹️ Collection: https://huggingface.co/collections/prithivMLmods/qwen-image-edit-object-manipulator
🕹️ GitHub: https://github.com/PRITHIVSAKTHIUR/Qwen-Image-Edit-Object-Manipulator

To learn more, visit the app page or the respective model pages.
prithivMLmods posted an update 2 months ago
Update: the TRELLIS.2 (Text-to-3D, Image-to-3D) Gradio demo with embedded Rerun and an improved 3D model previewer is now available on Hugging Face. Generate assets and view them in the 3D viewer, powered and streamlined by Microsoft's TRELLIS.2 and Tongyi-MAI's Z-Image-Turbo models.

🤗 TRELLIS.2 (Demo): prithivMLmods/TRELLIS.2-Text-to-3D
🕹️ GitHub: https://github.com/PRITHIVSAKTHIUR/TRELLIS.2-Text-to-3D-RERUN
🕹️ Collection: https://huggingface.co/collections/prithivMLmods/multimodal-implementations

To learn more, visit the app page or the respective model pages.
prithivMLmods posted an update 2 months ago
Introducing the Qwen-Image-Edit-2511-LoRAs-Fast demo, featuring image property comparison and contrast, built with Gradio combined with the Rerun SDK. It supports single- and multi-image edits with existing LoRAs that are lazily loaded. (Note: this is still an experimental Space for Qwen-Image-Edit-2511.)

โญ Space Demo: prithivMLmods/Qwen-Image-Edit-2511-LoRAs-Fast
โญ GitHub: https://github.com/PRITHIVSAKTHIUR/Qwen-Image-Edit-2511-LoRAs-Fast-Multi-Image-Rerun
โญ Collection: https://huggingface.co/collections/prithivMLmods/image-generation-apps-collection

To know more about it, visit the app page or the respective model page!
prithivMLmods posted an update 2 months ago
Introducing demos for new SOTA models from AI2: SAGE-MM (Smart Any-Horizon Agents for Long-Video Reasoning) and Molmo-2, an open vision-language model that supports multi-image (QA and pointing) and video (QA, pointing, and tracking). The respective demo-related collections are listed below. 🎃🔥

✨ SAGE-MM [Video-Reasoning]: prithivMLmods/SAGE-MM-Video-Reasoning
✨ Molmo2 [Demo]: prithivMLmods/Molmo2-HF-Demo

🎃 GitHub [SAGE-MM]: https://github.com/PRITHIVSAKTHIUR/SAGE-MM-Video-Reasoning
🎃 GitHub [Molmo2]: https://github.com/PRITHIVSAKTHIUR/Molmo2-HF-Demo
🎃 Multimodal Implementations: https://huggingface.co/collections/prithivMLmods/multimodal-implementations

To learn more, visit the app page or the respective model pages.
prithivMLmods posted an update 2 months ago
Introducing TRELLIS.2 Text-to-3D. The demo for the TRELLIS.2-4B (Image-to-3D) model is streamlined with the Z-Image-Turbo image generation model to enable Text-to-3D functionality. There is no need for input assets, a small leap forward for ideation. Optionally, it also includes default support for Image-to-3D inference using direct image assets. Find the demo and related collections below. 🤗🔥
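The handoff described above is essentially a two-stage function. This sketch shows only the wiring, with the two model calls injected as callables, since the actual TRELLIS.2 and Z-Image-Turbo inference APIs are not shown here.

```python
def text_to_3d(prompt=None, image=None, *, generate_image, image_to_3d):
    """Text -> image -> 3D chain, with an optional direct image-asset path.

    `generate_image` stands in for Z-Image-Turbo inference and
    `image_to_3d` for TRELLIS.2-4B; both are placeholder callables.
    """
    if image is None:
        if prompt is None:
            raise ValueError("provide a text prompt or an input image asset")
        image = generate_image(prompt)  # stage 1: fast text-to-image
    return image_to_3d(image)           # stage 2: image-to-3D generation
```

Keeping the stages as injected callables mirrors the demo's behavior: pass an image to skip stage 1, or just a prompt to run both.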

✨ TRELLIS.2-Text-to-3D [Demo]: prithivMLmods/TRELLIS.2-Text-to-3D
✨ Multimodal Collection: https://huggingface.co/collections/prithivMLmods/multimodal-implementations
✨ GitHub: https://github.com/PRITHIVSAKTHIUR/TRELLIS.2-Text-to-3D

To learn more, visit the app page or the respective model pages.
prithivMLmods posted an update 2 months ago
The Molmo2 demo on Hugging Face is live now, including Single/Multi-Image VQA, Visual Pointing/Grounding, Video VQA, and Video Point Tracking. Find the demo and related collections below. 🔥🤗

โ— Molmo2 HF Demo๐Ÿ–ฅ๏ธ: prithivMLmods/Molmo2-HF-Demo
โ— Model Collection: https://huggingface.co/collections/allenai/molmo2
โ— Related Multimodal Space Collection: https://huggingface.co/collections/prithivMLmods/multimodal-implementations

To know more about it, visit the app page or the respective model page!
prithivMLmods posted an update 2 months ago
Introducing the Z Image Turbo LoRA DLC App, a gallery space for plug-and-play Z-Image-Turbo LoRAs. It features a curated collection of impressive LoRAs for generating high-quality images. By default, it runs on the base model. Simply choose a LoRA, type your prompt, and generate images. You can find the app and more details below. 🤗🧪

โ— Space [Demo]: prithivMLmods/Z-Image-Turbo-LoRA-DLC
โ— Collection: https://huggingface.co/collections/prithivMLmods/image-generation-apps-collection
โ— Check the list of Z-Image LoRA's: https://huggingface.co/models?other=base_model:adapter:Tongyi-MAI/Z-Image-Turbo
โ— Github: https://github.com/PRITHIVSAKTHIUR/Z-Image-Turbo-LoRA-DLC

Other related image generation Spaces:

โ— FLUX-LoRA-DLC2: prithivMLmods/FLUX-LoRA-DLC2
โ— FLUX-LoRA-DLC: prithivMLmods/FLUX-LoRA-DLC
โ— Qwen-Image-LoRA-DLC: prithivMLmods/Qwen-Image-LoRA-DLC
โ— Qwen-Image-Edit-2509-LoRAs-Fast: prithivMLmods/Qwen-Image-Edit-2509-LoRAs-Fast
โ— Qwen-Image-Edit-2509-LoRAs-Fast-Fusion: prithivMLmods/Qwen-Image-Edit-2509-LoRAs-Fast-Fusion

& more...

To learn more, visit the app page or the respective model pages.
prithivMLmods posted an update 3 months ago
Introducing the D.Markdown experimental models: Proxima and Epsilon, OCR models built on top of Qwen3-VL and Qwen2.5-VL respectively. Proxima is optimized for Markdown generation and is capable of embedding inline programming code snippets and generating rich nodes such as HTML, XML, JSON, and YAML. Epsilon is optimized for reconstructing complex layouts, including tables, forms, and mathematical content. 🌌✨

โ— proxima-ocr-d.markdown-post3.0.l: prithivMLmods/proxima-ocr-d.markdown-post3.0.l
โ— epsilon-ocr-d.markdown-post3.0.m: prithivMLmods/epsilon-ocr-d.markdown-post3.0.m
โ— proxima-ocr-d.markdown-post3.0.l-gguf: prithivMLmods/proxima-ocr-d.markdown-post3.0.l-GGUF
โ— epsilon-ocr-d.markdown-post3.0.m-gguf: prithivMLmods/epsilon-ocr-d.markdown-post3.0.m-GGUF

โ— Collection: https://huggingface.co/collections/prithivMLmods/dynamic-markdowns
โ— Multimodal Apps: https://huggingface.co/collections/prithivMLmods/multimodal-implementations

👉 These models are stage-progression models and may currently produce artifacts.

To learn more, visit the app page or the respective model pages.
prithivMLmods posted an update 3 months ago
Try the CUA GUI Operator 🖥️ Space, a demo that brings several ultra-compact multimodal Computer Use Agent (CUA) models, including Fara-7B, UI-TARS-1.5-7B, and the Holo models, into a single app for GUI localization tasks.
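GUI localization models return their answers as text, so an app like this has to parse an action string back into pixel coordinates. A minimal sketch, assuming a `click(x=..., y=...)` output format; each model family (Fara, UI-TARS, Holo) actually defines its own action grammar.

```python
import re

# Assumed action format for illustration; real CUA models each use
# their own grammar, so a production app needs one parser per model.
CLICK_RE = re.compile(r"click\(\s*x\s*=\s*(\d+)\s*,\s*y\s*=\s*(\d+)\s*\)")

def parse_click(text: str):
    """Extract (x, y) pixel coordinates from a model's action string,
    or return None when no click action is present."""
    m = CLICK_RE.search(text)
    return (int(m.group(1)), int(m.group(2))) if m else None
```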

โ— CUA-GUI-Operator [Demo]: prithivMLmods/CUA-GUI-Operator
โ— Collection: https://huggingface.co/collections/prithivMLmods/multimodal-implementations

Other related multimodal Spaces:

● Qwen3-VL: prithivMLmods/Qwen3-VL-HF-Demo
● Multimodal-VLM-v1.0: prithivMLmods/Multimodal-VLM-v1.0
● Vision-to-VibeVoice-en: prithivMLmods/Vision-to-VibeVoice-en

I plan to add Chrome sandboxes to streamline it into a browser-based multimodal CUA tool, which will be added to the same Space soon.

To learn more, visit the app page or the respective model pages.
prithivMLmods posted an update 3 months ago
One speech model with seven voices, streamlined with multimodal capabilities for vision tasks. It performs vision (image-text) to audio inference with Qwen2.5-VL + VibeVoice-Realtime-0.5B. Vision to VibeVoice (EN): the demo is live. 🗣️🔥
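The two-model handoff can be sketched as a small wrapper. The voice names below are placeholders (the demo's seven actual voice names are not listed here), and `describe`/`synthesize` stand in for the Qwen2.5-VL and VibeVoice-Realtime-0.5B calls.

```python
# Placeholder voice ids; the demo ships seven voices whose actual
# names are not reproduced here.
VOICES = ("voice-1", "voice-2", "voice-3", "voice-4",
          "voice-5", "voice-6", "voice-7")

def vision_to_speech(image, prompt, *, describe, synthesize, voice=VOICES[0]):
    """Run the VLM on (image, prompt), then read its answer aloud.

    `describe` stands in for Qwen2.5-VL inference and `synthesize`
    for VibeVoice TTS; both are placeholder callables.
    """
    if voice not in VOICES:
        raise ValueError(f"voice must be one of {VOICES}")
    text = describe(image, prompt)        # stage 1: vision -> text
    return synthesize(text, voice=voice)  # stage 2: text -> audio
```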

🤗 Vision-to-VibeVoice-en [Demo]: prithivMLmods/Vision-to-VibeVoice-en
✨ Collection: https://huggingface.co/collections/prithivMLmods/multimodal-implementations
✨ Speech [VibeVoice-Realtime-0.5B]: microsoft/VibeVoice-Realtime-0.5B
✨ Vision [Qwen2.5-VL]: Qwen/Qwen2.5-VL-7B-Instruct

To learn more, visit the app page or the respective model pages.