Q3.6-27B-DS-v4-Flash-DA
Q3.6-27B-DS-v4-Flash-DA (Qwen3.6 DeepSeek Distilled-Abliterated) is a reasoning-focused model built on Qwen/Qwen3.6-27B through the prithivMLmods/Qwen3.6-27B-abliterated-rMAX base. It is optimized for rich, detailed, and context-aware reasoning: distilled reasoning traces from DeepSeek V4 Flash provide the reasoning signal, while refusal direction analysis and ablation-based training reduce internal refusal behaviors without sacrificing reasoning quality or instruction-following performance.
This model is intended strictly for research and learning purposes. Due to reduced internal refusal mechanisms, it may generate sensitive or unrestricted content. Users assume full responsibility for how the model is used. The authors and hosting platform disclaim any liability for generated outputs.
Note: This model is experimental and may generate artifacts.
Key Highlights
- DeepSeek V4 Flash Distillation: Fine-tuned using distilled reasoning traces derived from DeepSeek V4 Flash generations for enhanced reasoning depth and contextual understanding.
- Distilled-Abliterated (DA): Applies refusal direction analysis and ablation-based strategies to reduce internal refusal behaviors while maintaining reasoning quality (a simplified sketch follows this list).
- Qwen3.6 Backbone: Built on top of Qwen/Qwen3.6-27B via prithivMLmods/Qwen3.6-27B-abliterated-rMAX for strong instruction-following and reasoning performance.
- Instruction + Reasoning Fusion: Handles instruction-following and complex multi-step reasoning tasks seamlessly.
- High-Coherence Outputs: Maintains consistency across long generations with improved contextual grounding.
- 27B Scale Performance: Delivers high-capacity reasoning suitable for advanced research and complex tasks.
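The abliteration procedure itself is not published with this card. As a rough illustration of the general technique, the sketch below estimates a "refusal direction" as the difference between mean hidden activations on refusal-inducing and harmless prompts, then projects that direction out of a weight matrix that writes into the residual stream. All names, shapes, and tensors here are hypothetical placeholders, not the actual training code.

```python
import torch

def estimate_refusal_direction(harmful_acts: torch.Tensor,
                               harmless_acts: torch.Tensor) -> torch.Tensor:
    # Difference-of-means estimate over [num_prompts, hidden_dim] activations
    # collected at a chosen layer for refusal-inducing vs. benign prompts.
    direction = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return direction / direction.norm()

def ablate_direction(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    # Remove the component of the layer's output that points along `direction`:
    # W <- (I - d d^T) W for a [hidden_dim, in_features] output projection.
    projector = torch.outer(direction, direction)
    return weight - projector @ weight

# Toy example with random tensors standing in for real activations and weights.
hidden_dim = 64
refusal_dir = estimate_refusal_direction(torch.randn(128, hidden_dim),
                                         torch.randn(128, hidden_dim))
w_out = torch.randn(hidden_dim, hidden_dim)
w_out_ablated = ablate_direction(w_out, refusal_dir)
```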
Datasets Used and Training Details
| Category | Details |
|---|---|
| Base Model | Qwen/Qwen3.6-27B |
| Intermediate Base | prithivMLmods/Qwen3.6-27B-abliterated-rMAX |
| Final Model Size | 27B Parameters |
| Training Type | Distillation + abliteration |
| Objective | Preserve reasoning quality while reducing refusal behaviors and improving instruction-following reliability |
| Reasoning Dataset | Jackrong/DeepSeek-V4-Distill-8000x (4000 random samples used) |
| Alignment / Evaluation Dataset | prithivMLmods/harm_bench |
| Training Pipeline | TRL (Transformer Reinforcement Learning) |
| Training Focus | Structured reasoning, long-chain thinking, robustness across diverse prompts |
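The exact fine-tuning configuration is not released; the following is only a minimal sketch of what the TRL-based pipeline listed above might look like. The dataset column layout and all hyperparameters are assumptions for illustration.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Load the distilled reasoning traces and keep 4000 random samples,
# matching the subset size mentioned in the table above.
dataset = load_dataset("Jackrong/DeepSeek-V4-Distill-8000x", split="train")
dataset = dataset.shuffle(seed=42).select(range(4000))

# Hypothetical hyperparameters; the actual run's settings are not published.
config = SFTConfig(
    output_dir="q3.6-27b-ds-v4-flash-da",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=1e-5,
    num_train_epochs=1,
    bf16=True,
)

trainer = SFTTrainer(
    model="prithivMLmods/Qwen3.6-27B-abliterated-rMAX",  # intermediate base
    args=config,
    train_dataset=dataset,
)
trainer.train()
```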
Quick Start with Transformers
```shell
pip install transformers==5.8.0
# or install the latest version from source
pip install git+https://github.com/huggingface/transformers.git
```
```python
from transformers import Qwen3_5ForConditionalGeneration, AutoProcessor
import torch

# Load the model and processor; device_map="auto" spreads the 27B weights
# across the available GPUs.
model = Qwen3_5ForConditionalGeneration.from_pretrained(
    "prithivMLmods/Q3.6-27B-DS-v4-Flash-DA",
    torch_dtype="auto",
    device_map="auto"
)
processor = AutoProcessor.from_pretrained(
    "prithivMLmods/Q3.6-27B-DS-v4-Flash-DA"
)

# Build a chat-formatted prompt.
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Generate a highly detailed caption of a futuristic city skyline at sunset."
            }
        ],
    }
]
text = processor.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
inputs = processor(
    text=[text],
    padding=True,
    return_tensors="pt"
).to("cuda")

# Generate, then strip the prompt tokens from the returned sequences.
generated_ids = model.generate(
    **inputs,
    max_new_tokens=512
)
generated_ids_trimmed = [
    out_ids[len(in_ids):]
    for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)
print(output_text)
```
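Since this is a reasoning-focused model, the output may contain an explicit thinking segment before the final answer. The optional post-processing below assumes the Qwen3-style `<think>...</think>` convention; adjust or skip it if the model formats its reasoning differently.

```python
# Split the chain of thought from the final answer, if the model emits <think> tags.
raw = output_text[0]
if "</think>" in raw:
    thinking, answer = raw.split("</think>", 1)
    print("Reasoning:", thinking.replace("<think>", "").strip())
    print("Answer:", answer.strip())
else:
    print(raw)
```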
Intended Use
- Reasoning & Chain-of-Thought Tasks: Deep multi-step reasoning powered by DeepSeek-distilled traces
- Instruction Following: Hybrid prompts requiring both instruction adherence and reasoning
- Red-Teaming & Alignment Research: Evaluating reduced-refusal systems and refusal direction analysis
- Local High-Performance Deployment: Multi-GPU or optimized inference setups (see the serving sketch after this list)
- Research on Abliteration: Studying the effects of ablation-based training on reasoning preservation
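For multi-GPU serving, one common option is vLLM's OpenAI-compatible server. The command below is a generic, untested example; the flag values (GPU count, context length) are assumptions to adapt to your hardware.

```shell
pip install vllm
# Serve the model across 2 GPUs with a 32k context window (adjust as needed).
vllm serve prithivMLmods/Q3.6-27B-DS-v4-Flash-DA \
  --tensor-parallel-size 2 \
  --max-model-len 32768
```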
Limitations & Risks
Important Note: This model intentionally minimizes built-in safety refusals.
- Sensitive Content Risk: May produce unrestricted or controversial outputs
- User Responsibility: Requires careful and ethical usage
- High Compute Demand: Requires significant VRAM or optimized quantization for efficient inference (see the quantization sketch after this list)
- Abliteration Trade-offs: Reduced refusal may impact safety alignment and output filtering
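To reduce the VRAM footprint, one option is on-the-fly 4-bit quantization with bitsandbytes. This is a minimal sketch rather than an officially tested configuration; it assumes the same model class as the quick-start example above.

```python
import torch
from transformers import BitsAndBytesConfig, Qwen3_5ForConditionalGeneration

# NF4 4-bit quantization roughly quarters the weight memory footprint.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = Qwen3_5ForConditionalGeneration.from_pretrained(
    "prithivMLmods/Q3.6-27B-DS-v4-Flash-DA",
    quantization_config=bnb_config,
    device_map="auto",
)
```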