
🧪 Part of an Experiment
This model investigates the effect of changing LoRA rank on the same tune.
The learning rate was also increased from 8e-6 to 2e-5.
Find v1 here.
Dumpling-Qwen2.5-7B-1k-r64-2e-5
nbeerbower/EVA-abliterated-Qwen2.5-7B finetuned on:
Method
QLoRA ORPO tune with 2x RTX 3090 for 2 epochs.
```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# QLoRA config
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch_dtype,  # set earlier in the script, e.g. torch.bfloat16
    bnb_4bit_use_double_quant=True,
)

# LoRA config
peft_config = LoraConfig(
    r=64,
    lora_alpha=64,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=['up_proj', 'down_proj', 'gate_proj', 'k_proj', 'q_proj', 'v_proj', 'o_proj'],
)
```
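For intuition on what `r=64` costs, a rough sketch of the trainable-parameter math: LoRA adds a low-rank pair `B @ A` per targeted weight, so each adapted projection contributes `r * (d_in + d_out)` parameters. The 3584-wide square projection below is illustrative only (it is not read from the model config):

```python
def lora_param_count(d_in: int, d_out: int, r: int) -> int:
    # LoRA learns dW = B @ A, with A of shape (r, d_in) and B of shape (d_out, r),
    # so the adapter adds r * (d_in + d_out) trainable parameters per weight.
    return r * (d_in + d_out)

# Hypothetical square projection (e.g. a 3584-wide q_proj) at r=64
full = 3584 * 3584
added = lora_param_count(3584, 3584, 64)
print(added, f"{added / full:.2%}")  # 458752 3.57%
```

Note also that with `lora_alpha=64` equal to `r=64`, the effective scaling factor `lora_alpha / r` is 1.0, so doubling the rank here does not change the update scale, only the adapter capacity.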