Model Card for SvS-32B (from Qwen2.5-32B-Instruct)
[Website] • [Dataset] • [Models] • [Paper] • [GitHub] • [Twitter] • [Rednote]
This repository provides the official model checkpoints for SvS. The SvS model is trained on the DAPO-Math-17k dataset together with 8k open-ended problems from DeepMath-103k (included in this repository as DAPO_17k_DeepMath_8k.parquet).
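If you want to inspect the training prompts, the parquet file can be loaded directly. The snippet below is a minimal sketch: it assumes the file is hosted alongside the checkpoints in this model repository (RLVR-SvS/SvS-Qwen-32B) and makes no assumption about the column layout, which you can inspect after loading.

from huggingface_hub import hf_hub_download
from datasets import load_dataset

# Fetch the training data file from this model repository (assumed location).
parquet_path = hf_hub_download(
    repo_id="RLVR-SvS/SvS-Qwen-32B",
    filename="DAPO_17k_DeepMath_8k.parquet",
    repo_type="model",
)

# Load it as a Hugging Face dataset and inspect the schema and a sample row.
train_data = load_dataset("parquet", data_files=parquet_path, split="train")
print(train_data)
print(train_data[0])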
Inference
We recommend using our official inference template, which is inspired by the chat template of the Qwen2.5-Math series.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "RLVR-SvS/SvS-Qwen-32B"
device = "cuda"  # the device to run generation on

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Find the value of $x$ that satisfies the equation $4x+5 = 6x+7$."
messages = [
    {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{}."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=8192
)
# Strip the prompt tokens so only the newly generated tokens are decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
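Because the system prompt asks the model to put its final answer inside \boxed{}, a small post-processing step can pull that answer out of response. The helper below is a minimal sketch of our own (the name extract_boxed_answer and the brace-matching logic are not part of the official SvS tooling):

def extract_boxed_answer(text):
    """Return the contents of the last \\boxed{...} in `text`, or None if absent."""
    marker = "\\boxed{"
    start = text.rfind(marker)
    if start == -1:
        return None
    # Walk forward and match braces so nested expressions like \frac{a}{b} survive.
    depth = 0
    for i in range(start + len(marker) - 1, len(text)):
        if text[i] == "{":
            depth += 1
        elif text[i] == "}":
            depth -= 1
            if depth == 0:
                return text[start + len(marker):i]
    return None  # unbalanced braces

print(extract_boxed_answer(response))  # e.g. "-1" for the equation above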
Cite Us
If you find the model helpful, please consider citing our paper:
@misc{liang2025pass1selfplayvariationalproblem,
      title={Beyond Pass@1: Self-Play with Variational Problem Synthesis Sustains RLVR},
      author={Xiao Liang and Zhongzhi Li and Yeyun Gong and Yelong Shen and Ying Nian Wu and Zhijiang Guo and Weizhu Chen},
      year={2025},
      eprint={2508.14029},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2508.14029},
}