Co-rewarding-III-Llama-3.2-3B-Instruct-DAPO14k

This repository contains Co-rewarding-III-Llama-3.2-3B-Instruct-DAPO14k, a Llama-3.2-3B-Instruct model fine-tuned with the Co-rewarding-III method on the DAPO14k training set. The model was presented in the paper Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models.

For more details about the Co-rewarding framework and its implementation, see our GitHub repository: https://github.com/tmlr-group/Co-rewarding.
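The model follows the standard Llama-3.2-Instruct chat format, so it can be loaded with the Hugging Face `transformers` library. Below is a minimal inference sketch; the example prompt and generation settings are illustrative assumptions, not the evaluation setup from the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TMLR-Group-HF/Co-rewarding-III-Llama-3.2-3B-Instruct-DAPO14k"

# Load the tokenizer and the model (weights are stored in BF16)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Example math-reasoning prompt; replace with your own question
messages = [
    {"role": "user", "content": "What is 17 * 24? Think step by step."},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate a response and decode only the newly generated tokens
outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```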

Citation

If you use our datasets or models, please cite our paper!

@article{zhang2025coreward,
      title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models}, 
      author={Zizhuo Zhang and Jianing Zhu and Xinmu Ge and Zihua Zhao and Zhanke Zhou and Xuan Li and Xiao Feng and Jiangchao Yao and Bo Han},
      journal={arXiv preprint arXiv:2508.00410},
      year={2025},
}