---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-14B-Instruct
---

## Model Description

**RL-MemAgent-14B** is part of the **MemAgent** framework, which enables Large Language Models (LLMs) to process arbitrarily long texts through end-to-end reinforcement learning, without altering their core architecture.
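
MemAgent reads a long input chunk by chunk while maintaining a fixed-size textual memory that the RL-trained policy overwrites at each step, so the effective context window never grows with document length. Below is a minimal conceptual sketch of that loop; the `update_memory` and `answer` helpers and all names are illustrative assumptions, not the repository's actual API (see the MemAgent repository for the real interface).

```python
# Conceptual sketch only: illustrates the chunked, overwrite-style memory
# loop described in the MemAgent paper. `model.update_memory` and
# `model.answer` are hypothetical helpers, not the repository's API.
def answer_long_document(model, document: str, question: str,
                         chunk_size: int = 4096) -> str:
    memory = ""  # fixed-size scratchpad carried across chunks
    for start in range(0, len(document), chunk_size):
        chunk = document[start:start + chunk_size]
        # The model sees only (memory, current chunk, question) and emits
        # an updated memory, keeping the prompt length bounded.
        memory = model.update_memory(memory=memory, chunk=chunk,
                                     question=question)
    # The final answer is generated from the compressed memory alone.
    return model.answer(memory=memory, question=question)
```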

## Usage

This model is ideal for tasks that require understanding and processing very long documents, such as comprehensive question answering, summarizing extensive reports, or analyzing large codebases.

For detailed instructions on how to use, evaluate, and train models within the MemAgent framework, please refer to the main [MemAgent GitHub repository](https://github.com/BytedTsinghua-SIA/MemAgent).
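
As a Qwen2.5-based instruction model, the checkpoint can presumably be loaded with Hugging Face `transformers` for standard-length generation; the repository id below is an assumption based on this card's name, and long-document inference should follow the MemAgent workflow instead.

```python
# Hedged example: plain transformers loading. The repo id is assumed from
# this model card's name; for long-context use, follow the MemAgent repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BytedTsinghua-SIA/RL-MemAgent-14B"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Summarize this report: ..."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:],
                       skip_special_tokens=True))
```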

## Links

* **Paper:** [https://arxiv.org/abs/2507.02259](https://arxiv.org/abs/2507.02259)
* **Blog:** [https://memagent-sialab.github.io/](https://memagent-sialab.github.io/)
* **GitHub:** [https://github.com/BytedTsinghua-SIA/MemAgent](https://github.com/BytedTsinghua-SIA/MemAgent)

## Citation

If you find this work useful, please consider citing our paper:

```bibtex
@article{yu2025memagent,
  title={MemAgent: Reshaping Long-Context LLM with Multi-Conv RL-based Memory Agent},
  author={Yu, Hongli and Chen, Tinghong and Feng, Jiangtao and Chen, Jiangjie and Dai, Weinan and Yu, Qiying and Zhang, Ya-Qin and Ma, Wei-Ying and Liu, Jingjing and Wang, Mingxuan and others},
  journal={arXiv preprint arXiv:2507.02259},
  year={2025}
}
```