---
title: README
emoji: 🌍
colorFrom: blue
colorTo: blue
sdk: static
pinned: false
license: apache-2.0
---
<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/N8lP93rB6lL3iqzML4SKZ.png' width=100px>
<h1 align="center"><b>On Path to Multimodal Generalist: General-Level and General-Bench</b></h1>
<p align="center">
<a href="https://generalist.top/">[📖 Project]</a>
<a href="https://generalist.top/leaderboard">[🏆 Leaderboard]</a>
<a href="https://arxiv.org/abs/2505.04620">[📄 Paper]</a>
<a href="https://huggingface.co/papers/2505.04620">[🤗 Paper-HF]</a>
<a href="https://huggingface.co/General-Level/General-Bench-Closeset">[🤗 Dataset-HF (Close-Set)]</a>
<a href="https://huggingface.co/General-Level/General-Bench-Openset">[🤗 Dataset-HF (Open-Set)]</a>
<a href="https://github.com/path2generalist/General-Level">[📝 Github]</a>
</p>
---
<h1 align="center" style="color: red"><b>General-Level Scorer</b></h1>
---
</div>
<h1 align="center" style="color:#F27E7E"><em>
Does higher performance across tasks indicate a stronger MLLM capability, bringing us closer to AGI?
<br>
NO! But <b style="color:red">synergy</b> does.
</em></h1>
Most current MLLMs build predominantly on the language intelligence of LLMs to simulate an indirect form of multimodal intelligence, merely extending language intelligence to aid multimodal understanding. While LLMs (e.g., ChatGPT) have already demonstrated such synergy within NLP, reflecting genuine language intelligence, the vast majority of MLLMs unfortunately fail to achieve it across modalities and tasks.
We argue that the key to advancing towards AGI lies in the synergy effect: a capability that enables knowledge learned in one modality or task to generalize and enhance mastery in other modalities or tasks, fostering mutual improvement across modalities and tasks through interconnected learning.
<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/-Asn68kJGjgqbGqZMrk4E.png' width=950px>
</div>
---
# 🏆 Overall Leaderboard<a name="leaderboard" />
<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/s1Q7t6Nmtnmv3bSvkquT0.png' width=1200px>
</div>
------
# 🚀 General-Level <a name="level" />
**A 5-level evaluation system with a new norm for assessing multimodal generalists (multimodal LLMs/agents).
The core is the use of <b style="color:red">synergy</b> as the evaluative criterion, categorizing capabilities based on whether MLLMs preserve synergy across comprehension and generation, as well as across multimodal interactions.**
General-Level evaluates generalists based on the levels and strengths of the synergy they preserve. Specifically, beyond the no-synergy baseline, we define three scopes of synergy, ranked from low to high: task-level synergy ('task-task'), paradigm-level synergy ('comprehension-generation'), and cross-modal total synergy ('modality-modality'), as illustrated here:
<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/lnvh5Qri9O23uk3BYiedX.jpeg' width=1000px>
</div>
Achieving these levels of synergy becomes progressively more challenging, corresponding to higher degrees of general intelligence. Assume we have a benchmark of various modalities and tasks, where we can categorize tasks under these modalities into the Comprehension group and the Generation group, as well as the language (i.e., NLP) group, as illustrated here:
<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/IDTBZ6RgzjO1cRhHPjS8W.png' width=900px>
</div>
Let’s denote the number of datasets or tasks within the Comprehension task group by M; the number within the Generation task group by N; and the number of NLP tasks by T.
Now, we demonstrate the specific definition and calculation of each level:
<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/BPqs-3UODQWvjFzvZYkI4.png' width=1000px>
</div>
------
<h1 style="font-weight: bold; text-decoration: none;"> ⚠️ Scoring Relaxation <a name="relaxation" /></h1>
A central aspect of our General-Level framework lies in **how synergy effects are computed**. Under the standard understanding of synergy,
the performance of a generalist model when jointly modeling tasks A and B (i.e., Pθ(y|A,B)) should exceed its performance when modeling task A alone (Pθ(y|A))
or task B alone (Pθ(y|B)). However, adopting this approach poses a significant challenge that hinders the measurement of synergy: there is no feasible way to
establish the two independent distributions Pθ(y|A) and Pθ(y|B) and the joint distribution Pθ(y|A,B).
This limitation arises because a given generalist model has already undergone extensive pre-training and fine-tuning, where tasks A and B have likely been jointly modeled.
It is impractical to retrain such a generalist to isolate the learning and modeling of tasks A or B independently in order to derive these distributions.
Even if it were feasible, such retraining would incur excessive redundant computation and inference on the benchmark data.
To simplify and relax the evaluation of synergy, we introduce a key assumption in the scoring algorithm:
> Theoretically, we posit that the stronger a model's synergy capability, the more likely it is to surpass the task performance of SoTA specialists when synergy is effectively employed.
> Then, we can simplify the synergy measurement as: if a generalist outperforms a SoTA specialist in a specific task, we consider it as evidence of a synergy effect, i.e.,
> leveraging the knowledge learned from other tasks or modalities to enhance its performance in the targeted task.
By making this assumption, we avoid the need for direct pairwise measurements between 'task-task', 'comprehension-generation', or 'modality-modality', which would otherwise require complex and computationally intensive algorithms.
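In code, the relaxed criterion reduces to a per-task comparison against SoTA specialist scores. Below is a minimal Python sketch of that check; the task names, scores, and the `synergy_evidence` helper are illustrative and not part of the official scorer:

```python
# Relaxed synergy check: a generalist beating the SoTA specialist
# on a task counts as evidence of synergy. All names and numbers
# here are illustrative placeholders.

def synergy_evidence(generalist: dict[str, float],
                     specialist_sota: dict[str, float]) -> dict[str, bool]:
    """Flag each shared task where the generalist outperforms the
    SoTA specialist on the same metric."""
    return {
        task: generalist[task] > specialist_sota[task]
        for task in generalist
        if task in specialist_sota
    }

generalist = {"image-captioning": 41.2, "vqa": 78.5, "ocr": 63.0}
specialist_sota = {"image-captioning": 39.8, "vqa": 80.1, "ocr": 65.4}

evidence = synergy_evidence(generalist, specialist_sota)
rate = sum(evidence.values()) / len(evidence)
print(evidence)                      # only image-captioning shows synergy
print(f"synergy evidence on {rate:.0%} of tasks")
```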
------
<h1 style="font-weight: bold; text-decoration: none;"> 📌 Citation <a name="cite" /></h1>
If you find our benchmark useful in your research, please kindly consider citing us:
```bibtex
@article{fei2025pathmultimodalgeneralistgenerallevel,
title={On Path to Multimodal Generalist: General-Level and General-Bench},
author={Hao Fei and Yuan Zhou and Juncheng Li and Xiangtai Li and Qingshan Xu and Bobo Li and Shengqiong Wu and Yaoting Wang and Junbao Zhou and Jiahao Meng and Qingyu Shi and Zhiyuan Zhou and Liangtao Shi and Minghe Gao and Daoan Zhang and Zhiqi Ge and Weiming Wu and Siliang Tang and Kaihang Pan and Yaobo Ye and Haobo Yuan and Tao Zhang and Tianjie Ju and Zixiang Meng and Shilin Xu and Liyu Jia and Wentao Hu and Meng Luo and Jiebo Luo and Tat-Seng Chua and Shuicheng Yan and Hanwang Zhang},
eprint={2505.04620},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2505.04620},
}
```
------
# GenBench Scoring System - User Guide
This system evaluates large models on the General-Bench multimodal task suite, covering prediction generation, per-task scoring, and final score computation.
## Environment Setup
- Python 3.9 or later
- Install the dependencies in advance (e.g., pandas, numpy, openpyxl)
- For Video Generation evaluation, install the dependencies following the steps in video_generation_evaluation/README.md
- For Video Comprehension evaluation, install the dependencies following the README.md of [Sa2VA](https://github.com/magic-research/Sa2VA)
## Dataset Download
- **Open Set (public datasets)**: download all data from [HuggingFace General-Bench-Openset](https://huggingface.co/datasets/General-Level/General-Bench-Openset) and unpack it into the `General-Bench-Openset/` directory.
- **Close Set (private datasets)**: download all data from [HuggingFace General-Bench-Closeset](https://huggingface.co/datasets/General-Level/General-Bench-Closeset) and unpack it into the `General-Bench-Closeset/` directory.
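As an alternative to a manual download, the snippet below fetches a dataset repo with `huggingface_hub` (a sketch, assuming `huggingface_hub` is installed via `pip install huggingface_hub`); any archives inside still need to be unpacked into the layout above.

```python
# Scripted download of the Open Set via huggingface_hub (assumed
# installed). Swap repo_id/local_dir for the Close Set as needed.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="General-Level/General-Bench-Openset",
    repo_type="dataset",
    local_dir="General-Bench-Openset/",
)
```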
## One-Command Run
To complete the full pipeline, simply run the main script `run.sh`:
```bash
bash run.sh
```
This command performs, in order:
1. Generate the prediction results for each modality
2. Compute the score for each task
3. Compute the final Level scores
## Step-by-Step Run (Optional)
If you only need to run some of the steps, use the `--step` flag:
- Run only step 1 (generate predictions):
```bash
bash run.sh --step 1
```
- Run only steps 1 and 2:
```bash
bash run.sh --step 12
```
- Run only steps 2 and 3:
```bash
bash run.sh --step 23
```
- Without the flag, all steps run by default (equivalent to `--step 123`)
- Step 1: generates the prediction file `prediction.json`, stored in each dataset's directory alongside `annotation.json`
- Step 2: computes the score for each task, stored in `outcome/{model_name}_result.xlsx`
- Step 3: computes the model's Level scores
> **Note:**
> - With the **Close Set (private datasets)**, run only step 1 (i.e., `bash run.sh --step 1`) and submit the generated `prediction.json` files to the system; the sketch after this note shows one way to enumerate them.
> - With the **Open Set (public datasets)**, run steps 1, 2, and 3 in sequence (i.e., `bash run.sh --step 123`) to complete the full evaluation pipeline.
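Before scoring or submitting, it can help to verify that step 1 actually produced a `prediction.json` for every dataset. A minimal sketch, assuming only the directory layout described above; point `root` at whichever dataset directory you use:

```python
# List every prediction.json written by step 1 under the dataset
# root. Use General-Bench-Openset/ or General-Bench-Closeset/.
from pathlib import Path

root = Path("General-Bench-Closeset")
predictions = sorted(root.rglob("prediction.json"))
print(f"found {len(predictions)} prediction files")
for path in predictions[:5]:  # preview the first few
    print(path)
```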
## Viewing Results
- Prediction results (`prediction.json`) are written to each task's dataset folder, next to `annotation.json`.
- Scoring results (e.g., `Qwen2.5-7B-Instruct_result.xlsx`) are written to the `outcome/` directory; see the loading sketch after this list.
- The final Level scores are printed directly to the terminal.
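To inspect the per-task scores without opening Excel, the workbook can be loaded with pandas (openpyxl, listed under Environment Setup, handles `.xlsx`). A minimal sketch; the exact sheet columns depend on the generated file and are not assumed here:

```python
# Quick look at the per-task scoring workbook produced by step 2.
# Requires pandas and openpyxl from the Environment Setup section.
import pandas as pd

df = pd.read_excel("outcome/Qwen2.5-7B-Instruct_result.xlsx")
print(df.columns.tolist())  # inspect whatever columns the scorer wrote
print(df.head())
```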
## Directory Layout
- `General-Bench-Openset/`: public dataset directory
- `General-Bench-Closeset/`: private dataset directory
- `outcome/`: output directory
- `references/`: reference template directory
- `run.sh`: main entry script (users are encouraged to use only this script)
## FAQ
- If a dependency is missing, install the corresponding Python package indicated in the error message.
- To customize the model or data paths, edit the relevant variables in the `run.sh` script.
---
For further help, contact the system maintainers or consult the detailed developer documentation.