---
task_categories:
- video-text-to-text
license: mit
dataset_info:
  features:
  - name: question
    dtype: string
  - name: index2ans
    struct:
    - name: A
      dtype: string
    - name: B
      dtype: string
    - name: C
      dtype: string
    - name: D
      dtype: string
  - name: answer
    dtype: string
  - name: video_path
    dtype: string
  - name: category
    dtype: string
  - name: subcategory
    dtype: string
  splits:
  - name: test
    num_bytes: 400189
    num_examples: 1214
  download_size: 105256
  dataset_size: 400189
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---

# Physical AI Bench - Understanding

PAI-Bench (Physical AI Bench) is a comprehensive benchmark designed to evaluate physical AI generation and understanding capabilities across various real-world scenarios. This particular dataset, **PAI-Bench-U**, focuses specifically on **Video Understanding** tasks, comprising 2,808 real-world cases with task-aligned metrics.

- **Paper:** [PAI-Bench: A Comprehensive Benchmark For Physical AI](https://huggingface.co/papers/2512.01989)
- **Code:** [GitHub Repository](https://github.com/SHI-Labs/physical-ai-bench)

## Citation

If you use Physical AI Bench in your research, please cite:

```bibtex
@misc{zhou2025paibenchcomprehensivebenchmarkphysical,
      title={PAI-Bench: A Comprehensive Benchmark For Physical AI},
      author={Fengzhe Zhou and Jiannan Huang and Jialuo Li and Deva Ramanan and Humphrey Shi},
      year={2025},
      eprint={2512.01989},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2512.01989},
}
```
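## Usage sketch

Each test record pairs a `question` with four options under `index2ans` (keys `A`–`D`) and a letter `answer`, plus a `video_path` and category labels. The sketch below shows one way to turn such a record into a multiple-choice prompt; the sample values are illustrative placeholders, not actual dataset rows, and real examples would be loaded with the Hugging Face `datasets` library (split `test`).

```python
# Illustrative record matching the schema above (placeholder values,
# not an actual PAI-Bench-U row).
example = {
    "question": "What happens to the ball after it rolls off the table?",
    "index2ans": {
        "A": "It floats in place.",
        "B": "It falls to the floor.",
        "C": "It rolls back up.",
        "D": "It stays on the edge.",
    },
    "answer": "B",
    "video_path": "videos/example.mp4",  # hypothetical path
    "category": "physics",               # hypothetical label
    "subcategory": "gravity",            # hypothetical label
}

def build_prompt(record: dict) -> str:
    """Format a PAI-Bench-U-style record as a multiple-choice prompt."""
    # Sort by option letter so choices always appear as A, B, C, D.
    options = "\n".join(
        f"{letter}. {text}"
        for letter, text in sorted(record["index2ans"].items())
    )
    return f"{record['question']}\n{options}\nAnswer with the option letter."

print(build_prompt(example))
```

A model's reply can then be scored by comparing its chosen letter against the record's `answer` field.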