---
license: mit
task_categories:
  - question-answering
size_categories:
  - 10M<n<100M
---

# 🧠 EgoExoBench: A Cross-Perspective Video Understanding Benchmark

## Dataset Summary

EgoExoBench is a benchmark designed to evaluate the cross-perspective understanding capabilities of multimodal large language models (MLLMs).
It contains synchronized and asynchronous egocentric (first-person) and exocentric (third-person) video pairs, along with multiple-choice questions that assess semantic alignment, viewpoint association, and temporal reasoning between the two perspectives.

## Features

Each sample contains the following fields (an illustrative record is sketched after the list):

- Question: A natural-language question testing cross-perspective reasoning.
- Options: Multiple-choice answers (A/B/C/D).
- Answer: The correct option label.
- Videos: Egocentric and exocentric videos.
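
The exact field names depend on the released files; purely as an illustration, a single record might look something like this (all keys and values below are assumptions, not the actual schema):

```python
# Hypothetical EgoExoBench MCQ record; field names and file paths are illustrative only.
sample = {
    "question": "Which exocentric clip shows the same action as the egocentric clip?",
    "options": {"A": "exo_001.mp4", "B": "exo_002.mp4", "C": "exo_003.mp4", "D": "exo_004.mp4"},
    "answer": "B",  # correct option label
    "videos": ["ego_042.mp4", "exo_001.mp4", "exo_002.mp4", "exo_003.mp4", "exo_004.mp4"],
}
```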

## Evaluation Metric

Accuracy (%)
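
Accuracy is the percentage of questions for which the predicted option label matches the ground-truth label. A minimal sketch of the computation (assuming predictions and references are lists of option letters):

```python
def accuracy(predictions, references):
    """Percentage of multiple-choice questions answered correctly."""
    correct = sum(p == r for p, r in zip(predictions, references))
    return 100.0 * correct / len(references)

print(accuracy(["A", "B", "C", "D"], ["A", "B", "C", "A"]))  # 75.0
```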

## Data Splits

| Split | #Samples |
|-------|----------|
| Test  | 7,330    |

## Example Usage

```python
from datasets import load_dataset

dataset = load_dataset("YourUsername/EgoExoBench")

print(dataset["test"][0])
```
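
To evaluate a model, each sample can be formatted as a multiple-choice prompt and the predicted option letter compared against the ground-truth label. A rough sketch, reusing the illustrative field names from the Features section and a hypothetical `model_predict` call (neither is part of this dataset's actual API):

```python
def build_prompt(sample):
    """Turn one MCQ sample into a text prompt; field names are assumptions."""
    options = "\n".join(f"{label}. {text}" for label, text in sample["options"].items())
    return f"{sample['question']}\n{options}\nAnswer with a single letter (A, B, C, or D)."

correct = 0
for sample in dataset["test"]:
    # `model_predict` is a placeholder for whatever MLLM is being evaluated;
    # it is expected to return a single option letter such as "B".
    prediction = model_predict(build_prompt(sample), sample["videos"])
    correct += prediction.strip().upper() == sample["answer"]

print(f"Accuracy: {100.0 * correct / len(dataset['test']):.2f}%")
```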

## Citation

If you use EgoExoBench in your research, please cite:

```bibtex
@misc{he2025egoexobench,
      title={EgoExoBench: A Benchmark for First- and Third-person View Video Understanding in MLLMs}, 
      author={Yuping He and Yifei Huang and Guo Chen and Baoqi Pei and Jilan Xu and Tong Lu and Jiangmiao Pang},
      year={2025},
      eprint={2507.18342},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2507.18342}
}
```