---
language:
- en
license: mit
size_categories:
- 10K<n<100K
task_categories:
- visual-question-answering
- image-text-to-text
pretty_name: GeoExpand & GeoSynth
tags:
- mathematical-reasoning
- geometry-problem-solving
- multimodal-reasoning
---
# GeoGeo: GeoExpand & GeoSynth

This repository contains the **GeoExpand** and **GeoSynth** datasets, originally introduced in the paper [Enhancing the Geometric Problem-Solving Ability of Multimodal LLMs via Symbolic-Neural Integration](https://arxiv.org/pdf/2504.12773). The datasets are designed to enhance and evaluate the geometric problem-solving capabilities of multimodal large language models.

GitHub Repository: [ycpNotFound/GeoGen](https://github.com/ycpNotFound/GeoGen)

These datasets are also referenced and contextualized in the survey paper [A Survey of Deep Learning for Geometry Problem Solving](https://huggingface.co/papers/2507.11936), which provides a comprehensive overview of the field. The survey's reading list is maintained on its GitHub repository: [majianz/gps-survey](https://github.com/majianz/gps-survey).
- **GeoExpand** includes 45,526 Q&A samples, generated from a total of 4,849 images drawn from Geometry3K and PGPS9K.
- **GeoSynth** includes 62,868 Q&A samples, each paired with its own synthesized diagram.
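
As a quick-start sketch, the data can be loaded with the 🤗 `datasets` library. The Hub repository id, split name, and field names below are placeholders and not confirmed by this card; adjust them to the released dataset.

```python
from datasets import load_dataset

# "your-namespace/GeoGeo" is a placeholder repository id -- replace it with
# the actual Hub path of this dataset. The split name is also an assumption.
ds = load_dataset("your-namespace/GeoGeo", split="train")

# Inspect one Q&A sample; the exact fields (image, question, answer, etc.)
# depend on the released schema.
print(ds[0])
```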