Commit d070d11 (parent: 8ce189e) · Update README.md

README.md CHANGED
pretty_name: jeebench
size_categories:
- n<1K
---

# JEEBench (EMNLP 2023)

Repository for the code and dataset for the paper "Have LLMs Advanced Enough? A Challenging Problem Solving Benchmark for Large Language Models", accepted at EMNLP 2023 as a main-conference paper.

https://aclanthology.org/2023.emnlp-main.468/
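The paper's abstract (quoted in the citation below) mentions a post-hoc confidence-thresholding method over self-consistency for coping with negative marking. As a rough sketch of that idea, and not the paper's actual implementation (the threshold value and marking scheme here are placeholder assumptions), one can majority-vote over sampled answers and abstain when agreement is low:

```python
from collections import Counter

def select_response(samples, threshold=0.5):
    """Pick the majority answer from self-consistency samples;
    abstain (return None) when its vote share is below `threshold`."""
    counts = Counter(samples)
    answer, votes = counts.most_common(1)[0]
    return answer if votes / len(samples) >= threshold else None

# Under negative marking (e.g. +1 for a correct answer, -0.25 for a wrong
# one, 0 for abstaining), skipping low-agreement questions can raise the
# expected score.
print(select_response(["B", "B", "B", "A"]))  # high agreement: answers "B"
print(select_response(["A", "B", "C", "D"]))  # low agreement: abstains (None)
```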

## Citation

If you use our dataset in your research, please cite it as follows:

```latex
@inproceedings{arora-etal-2023-llms,
    title = "Have {LLM}s Advanced Enough? A Challenging Problem Solving Benchmark For Large Language Models",
    author = "Arora, Daman and
      Singh, Himanshu and
      {Mausam}",
    editor = "Bouamor, Houda and
      Pino, Juan and
      Bali, Kalika",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.emnlp-main.468",
    doi = "10.18653/v1/2023.emnlp-main.468",
    pages = "7527--7543",
    abstract = "The performance of large language models (LLMs) on existing reasoning benchmarks has significantly improved over the past years. In response, we present JEEBench, a considerably more challenging benchmark dataset for evaluating the problem solving abilities of LLMs. We curate 515 challenging pre-engineering mathematics, physics and chemistry problems from the highly competitive IIT JEE-Advanced exam. Long-horizon reasoning on top of deep in-domain knowledge is essential for solving problems in this benchmark. Our evaluation on various open-source and proprietary models reveals that the highest performance, even after using techniques like self-consistency, self-refinement and chain-of-thought prompting, is less than 40{\%}. The typical failure modes of GPT-4, the best model, are errors in algebraic manipulation, difficulty in grounding abstract concepts into mathematical equations accurately and failure in retrieving relevant domain-specific concepts. We also observe that by mere prompting, GPT-4 is unable to assess risk introduced by negative marking for incorrect answers. For this, we develop a post-hoc confidence-thresholding method over self-consistency, which enables effective response selection. We hope that our challenging benchmark will guide future research in problem-solving using LLMs.",
}
```