ANPMI: Assessing the True Comprehension Capabilities of LLMs for Multiple Choice Questions

Gyeongje Cho et al.
Abstract: Multiple-choice benchmarks, consisting of various prompts and choices, are among the most widely used methods to assess a language model’s natural language understanding capability. Given a specific prompt, we typically compute $P(Choice|Prompt)$ to evaluate how likely a language model is to generate the correct choice compared to incorrect ones. However, we observe that performance measured using this approach reflects not only the model’s comprehension of the prompt but also its inherent biases for certain choices regardless of the prompt. This issue makes it challenging to accurately measure a model’s natural language understanding, as models may select the answer without fully understanding the prompt. To address this limitation, we propose a novel metric called ANPMI, which normalizes Pointwise Mutual Information (PMI) by $-\log P(Choice)$. ANPMI provides a more accurate assessment of the model’s natural language understanding by ensuring that it is challenging to answer a question without properly understanding the prompt.
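The following is a minimal sketch of how a metric of this form could be computed from per-choice log-probabilities, assuming PMI is defined as $\log P(Choice|Prompt) - \log P(Choice)$ and the normalization divides by $-\log P(Choice)$ as stated in the abstract. The helper names and the numeric values are illustrative assumptions, not taken from the paper; obtaining the two log-probabilities from an actual language model is left out.

```python
import math

def anpmi(log_p_choice_given_prompt: float, log_p_choice: float) -> float:
    """ANPMI = PMI / (-log P(Choice)), with
    PMI = log P(Choice|Prompt) - log P(Choice).
    Both arguments are log-probabilities of the full choice string,
    scored with and without the prompt respectively."""
    pmi = log_p_choice_given_prompt - log_p_choice
    return pmi / (-log_p_choice)

def pick_answer(scored):
    """scored: list of (choice, log P(choice|prompt), log P(choice)).
    Selects the choice with the highest ANPMI instead of the highest
    raw conditional probability, so a choice that the model already
    prefers regardless of the prompt gains no advantage."""
    return max(scored, key=lambda t: anpmi(t[1], t[2]))[0]

# Illustrative numbers only: the prompt raises this choice's probability
# from 1e-6 to 1e-3, so roughly half of its surprisal is explained by
# the prompt and the ANPMI score is about 0.5.
log_p_prior = math.log(1e-6)   # log P(Choice), prompt-free scoring
log_p_cond = math.log(1e-3)    # log P(Choice | Prompt)
print(anpmi(log_p_cond, log_p_prior))
```

Under this reading, a score near 0 means the prompt contributed almost nothing beyond the model's prior preference for the choice, while scores approaching 1 mean the choice only becomes likely once the prompt is understood.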
Submission history
From: Gyeongje Cho
[v1] Wed, 26 Feb 2025 04:10:18 UTC (493 KB)
[v2] Thu, 27 Feb 2025 08:11:40 UTC (493 KB)