arxiv:2602.20629

QEDBENCH: Quantifying the Alignment Gap in Automated Evaluation of University-Level Mathematical Proofs

Published on Feb 24 · Submitted by Quanquan Liu on Mar 4
Abstract

AI-generated summary: Standard LLM-based evaluation protocols show systematic alignment gaps in mathematical proof assessment, with some models exhibiting significant score inflation and others demonstrating degraded performance in discrete mathematics domains.

As Large Language Models (LLMs) saturate elementary benchmarks, the research frontier has shifted from generation to the reliability of automated evaluation. We demonstrate that standard "LLM-as-a-Judge" protocols suffer from a systematic Alignment Gap when applied to upper-undergraduate to early-graduate-level mathematics. To quantify this, we introduce QEDBench, the first large-scale dual-rubric benchmark that systematically measures alignment with human experts on university-level math proofs by contrasting course-specific rubrics against expert common-knowledge criteria. By deploying a dual-evaluation matrix (7 judges × 5 solvers) against 1,000+ hours of human evaluation, we reveal that certain frontier evaluators, such as Claude Opus 4.5, DeepSeek-V3, Qwen 2.5 Max, and Llama 4 Maverick, exhibit significant positive bias (up to +0.18, +0.20, +0.30, and +0.36 mean score inflation, respectively). Furthermore, we uncover a critical reasoning gap in the discrete domain: while Gemini 3.0 Pro achieves state-of-the-art performance (0.91 average human evaluation score), other reasoning models such as GPT-5 Pro and Claude Sonnet 4.5 degrade significantly in discrete domains. Specifically, their average human evaluation scores drop to 0.72 and 0.63 in Discrete Math, and to 0.74 and 0.50 in Graph Theory. In addition to these research results, we release QEDBench as a public benchmark for evaluating and improving AI judges. The benchmark is publicly available at https://github.com/qqliu/Yale-QEDBench.
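To make the "mean score inflation" numbers concrete, here is a minimal sketch (not taken from the paper's released code) of how such a bias could be computed for one judge: average the judge's scores and subtract the average of the human experts' scores on the same set of proofs. The function and variable names below are illustrative assumptions.

```python
# Minimal sketch (assumed, not from the QEDBench codebase): mean score
# inflation for a single LLM judge, i.e. the judge's average score minus
# the human experts' average score over the same graded proofs.
def mean_score_inflation(judge_scores, human_scores):
    """Positive values indicate a judge that is more lenient than the humans."""
    assert len(judge_scores) == len(human_scores) and judge_scores
    n = len(judge_scores)
    return sum(judge_scores) / n - sum(human_scores) / n

# Toy example: a judge that over-scores each proof by roughly 0.2.
judge = [0.9, 0.8, 1.0, 0.7]
human = [0.7, 0.6, 0.8, 0.5]
print(f"mean score inflation: {mean_score_inflation(judge, human):+.2f}")  # +0.20
```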

Community


QEDBENCH: Quantifying the Alignment Gap in Automated Evaluation of University-Level Mathematical Proofs

TL;DR: As LLMs max out elementary math benchmarks, the research frontier is shifting from solving math to reliably evaluating it. This paper introduces QEDBench, a large-scale benchmark demonstrating that current frontier "LLM-as-a-Judge" models suffer from severe score inflation and struggle to align with human evaluators at the university (upper undergraduate to early graduate) level.

🔑 Key Highlights:

  • The Benchmark: QEDBench is the first large-scale, dual-rubric alignment benchmark for upper-undergraduate to early-graduate-level math proofs. It contrasts course-specific rubrics against expert common-knowledge criteria, backed by more than 1,000 hours of PhD-level human evaluation.

  • Widespread Score Inflation: When acting as judges (evaluated across a 7-judge × 5-solver matrix; see the aggregation sketch after this list), many frontier models exhibit significant positive bias. Evaluators like Claude Opus 4.5, DeepSeek-V3, Qwen 2.5 Max, and Llama 4 Maverick are overly lenient, inflating mean scores by up to +0.18, +0.20, +0.30, and +0.36, respectively.

  • The Discrete Math Blindspot: While Gemini 3.0 Pro achieves state-of-the-art human evaluation scores (0.91 average score), other heavyweights like GPT-5 Pro and Claude Sonnet 4.5 see their performance plummet in discrete domains. Their human evaluation scores drop to 0.72 / 0.63 in Discrete Math and 0.74 / 0.50 in Graph Theory.

  • Open for the Community: The full dataset, dual-rubric system, and evaluation logs have been publicly released to help researchers build better, more rigorous AI math proof evaluators.
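The dual-evaluation matrix mentioned above can be thought of as a table of mean judge-minus-human gaps, one cell per (judge, solver) pair. Below is a hypothetical aggregation sketch; the record fields ("judge", "solver", "judge_score", "human_score") are assumed names, not the actual QEDBench schema, and this is not the paper's released evaluation code.

```python
# Assumed illustration: collapse per-proof judge-vs-human gaps into a
# judges x solvers matrix of mean differences, mirroring the 7-judge x
# 5-solver dual-evaluation setup described above.
from collections import defaultdict

def alignment_gap_matrix(records):
    """Return {(judge, solver): mean(judge_score - human_score)}."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for r in records:
        key = (r["judge"], r["solver"])
        sums[key] += r["judge_score"] - r["human_score"]
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

# Toy usage with two judges grading proofs from the same solver.
records = [
    {"judge": "judge-A", "solver": "solver-1", "judge_score": 0.9, "human_score": 0.7},
    {"judge": "judge-A", "solver": "solver-1", "judge_score": 0.8, "human_score": 0.6},
    {"judge": "judge-B", "solver": "solver-1", "judge_score": 0.6, "human_score": 0.7},
]
print(alignment_gap_matrix(records))  # judge-A is ~+0.20 lenient, judge-B is ~-0.10 strict
```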

💡 Why it matters:

We increasingly rely on automated LLM pipelines (like LLM-as-a-Judge and reward models) to verify complex reasoning. This paper proves we cannot yet blindly trust them to grade advanced mathematics, as models often fail to align with human proof evaluators. QEDBench exposes the false confidence of current frontier models and highlights discrete math as a critical weakness for the next generation of AI solvers.

Resources:
🤗 HF Dataset: qqggez/QEDBench
🐙 GitHub: qqliu/Yale-QEDBench
🔗 Webpage: https://quanquancliu.com/QEDBench/index.html
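For readers who want to start from the released data, the following is a hypothetical quick-start for loading the Hugging Face dataset listed above with the `datasets` library. The dataset ID comes from the post, but the splits and column names are not documented here, so inspect the loaded object rather than assuming a schema.

```python
# Hypothetical quick-start (assumed usage, not from the paper): load the
# QEDBench dataset from the Hugging Face Hub and inspect its structure.
from datasets import load_dataset

ds = load_dataset("qqggez/QEDBench")
print(ds)  # shows available splits and column names; do not assume a schema
```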


