David Piorkowski, Inge Vejsbjerg, et al.
PACM HCI
We present Auto-BenchmarkCard, a workflow for generating validated descriptions of AI benchmarks. Benchmark documentation is often incomplete or inconsistent, making benchmarks difficult to interpret and compare across tasks or domains. Auto-BenchmarkCard addresses this gap by combining multi-agent data extraction from heterogeneous sources (e.g., Hugging Face, Unitxt, academic papers) with LLM-driven synthesis. A validation phase then evaluates factual accuracy through atomic entailment scoring using the FactReasoner tool. This workflow has the potential to promote transparency, comparability, and reusability in AI benchmark reporting, enabling researchers and practitioners to better navigate and evaluate benchmark choices.
Robert Farrell, Rajarshi Das, et al.
AAAI-SS 2010
Masataro Asai, Stephen Wissow
AAAI 2026
Chen-chia Chang, Wan-hsuan Lin, et al.
ICML 2025