Michelle Brachman, Christopher Bygrave, et al.
AAAI 2022
With the advent of democratized usage of large language models (LLMs), there is a growing desire to systematize LLM prompt creation and selection processes beyond iterative trial-and-error. Prior works largely focus on searching the space of prompts without accounting for relations between prompt variations. Here we propose a framework, Prompt Exploration with Prompt Regression (PEPR), to predict the effect of prompt combinations given results for individual prompt elements, as well as a simple method to select an effective prompt for a given use case. We evaluate our approach with open-source LLMs of different sizes on several different tasks.
Massimiliano Pronesti, Joao Bettencourt-Silva, et al.
ACL 2025
Bingsheng Yao, Dakuo Wang, et al.
ACL 2022
Shivashankar Subramanian, Ioana Baldini, et al.
IAAI 2020