
Multi-Armed Bandits Meet Large Language Models

Abstract

Bandit algorithms and Large Language Models (LLMs) have emerged as powerful tools in artificial intelligence, each addressing distinct yet complementary challenges in decision-making and natural language processing. This survey explores the synergistic potential between these two fields, highlighting how bandit algorithms can enhance the performance of LLMs and how LLMs, in turn, can provide novel insights for improving bandit-based decision-making. We first examine the role of bandit algorithms in optimizing LLM fine-tuning, prompt engineering, and adaptive response generation, focusing on their ability to balance exploration and exploitation in large-scale learning tasks. Subsequently, we explore how LLMs can augment bandit algorithms through advanced contextual understanding, dynamic adaptation, and improved policy selection using natural language reasoning. By providing a comprehensive review of existing research and identifying key challenges and opportunities, this survey aims to bridge the gap between bandit algorithms and LLMs, paving the way for innovative applications and interdisciplinary research in AI.
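As a minimal illustration of the exploration-exploitation trade-off the survey refers to, the sketch below applies the standard UCB1 rule to prompt selection, treating candidate prompts as bandit arms. The prompt list, reward means, and reward function are hypothetical stand-ins for an LLM-derived quality signal; this is not a method proposed by the surveyed work.

```python
# Illustrative UCB1 bandit over candidate prompts (arms are hypothetical).
import math
import random

def ucb1_prompt_selection(prompts, reward_fn, num_rounds=1000):
    """Select among candidate prompts, balancing exploration and exploitation."""
    counts = [0] * len(prompts)    # number of pulls per arm
    values = [0.0] * len(prompts)  # running mean reward per arm

    for t in range(1, num_rounds + 1):
        if t <= len(prompts):
            arm = t - 1  # play each arm once to initialize its estimate
        else:
            # UCB1 index: empirical mean plus a confidence bonus that
            # shrinks as an arm is pulled more often
            arm = max(
                range(len(prompts)),
                key=lambda i: values[i] + math.sqrt(2 * math.log(t) / counts[i]),
            )
        reward = reward_fn(prompts[arm])
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

    # Return the prompt with the highest estimated mean reward
    return prompts[max(range(len(prompts)), key=lambda i: values[i])]

if __name__ == "__main__":
    # Toy usage: a noisy stand-in for an LLM response-quality metric.
    candidate_prompts = ["Summarize:", "Explain briefly:", "Answer step by step:"]
    true_means = dict(zip(candidate_prompts, [0.4, 0.55, 0.7]))
    noisy_reward = lambda p: true_means[p] + random.gauss(0.0, 0.1)
    print(ucb1_prompt_selection(candidate_prompts, noisy_reward))
```

In practice the reward function would query the model and score its output, but the selection logic above is the same exploration-exploitation mechanism discussed throughout the survey.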