Fengjie Wang, Xuye Liu, et al.
CHI 2023
The capabilities of Large Language Models (LLMs) have improved greatly, yielding strong performance on many Natural Language Processing (NLP) tasks. Among these, Programmatic Text Understanding is an important task, crucial for the interpretation and generation of structured, executable text. However, current approaches such as expert role-playing prompting and Chain-of-Thought (CoT) prompting often fall short of capturing the intricate complexities inherent in programmatic text. To address this limitation, we propose an Iterative Instruction Refinement Prompting (IIRP) technique, which processes programmatic text line by line, iteratively refining the prompt to enhance comprehension and execution accuracy. We evaluate our approach across several LLMs, including OpenAI's GPT-3.5, DeepSeek, and Qwen-Coder. On our Programmatic Text Understanding evaluation dataset, IIRP improves accuracy by an average of 7.22% (up to 18.96%), surpassing CoT prompting by an absolute average of 3.86% (up to 19.46%). This study not only underscores the potential of LLMs in advancing programmatic text applications but also sets the stage for future innovations in automated text processing and complex task completion.
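The abstract describes IIRP only at a high level, so the loop below is a minimal sketch of how line-by-line iterative refinement might look, assuming a generic text-completion interface; `query_llm` and the prompt wording are hypothetical stand-ins, not the paper's actual implementation.

```python
# Sketch of Iterative Instruction Refinement Prompting (IIRP), per the
# abstract: the model walks the programmatic text line by line, refining
# its interpretation as context accumulates.

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API client
    (e.g. GPT-3.5, DeepSeek, or Qwen-Coder as evaluated in the paper)."""
    # Stub so the sketch runs: echo the most recent line back.
    return f"Understood: {prompt.splitlines()[-1]}"

def iirp(program_text: str) -> list[str]:
    """Process the program one line at a time, re-prompting with the
    accumulated context so each step refines the previous interpretation."""
    interpretations: list[str] = []
    context = ""
    for line in program_text.splitlines():
        context += line + "\n"
        prompt = "Refine your interpretation of the program so far:\n" + context
        interpretations.append(query_llm(prompt))
    return interpretations

steps = iirp("x = 1\ny = x + 2\nprint(y)")
```

One call per line trades extra inference cost for a tighter focus on the newest statement, which is presumably where the reported accuracy gains over single-shot CoT prompting come from.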
Rangeet Pan, Myeongsoo Kim, et al.
ICSE 2025
Jiaqin Yuan, Michele Merler, et al.
ACL 2023
Prashanth Vijayaraghavan, Apoorva Nitsure, et al.
DAC 2025