Uncovering and Quantifying Social Biases in Code Generation
Yan Liu, Xiaokang Chen, et al.
NeurIPS 2023
Developing domain models is one of the few remaining tasks in AI planning that still requires manual human labor. Thus, in order to make planning more accessible, it is desirable to automate the process of domain model generation. To this end, we investigate whether large language models (LLMs) can be used to generate planning domain models from simple textual descriptions. Specifically, we introduce a framework for the automated evaluation of LLM-generated domains by comparing the sets of plans admitted by domain instances. Finally, we perform an empirical analysis of 7 large language models, including coding and chat models, across 9 different planning domains and under three classes of natural language domain descriptions. Our results indicate that LLMs, particularly those with high parameter counts, exhibit a moderate level of proficiency in generating correct planning domains from natural language descriptions. Our code is available at https://github.com/IBM/NL2PDDL.
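The evaluation idea in the abstract, comparing the sets of plans that two domain models admit for the same instance, can be sketched as a set-similarity check. This is a minimal illustration, not the paper's actual framework: the plan representation (tuples of action names) and the Jaccard scoring are assumptions made for the example.

```python
def plan_set_similarity(plans_a, plans_b):
    """Jaccard similarity between two collections of plans.

    Each plan is a sequence of ground action names; plans are hashed
    as tuples. Returns 1.0 when both domains admit exactly the same
    plan set for the instance, 0.0 when the sets are disjoint.
    """
    set_a = {tuple(p) for p in plans_a}
    set_b = {tuple(p) for p in plans_b}
    if not set_a and not set_b:
        return 1.0  # both domains admit no plans: vacuously identical
    return len(set_a & set_b) / len(set_a | set_b)


# Hypothetical example: plans from a reference domain vs. an
# LLM-generated domain, for one shared problem instance.
reference_plans = [["pick", "move", "drop"], ["move", "pick", "drop"]]
generated_plans = [["pick", "move", "drop"]]
score = plan_set_similarity(reference_plans, generated_plans)  # 0.5
```

A score of 1.0 for every instance would indicate the generated domain is plan-equivalent to the reference; partial overlap quantifies how far the generated model deviates.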
Buse Korkmaz, Rahul Nair, et al.
AAAI 2025
Byungchul Tak, Shu Tao, et al.
IC2E 2016
Vidushi Sharma, Andy Tek, et al.
NeurIPS 2025