Control Flow Operators in PyTorch
Yidi Wu, Thomas Bohnstingl, et al.
ICML 2025
Transparency is an important property of trained machine learning models, and neuro-symbolic methods have recently been proposed to increase it. Although these approaches can display the rules learned by neuro-symbolic neural networks, the reasons behind the AI's decisions remain hard to understand, and a human operator cannot edit the learned knowledge because a graphical visualizer and an easy-to-use interface are lacking. We propose essential components for an explainability demonstration, inspired by psychological studies, and develop a graphical interface for neuro-symbolic reinforcement learning. We conduct network-editing experiments to assess the advantages of the proposed interface, together with a questionnaire survey of the developed system.
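The excerpt contains no code from the paper itself. As a minimal, hedged sketch of what a control-flow operator in PyTorch looks like, the example below uses torch.cond, the conditional operator shipped as a prototype in recent PyTorch 2.x releases; the function names true_fn, false_fn, and f are illustrative, not taken from the paper.

    import torch

    def true_fn(x):
        # Branch evaluated when the predicate holds.
        return x.sin()

    def false_fn(x):
        # Branch evaluated otherwise.
        return x.cos()

    def f(x):
        # torch.cond takes a boolean predicate, two branch functions,
        # and a tuple of operands passed to whichever branch runs.
        return torch.cond(x.sum() > 0, true_fn, false_fn, (x,))

    compiled_f = torch.compile(f)
    print(compiled_f(torch.randn(4)))

Unlike a plain Python if, both branches are traced, so the data-dependent conditional can be captured inside a single compiled graph rather than resolved ahead of time in Python.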