Hazar Yueksel, Ramon Bertran, et al.
MLSys 2020
Interpretable AI models make their inner workings visible to end users, providing justification for automated decisions. One class of such models is the boolean decision rule set, i.e. an if (condition) then (outcome 1) else (outcome 2) statement whose conditional clause is learnt from the data. Learning this clause is challenging: there are exponentially many candidate clauses, and training samples may lack context for edge cases. We present a practical mechanism whereby users provide feedback that is treated as constraints in an optimization problem. We show two applications - one where the underlying rule sets are known and one where they are not - in which such user input leads to more accurate rule sets.
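As a rough illustration (not taken from the paper; the feature names and constraint below are hypothetical), a boolean decision rule set can be read as a disjunction of conjunctive clauses over the input features, with user feedback ruling clauses in or out during learning:

    # Minimal sketch of a boolean decision rule set classifier.
    # Feature names are hypothetical, not the paper's implementation.

    def rule_set_predict(sample):
        """Return outcome 1 if any learnt clause fires, else outcome 2 (here 0)."""
        clauses = [
            # Each clause is a conjunction of conditions on the features.
            lambda s: s["age"] > 60 and s["bp"] == "high",
            lambda s: s["smoker"] and s["bmi"] > 30,
        ]
        return 1 if any(clause(sample) for clause in clauses) else 0

    # A user-feedback constraint might forbid clauses an expert knows are
    # spurious (e.g. any clause conditioning on "zip_code"); such constraints
    # enter the clause-selection optimization problem during training.
    print(rule_set_predict({"age": 65, "bp": "high", "smoker": False, "bmi": 24}))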
Megh Thakkar, Quentin Fournier, et al.
ACL 2024
Natalia Martinez Gil, Dhaval Patel, et al.
UAI 2024
Shubhi Asthana, Pawan Chowdhary, et al.
INFORMS 2020