Causally Reliable Concept Bottleneck Models
Giovanni De Felice, Arianna Casanova Flores, et al.
NeurIPS 2025
Training data attribution (TDA) is concerned with understanding model behavior in terms of the training data. This paper draws attention to the common setting where one has access only to the final trained model, and not the training algorithm or intermediate information from training. To serve as a gold standard for TDA in this "final-model-only" setting, we propose further training, with appropriate adjustment and averaging, to measure the sensitivity of the given model to training instances. We then unify existing gradient-based methods for TDA by showing that they all approximate the further-training gold standard in different ways. We empirically investigate the quality of these gradient-based approximations to further training on tabular, image, and text datasets and models. We find that the approximation quality of first-order methods is sometimes high but decays with the amount of further training. In contrast, the approximations given by influence function methods are more stable but surprisingly lower in quality.
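As a rough illustration of the first-order, gradient-based approximations this abstract refers to, the sketch below scores each training example by the dot product of its loss gradient with the test example's loss gradient at the final model parameters, i.e., the first-order estimate of how one further SGD step on that example would change the test loss (a TracIn-style score). This is a minimal sketch, not the paper's implementation; the function names, the single-step learning-rate scaling, and the toy data are assumptions for illustration.

```python
import torch

def flat_grad(model, loss_fn, x, y):
    # Loss gradient on one (batched) example, flattened into a single vector.
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(
        loss, [p for p in model.parameters() if p.requires_grad]
    )
    return torch.cat([g.reshape(-1) for g in grads])

def first_order_scores(model, loss_fn, train_set, x_test, y_test, lr=0.1):
    # First-order estimate of how much one further SGD step on each training
    # example would reduce the test loss: lr * <grad_test, grad_train>.
    # A positive score means further training on that example is predicted
    # to lower the test loss.
    g_test = flat_grad(model, loss_fn, x_test, y_test)
    return [
        lr * torch.dot(g_test, flat_grad(model, loss_fn, x, y)).item()
        for x, y in train_set
    ]

# Toy usage with a linear regression model and random data (illustrative only).
model = torch.nn.Linear(3, 1)
loss_fn = torch.nn.MSELoss()
train_set = [(torch.randn(4, 3), torch.randn(4, 1)) for _ in range(5)]
scores = first_order_scores(
    model, loss_fn, train_set, torch.randn(1, 3), torch.randn(1, 1)
)
```

Per the abstract's finding, a score of this kind can track the effect of a small amount of further training well, but its accuracy decays as further training continues, since the fixed gradients no longer reflect the updated parameters.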
Sarath Swaminathan, Nathaniel Park, et al.
NeurIPS 2025
Ramon Nartallo-Kaluarachchi, Robert Manson Sawko, et al.
NeurIPS 2025
Xavier Gonzalez, Leo Kozachkov, et al.
NeurIPS 2025