Ilias Iliadis
International Journal On Advances In Networks And Services
Data-driven policy making is considered one of the most important aspects of decision-making systems, and a great deal of research is being carried out to provide the tools and techniques that support it and make it more efficient. In this direction, Artificial Intelligence (AI) approaches are widely adopted to enhance existing policies, create new ones, and provide more accurate results. However, the way those results are generated is often considered a black box: the AI models are complex, and policy makers cannot understand the reasoning behind the extracted results. In this context, Explainable AI (XAI) models have made their way into decision-making systems to address this challenge. XAI, as its name suggests, provides explanations of how an AI model generates its results, enabling end users to better understand its output. Nevertheless, XAI cannot ensure the trustworthiness of an AI model's output on its own, since both the model output and the provided explanations could be tampered with by a third party. To this end, in this paper the authors propose an approach for data-driven policy making that combines XAI with blockchain technology, so as not only to provide explanations for the output of an AI model, but also to ensure that this output, and the corresponding explanations, are reliable.
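The abstract does not describe the concrete mechanism, but one common way to make an output-plus-explanation pair tamper-evident is to record a cryptographic fingerprint of the pair on an append-only ledger (the blockchain) and recompute it at verification time. The sketch below illustrates that idea only; the function names, the plain-list "ledger", and the sample policy output and feature-attribution explanation are all hypothetical stand-ins, not taken from the paper.

```python
import hashlib
import json

def record_fingerprint(ledger, output, explanation):
    """Append a tamper-evident fingerprint of a model output and its
    XAI explanation to an append-only ledger (stand-in for a blockchain)."""
    digest = hashlib.sha256(
        json.dumps({"output": output, "explanation": explanation},
                   sort_keys=True).encode()
    ).hexdigest()
    ledger.append(digest)
    return digest

def verify_fingerprint(ledger, output, explanation):
    """Recompute the fingerprint and check that it was previously recorded.
    Any change to the output or the explanation changes the digest."""
    digest = hashlib.sha256(
        json.dumps({"output": output, "explanation": explanation},
                   sort_keys=True).encode()
    ).hexdigest()
    return digest in ledger

# Hypothetical example: a policy recommendation and a feature-attribution
# explanation (names are illustrative, not from the paper).
ledger = []
output = {"policy": "expand-bus-line-4", "score": 0.87}
explanation = {"ridership_trend": 0.60, "budget_headroom": 0.27}

record_fingerprint(ledger, output, explanation)
```

A real deployment would replace the in-memory list with transactions on an actual blockchain, but the integrity argument is the same: a verifier who recomputes the digest and finds it on the ledger knows neither the output nor its explanation was altered after publication.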
Jinghan Huang, Jiaqi Lou, et al.
ISCA 2024
Alessandro Pomponio
KubeCon + CloudNativeCon NA 2025
Olivier Tardieu, Abhishek Malvankar
K8SAIHPCDAY 2023