Understanding Industry Practitioners' Experiences in Generative AI Governance
Abstract
AI governance has become critical, especially as generative AI technology introduces new complexities and uncertainties that require robust risk management. While the need for frameworks and solutions to support AI governance is widely recognized, the real-world needs of AI practitioners in operationalizing governance remain underexplored. To bridge this gap, we conducted semi-structured interviews, supported by a design probe, with AI governance practitioners across various industry sectors. Our findings provide insights into the experiences and pain points of industry practitioners in AI governance, highlighting key challenges in achieving performance goals, assessing societal impact, securing user data, and navigating technical difficulties. We also identify their technical and explainability needs, including practical guidance on addressing violations and more detailed explanations of AI models, data, and evaluations. We conclude by discussing design guidelines for AI governance tools that effectively support practitioners' needs.