Article Highlight | 9-Sep-2024

Toward Human-centered XAI in Practice: A survey

Beijing Zhongke Journal Publishing Co. Ltd.

With rapidly growing attention to artificial intelligence (AI) and neural networks, AI techniques have been applied in various domains, such as medical diagnosis, financial loan applications, autonomous vehicles, and judicial decisions. However, high-performance AI models with opaque, complex end-to-end structures, the so-called "black-box" problem, cannot elucidate the principles behind their predictions. This has become a major barrier to people's trust and AI adoption, especially when AI is used to make high-stakes decisions.

 

Explainable AI (XAI) has been developed to enhance user trust and human-machine collaboration and to reduce algorithmic bias. The EU General Data Protection Regulation (GDPR) further asserted a user's "right to explanation", emphasizing that AI systems should satisfy users' needs for explanations. Various explainable methods are now available to bridge the gap between AI models and human understanding, along with many explainability tools released for public use. From a technical perspective, many studies have summarized XAI extensively, categorizing XAI methods into different taxonomies and exploring them in various application domains.

 

However, in the early stage of XAI research, prior researchers were more inclined to explore explainable methods from a technology-centric perspective, paying insufficient attention to human users' diverse demands. This failure to provide a satisfactory experience for most non-expert users greatly hampers the practical use of XAI. The people involved with AI systems, also called AI stakeholders, are far from a unified group when an XAI system is implemented: their demands for explainability vary greatly depending on goals, backgrounds, and decision scenarios. An effective solution to this barrier is to let stakeholders' needs largely drive the technical design and selection of XAI in applications, thereby steering XAI research and applications in a more "human-centered" manner.

 

With this realization, much progress has been made in human-centered XAI. An increasing number of researchers concentrate on human-centered XAI and human demands for explanations from the perspectives of multidisciplinary theories. Mainstream studies in this field mine diverse user needs through different stakeholder typologies, adopting a more user-centered approach that focuses on users' requirements; however, such approaches may yield solutions that are technically unachievable or inoperable in practice, limited by AI capability or data availability.

 

With this concern, more work has begun to explore the gap between technical XAI approaches and user requirements, providing guidance for designing XAI methods in a human-centered and applicable manner. Vermeire et al. provide a methodology that guides data scientists to select appropriate explanation methods for different stakeholder needs. With the aim of choosing better methods for stakeholders, Langer et al. further discuss in what ways stakeholders' demands for explanations can be satisfied. These recent studies offer general theoretical guidance for bridging the gap between technical design and users' nontechnical, subjective demands, but they leave open concerns such as whether such theoretical guidance is useful in specific applications.

 

Although XAI methods have been designed for different application domains and human-centered demands have been explored, studies that match the two across application domains, as well as XAI evaluations for humans with different demands, remain limited. Most studies still select stakeholders for AI practice in an experience-guided manner, failing to apply the theoretical guidance on stakeholder typology and demands offered by human-centered XAI. Studies focusing on user-friendly XAI interfaces and design patterns in specific applications tend to analyze only the demands of their most direct users, such as clinicians in clinical decision support systems and insurance clients in car insurance auditing. Dey et al. discuss how different XAI approaches satisfy the interests of a limited set of stakeholders, considering only data scientists, clinical researchers, and clinicians.

 

Most studies are structured in a dispersed manner, either proposing general theoretical guidance or discussing specific application experience. This study therefore attempts to apply a practice-guided, comprehensive stakeholder typology to XAI demand analysis in specific application fields, and to guide XAI implementation toward satisfying various stakeholders' needs with existing XAI methods or tools.

 

To fill these gaps and provide more application-oriented, human-centered insight into XAI implementation, this study surveys the extant literature (with the overall research framework structured in Fig. 1) to answer the following research questions.

 

RQ 1. Whom does human-centered XAI explain to and what are their demands? (Explain to whom?)

 

RQ 2. How can we evaluate the extent to which XAI satisfies diverse human demands? (How to evaluate?)

 

RQ 3. What explanations does existing XAI provide? (What explanations have been provided?)

 

RQ 4. How can appropriate XAI methods and tools be selected for stakeholders in specific practices? (How can explanations be selected in specific practice?)

 

Explain to whom? And how to evaluate? This study first reviews existing articles concerning AI stakeholders and their demands for XAI, finding that most studies categorize stakeholders either by their AI knowledge backgrounds or by their roles in interacting with AI systems. The knowledge-based categorization stresses users' salient differences in comprehending and trusting explanations, whereas role-based frameworks, identified mainly from diverse practical scenarios in which AI ecosystems involve distinct interactions and objectives, are more suitable for analyzing human demands in applications.

 

This study summarizes an applicable role-based demand framework, combined with a discussion of knowledge backgrounds within each role, which enables practitioners to comprehensively consider all stakeholders in AI practice and provides further guidance for exploring more detailed real-world needs. Moreover, to ensure that the design of an XAI system satisfies different stakeholders' real needs, human-centered evaluation methods are required. This study extracts stakeholders' intended explanation goals from the demand framework to align them with evaluation goals, and then summarizes applicable measures for human-centered XAI evaluation from existing surveys and studies.

 

What explanations have been provided? This study analyzes different XAI methods in the field of visual computing. Visual computing is the field concerned with acquiring, analyzing, and synthesizing visual data by means of computers, encompassing tasks such as computer graphics, image processing, 3D modeling, image representation, visualization, and user interfaces. Considering that most current XAI methods stem from visual computing tasks, such as saliency maps, gradient-weighted class activation mapping (Grad-CAM), and testing with concept activation vectors (TCAV), and that visual computing is a critical and widely used field of AI in specific applications, this study focuses on this field.

 

The study then systematically explores the existing XAI methods and tools in this field, categorizing them into visual, knowledge-based, causal, and example-based explanations according to their characteristics and explanation forms. By examining the properties and understandability of different XAI methods, it offers clear and detailed guidance on technical XAI approaches and toolboxes for reference in practice.
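As a rough, hypothetical illustration (not drawn from the survey itself), a visual explanation of the kind mentioned above, such as Grad-CAM, can be produced for an image classifier with an off-the-shelf attribution library; the model, target layer, and class index below are placeholders chosen for the sketch.

import torch
from torchvision.models import resnet18
from captum.attr import LayerGradCam, LayerAttribution

# Placeholder classifier and input image; any CNN with a final
# convolutional block would serve the same illustrative purpose.
model = resnet18(weights="IMAGENET1K_V1").eval()
x = torch.rand(1, 3, 224, 224)

# Grad-CAM attributes the prediction for one class (here, an arbitrary
# ImageNet index) to the activations of the last convolutional block.
grad_cam = LayerGradCam(model, model.layer4)
attribution = grad_cam.attribute(x, target=281)

# Upsample the coarse attribution map to input resolution so it can be
# overlaid on the image as a saliency-style heatmap.
heatmap = LayerAttribution.interpolate(attribution, (224, 224))
print(heatmap.shape)  # torch.Size([1, 1, 224, 224])

Such a heatmap is a typical "visual explanation" in the taxonomy above; knowledge-based, causal, and example-based explanations take correspondingly different forms.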

 

How can explanations be selected in specific practice? After gathering comprehensive stakeholder "orders" and a "menu" of existing methods, this study instantiates the proposed stakeholder demand framework to give effective recommendations of suitable methods and tools for particular stakeholders and to increase their satisfaction. Taking XAI practice in medical image analysis as a typical example, the study first identifies the stakeholders and clarifies their goals and demands for explanations by applying the proposed human-centered demand framework in this practical field. Based on a review of extant studies of explainable methods designed or used for stakeholders in clinical diagnosis scenarios, supporting examples are used to advise on appropriate method categories together with detailed implementation considerations. The study also takes a careful look at XAI toolboxes, categorizing them by stakeholder demands from a human-centered perspective and analyzing the pros and cons of each.

 

This article is divided into six sections. Section 2 introduces the comprehensive human-centered stakeholder demand framework for XAI practice in detail and forms a guideline for evaluating XAI for stakeholders with various needs. Section 3 presents a taxonomy of XAI methods in the visual computing literature and summarizes the properties of existing methodologies under this taxonomy. Section 4 further applies the stakeholder demand framework in the clinical imaging diagnosis scenario and gives applicable recommendations for XAI method adoption by distinct stakeholders. Section 5 summarizes current XAI tools and provides operative advice on the design of future tools for stakeholders based on their needs.

 

See the article:

Kong, X., Liu, S. & Zhu, L. Toward Human-centered XAI in Practice: A survey. Mach. Intell. Res. 21, 740-770 (2024). https://doi.org/10.1007/s11633-022-1407-3

Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.