Human Centricity in the Relationship Between Explainability and Trust in AI


Zahra Atf and Peter R. Lewis

Artificial Intelligence (AI) is now applied in various contexts, from casual uses like entertainment and smart homes to critical decisions such as determining medical priorities, recommending drugs, planning humanitarian aid, scheduling satellites, protecting privacy, and detecting malicious software. There has been significant research into the societal impacts of algorithmic decision-making. For instance, studies on consumer preferences in user-centered explainable AI (XAI) found that AI is becoming an integral part of our daily experiences, with its influence expected to surge [1]. Researchers have also shed light on racial prejudice in algorithm-based bail decisions, probed the possibility of bias in AI-driven recruitment systems, and detected gender bias in online ads [2].

According to the European Commission’s Ethical Guidelines, AI technologies should enhance human capabilities, enabling people to make enlightened choices and uphold their basic rights. In fact, European Union data protection law includes a provision for a right to explanation [3].

The term “explainability” is a multifaceted concept within the realm of computer science. In essence, it encompasses various aspects and capacities of a system to effectively convey its internal processes, decision-making, capabilities, and constraints to its users. Providing explanations can notably enhance initial trust, especially when trust is measured as a multidimensional concept that includes aspects such as competence, benevolence, integrity, intention to return, and perceived transparency [4].

AI technologies should enhance human capabilities, enabling humans to make enlightened choices and uphold basic rights.

A black-box model can either be 1) a function too intricate for anyone to understand or 2) a proprietary function. For example, deep-learning models often fall under the first category due to their complex nature. In its current common usage, “explanation” means a distinct model designed to mirror the majority of a black box’s actions (like when the black box determines that those with past credit issues are likely to default on upcoming loans). Here, “explanation” pertains to comprehending a model’s mechanism rather than clarifying the workings of the world [5].

AI systems are inherently embedded in human–AI combinations, and as such, it often makes little sense to analyze AI systems without considering their human context. Many significant AI systems are deeply woven into socio-organizational frameworks in which groups of humans interact with them, extending beyond a simple one-on-one human–AI interaction pattern [6]. Human-centered AI (HCAI) represents an approach aiming for ethical AI that serves the common good. It emphasizes placing individuals and their needs at the core of any AI solution and takes into account the broader sociocultural context in which these solutions are deployed [2].

Central to XAI is the consideration of who the explanations are intended for, as this determines the most effective way to clarify the reasoning behind decisions. Despite the focus on algorithmic details in current discussions, the human aspect of making AI systems explainable is often overlooked [5]. In addition, simply making algorithms transparent, while avoiding some of the pitfalls of post-hoc explanation models, does not fully address the broader, socio-technical aspects of AI explainability. Explainability is an intricately interconnected set of issues that involves multiple stakeholders and extends across the entire lifecycle of AI systems. Misunderstandings about “explanations” often arise from imprecise terminology. An explanatory model might not reflect the actual computations of the original model, even if their predictions align. Thus, even accurate explanations can be misleading or lack detail about the underlying processes [6].

In the literature, the terms “explainability” and “interpretability” are frequently used synonymously [2]. An important distinction, however, is that interpretability is about analyzing the underlying reasoning of an AI system, while explainability is about producing a description of the factors that drove a decision.

Explanations can help prevent users from placing excessive trust in artificial agents, as suggested by Lockey et al. [7]. Some people are prone to having overly high expectations of technology, a tendency referred to as automation bias, or they may underestimate the risks associated with the actions of artificial agents. On the other hand, lack of trust in AI and skepticism toward technology are also widely observed attitudes [8].

In the context of human-centered AI, the presence of explainability in automated systems is crucial to prevent mode confusion, alert fatigue, and the “cry-wolf effect” [9]. For instance, in AI systems, especially those designed for alerting users or for anomaly detection, if the system frequently produces false positives (incorrectly identifying a normal event as an anomaly or threat), users may start ignoring its alerts altogether. This can be dangerous in critical systems, such as intrusion detection or medical diagnostic systems, because when a genuine threat or issue arises, it might be disregarded or missed due to the cry-wolf effect [10].

Trust involves willingly placing oneself in a vulnerable position, with outcomes hinging on another’s actions [11], [12], [13]. It emerges from a decision-making process based on various pieces of evidence. While situations may compel reliance on external entities, genuine trust cannot be imposed on individuals without manipulation or deceit. From an ethical perspective, the most appropriate approach is to provide individuals with reasons to place their trust. Trust in future AI systems relies on principles like explainability, fairness, and transparency. However, there is a tension between XAI and genuine trust [14]. While explainability urges decision-makers to critically assess AI explanations, full trust implies not constantly doubting AI. This creates a dilemma as both cannot coexist without conflict [15].

Building on this foundational understanding, previous research indicates that explanations from AI do not always bolster trust [16]. This leads us to question the underlying reasons for such discrepancies.

This article examines the influence of human attitudes on the relationship between explainability and trust, raising the following questions: Could it be because people already harbor opposing beliefs, or perhaps the explanations are overly technical and thus difficult to comprehend? Alternatively, is there a perception that these explanations appear deceptive, as though they are attempting to manipulate or persuade the audience? Or, in a more overarching sense, might it be that people inherently distrust AI explanations, regardless of their content or presentation? The answers to these questions are pivotal in shaping our approach to building and conveying AI systems of the future.

Explainability in AI

Humans are increasingly interacting with autonomous and semiautonomous systems. This trend presents two primary challenges: 1) machine ethics and 2) machine explainability. While machine ethics focuses on setting behavioral guidelines to ensure systems act in morally acceptable ways, machine explainability ensures that these systems can clarify their actions and rationalize their choices in a manner that is comprehensible and trustworthy for human users [6].

According to various surveys on the current state of XAI, a principled framework that aligns with the historical literature on explainability in science is still lacking. Recently, Kim et al. [17] identified four essential components for the development of a transparent XAI framework. These components include: 1) the necessity for a clear and explicit representation of explanation knowledge, 2) the provision of alternative explanations, 3) the adaptation of explanations based on the understanding and knowledge of the person receiving the explanation (the explainee), and 4) the utilization of interactive explanations as a beneficial strategy [17].

According to various surveys on the current state of XAI, a principled framework that aligns with the historical literature on explainability in science is still lacking.

Explainability can be categorized into two levels for better comprehension [18].

  1. Model explainability by design: In the past half-century of machine learning (ML) research, numerous fully or partially explainable ML models have been devised. These models include linear regression, decision trees, k-nearest neighbors (KNNs), rule-based learners, generalized additive models (GAMs), and Bayesian models [12]. The development of interpretable models continues to be an actively pursued area in ML.
  2. Post-hoc model explainability: Despite the commendable explainability of the conventional models mentioned above, more sophisticated models like deep neural networks (DNNs) or gradient-boosted decision trees (GBDTs) have displayed superior performance in recent industrial AI systems [17]. However, explaining these complex models comprehensively remains challenging. Consequently, researchers have shifted their focus to post-hoc explanation. This approach involves analyzing a model’s input, intermediate results, and output to understand its behavior better. A representative category in this context utilizes explainable ML models to approximate the decision surface either globally or locally [10], as sketched in the example after this list.
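
To make this two-level distinction concrete, the following is a minimal sketch, assuming scikit-learn and synthetic data; the dataset, feature names, and depth limits are illustrative choices, not drawn from the cited works. A shallow decision tree stands in for a model that is explainable by design, and a second shallow tree fit to a gradient-boosted model’s predictions stands in for a post-hoc global surrogate.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic tabular data standing in for any decision task.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
feature_names = [f"f{i}" for i in range(X.shape[1])]

# 1) Explainability by design: a shallow decision tree whose learned rules
#    can be read directly by a person.
interpretable = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(interpretable, feature_names=feature_names))

# 2) Post-hoc explainability: train a higher-capacity "black box" (a GBDT),
#    then approximate its decision surface globally with a shallow tree
#    trained on the black box's own predictions.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box it explains.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Global surrogate fidelity: {fidelity:.2f}")
```

The surrogate’s fidelity score makes the central caveat of post-hoc methods explicit: the explanation describes an approximation of the black box, not the black box itself.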

 

Present explainability methods might not always render clear explanations to users. To address this, human-in-the-loop techniques, integrating human feedback into AI, are gaining prominence to improve ML models [9].

It is often assumed that explanations in AI are essential for several reasons: they justify decisions, allow for better system control, and facilitate continuous improvement. In addition, they enable users to uncover and grasp the system’s decision-making logic. This, in turn, leads to increased trust and reliability [7].

Since different types of explanations can influence users differently, it is crucial to study how different explanatory methods affect the users’ views on fairness and their grasp of the system’s results [18]. Shulner et al. [10] investigated five textual explanation methods and found that while explanations can enhance users’ understanding of system decisions, their effectiveness varies by style. Moreover, the system’s result significantly affects users’ fairness perceptions.

There is a significant gap in the literature concerning who should provide explanations and ensure that they are comprehended by the audience. According to a 2021 NIST report by Gerlach et al. [18], interfaces are expected to present these explanations, but there is no provision for guaranteeing that the explanations are understood by users. The report defines an explanation as “the evidence, support, or reasoning related to a system’s output or process.” In discussing the style of explanations, the report breaks down one of the style elements, the degree of human–machine interaction, into three distinct categories [19]:

  1. Declarative explanations: In this type, the system delivers an explanation without any further interaction required from the user.
  2. One-way interaction: In this scenario, the system’s explanation is generated based on a specific query or question that the user inputs into the system.
  3. Two-way interaction: This type of explanation resembles a conversation between individuals, where the user can ask follow-up questions, and the machine can respond, ask for clarification, or suggest new topics for exploration. A minimal sketch contrasting these three styles follows this list.
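
As a concrete illustration of these three interaction styles, here is a minimal, hypothetical sketch in Python; the Explainer class, its question format, and the use of the iris dataset are illustrative assumptions rather than an interface proposed in the report or in this article.

```python
# A toy "explainer" around one model prediction, showing declarative,
# one-way, and two-way interaction styles (hypothetical sketch only).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

class Explainer:
    def __init__(self, model, feature_names, class_names):
        self.model = model
        self.features = list(feature_names)
        self.classes = list(class_names)

    def declarative(self, x):
        # Declarative: the system volunteers an explanation, no user input needed.
        label = self.classes[self.model.predict([x])[0]]
        top = max(range(len(self.features)), key=lambda i: self.model.feature_importances_[i])
        return f"Predicted '{label}'; the most influential feature overall is '{self.features[top]}'."

    def one_way(self, x, feature):
        # One-way: the explanation answers a single query chosen by the user.
        i = self.features.index(feature)
        return f"'{feature}' is {x[i]:.1f}; its overall importance is {self.model.feature_importances_[i]:.2f}."

    def two_way(self, x):
        # Two-way: a conversational loop that supports "what if" follow-ups.
        print(self.declarative(x))
        while True:
            query = input("Ask 'what if <feature>=<value>' or type 'done': ").strip()
            if query == "done":
                break
            feature, value = query[len("what if "):].split("=")
            modified = list(x)
            modified[self.features.index(feature.strip())] = float(value)
            new_label = self.classes[self.model.predict([modified])[0]]
            print(f"Then the prediction would be '{new_label}'.")

explainer = Explainer(model, data.feature_names, data.target_names)
print(explainer.declarative(data.data[0]))
print(explainer.one_way(data.data[0], "petal width (cm)"))
# explainer.two_way(data.data[0])  # uncomment for an interactive session
```

Even this toy version makes the design question visible: the declarative mode decides for the user what matters, while the two-way mode shifts some of that control to the explainee.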

 

Explainability in AI enables humans to grasp the internal logic of these systems and the reasoning behind their results. This is either inherently provided by a specific group of algorithms, known as “interpretable models,” or achieved through methods that approximate the AI’s behavior on a global or local basis [17].

A hurdle in post-hoc explainability is that the model used for explanation provides a glimpse into the intricate model’s logic, yet this view is merely an estimate, not a precise portrayal of the initial decision-making mechanism. The goal is to render AI choices both defensible and clear, thereby building trust. Various methods of explanation influence how users comprehend and perceive them. In addition, the level of interaction between humans and machines in providing explanations differs. In this article, we use the term “interpretability” to refer to the inherent nature of a model to be understood, while “explainability” specifically refers to post-hoc explanations that aim to shed light on a model’s decision-making process.
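
For the local case, the sketch below is again an illustrative assumption, written in the spirit of LIME-style surrogate methods rather than reproducing any particular library’s API: a weighted linear model is fit to a black box’s outputs on perturbations of a single instance, and its coefficients are exactly the kind of estimate described above.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

def local_surrogate(instance, n_samples=500, scale=0.5, seed=0):
    """Approximate the black box around one instance with a weighted linear model."""
    rng = np.random.default_rng(seed)
    # Sample a neighborhood around the instance of interest.
    neighborhood = instance + rng.normal(0.0, scale, size=(n_samples, instance.shape[0]))
    probs = black_box.predict_proba(neighborhood)[:, 1]
    # Weight neighbors by their similarity to the instance (RBF-style kernel).
    weights = np.exp(-np.linalg.norm(neighborhood - instance, axis=1) ** 2)
    # The linear coefficients are the "explanation": an estimate of how each
    # feature moves the black box's output in this one region of input space.
    surrogate = Ridge(alpha=1.0).fit(neighborhood, probs, sample_weight=weights)
    return surrogate.coef_

for i, coef in enumerate(local_surrogate(X[0])):
    print(f"feature f{i}: local effect {coef:+.3f}")
```

Two different instances will generally yield two different sets of coefficients, which is precisely why such explanations should be read as local estimates rather than as the model’s true mechanism.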

Relationship Between Trust and Explainability

AI systems should both 1) be explainable to humans and 2) inspire user trust [9], where this trust is justified. Research in XAI reveals that coupling AI recommendations with explanations can foster proper trust calibration. Providing insights into the AI’s reasoning process aids users in understanding its logic, distinguishing accurate from erroneous suggestions, and minimizing misconceptions related to trust [18].
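
One common way to operationalize “proper trust calibration” in user studies, sketched below under illustrative assumptions (the trial structure and rate names here are illustrative, not taken from the cited studies), is to separate how often participants follow correct AI advice from how often they follow incorrect advice.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    ai_correct: bool     # was the AI's recommendation actually right?
    user_followed: bool  # did the participant follow the recommendation?

def reliance_rates(trials):
    correct = [t for t in trials if t.ai_correct]
    incorrect = [t for t in trials if not t.ai_correct]
    followed_correct = sum(t.user_followed for t in correct) / len(correct)
    followed_incorrect = sum(t.user_followed for t in incorrect) / len(incorrect)
    return followed_correct, followed_incorrect

# A toy session: well-calibrated trust means a high first rate (relying on the
# AI when it is right) and a low second rate (overriding it when it is wrong).
session = [Trial(True, True), Trial(True, True), Trial(True, False),
           Trial(False, False), Trial(False, True), Trial(False, False)]
right, wrong = reliance_rates(session)
print(f"Followed correct advice: {right:.0%}; followed incorrect advice: {wrong:.0%}")
```

Explanations that raise both rates indiscriminately would indicate persuasion rather than calibration, a distinction that recurs later in this article.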

Donoso-Guzmán et al. [20] focus on developing user-experience-centric evaluation benchmarks for XAI. They propose the application of the user-centric evaluation framework, originally utilized in recommender systems, to XAI. This involves incorporating elements of explanations, outlining their characteristics, and identifying metrics for their evaluation. The goal of this elaborate framework is to establish standardized procedures for evaluating XAI from a human-experience perspective [20].

Explainability, the capacity of a system to clarify how it produces its outputs, is often considered a valuable tool for enhancing stakeholder trust. The rationale is that if the system’s explanations align with our expectations of a sound decision-making process, we have a basis for trusting the system. As such, requiring explainability seems, at first glance, to be a more actionable and practical aim than requiring trust directly. The assumed ability of explainability to foster trust is a key reason why it has become a prominent topic in computer science and interdisciplinary research. The extensive linkage between explainability and trust in academic literature is often based on an underlying assumption that explainability effectively facilitates trust in stakeholders [15].

Human-in-the-loop techniques, integrating human feedback into AI, are gaining prominence to improve ML models.

The connection to trust serves as a vital link between the characteristics of algorithms, users’ experiences with these algorithms, and their interactions with AI. Specific features of an algorithm act as signals that guide users in forming trust. This trust, in turn, enables users to engage with algorithms with a sense of usefulness and effectiveness [9].

The trustworthiness and credibility of ML models are enhanced when the model can clearly articulate its decisions. Although making deep-learning models explainable is a recognized challenge, ensuring that these explanations are easily understood by the intended stakeholders of the model presents an additional obstacle [10].

In a 2020 study, Shin et al. [21] conducted an online experiment to explore how people perceive features of algorithms, such as fairness, accountability, transparency, and explainability, and how these perceptions relate to user trust. The findings suggest that users undergo a dual-process evaluation, involving heuristic and systematic assessments, when forming trust in AI systems. Both processes are positively associated with trust, and systematic processes are additionally linked to performance expectancy and user emotions [21].

Mollel et al. [22] suggested that the explainability of a model provides a more comprehensive understanding of the significance of various features, both in a general sense and for individual prediction instances. This enhanced understanding can be utilized not only to boost the performance of the model, but also to increase its trustworthiness [22].

Some researchers have argued that explainability fosters trust in AI if and only if it contributes to a specific form of trust in reliance relationships with AI. The specific form of trust discussed revolves around the notion of “justified and warranted paradigmatic trust in AI.” This type of trust is defined by a rational conviction in the reliability of AI, which subsequently leads to a dependence on the AI without the need for ongoing oversight [18].

In an AI-centric era, it is vital to prioritize human–AI interactions, especially concerning system transparency and trust. Current explanatory designs may overlook user preferences and social norms. Despite advanced interfaces, individuals might still favor human interaction, possibly viewing it as a basic human right [17].

Despite the complex nature of trust, as highlighted in various studies, the majority of existing research in XAI has primarily concentrated on examining the impact of overall trust [22]. Based on the insights from Lewis and Marsh [14], many discussions around “trusting AI” shift from a mere suggestion to a directive. Instead of genuinely asking individuals to trust AI, there is a push to make them do so. Dwyer labels this as “trust enforcement” as opposed to “trust empowerment” [14].

Trust enforcement aims to instill trust by presenting selective data, potentially portraying a certain image. In contrast, trust empowerment offers all the necessary information, allowing individuals to determine their trust level. This distinction is crucial, considering the personal and situational nature of trust. According to the assessments we have conducted, many articles only address a single aspect of trust (e.g., [18] and [19]) or limit their evaluation of trust to a few specific questions (e.g., [21] and [22]).

Toward Theory-of-Mind-Based Interactive Explanations

Embedded within the realm of XAI lies a fundamental inquiry: “for whom should the explanation be understandable?” The intricacies of developing and assessing opaque AI systems are significantly influenced by the specific individuals involved. Deciphering this aspect is pivotal, as it determines the precise explanation prerequisites for a particular issue. Furthermore, it shapes the methodology of data gathering, the permissible data collection, and the optimal approach to elucidating the rationale behind a given action [25].

As AI adoption accelerates, doubts about its reliability grow. While AI offers great potential for improving business operations, its success depends on people’s justified trust in its accuracy and trustworthiness [26].

Recent research has focused on examining human behavior when people engage with, interpret, and utilize explanations provided by AI systems, as indicated by several studies [24].

Explainability, the capacity of a system to clarify how it produces its outputs, is often considered a valuable tool for enhancing stakeholder trust.

The intended audience is pivotal when theorizing the significance of XAI [25]. Papagni et al. [23] recommended that, at the start of an explanatory interaction, explainees should be regarded as “novices.” This means that artificial agents participating in the interaction should not assume what kind of mental model users may already have about the agents. Instead, through explanations, users should be assisted in forming an initial mental understanding of the artificial agents. As the interaction continues, the artificial agents can begin to make inferences about what the users know. Consequently, “initial” explanations should mainly include information about the purpose of an artificial agent within the specific context of that interaction. This also has important implications for AI agents’ ability to reason using a theory of mind.
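
As a minimal sketch of what treating the explainee as a novice might look like in code, the example below keeps a crude model of which concepts the user appears to understand and adapts the next explanation accordingly; the vocabulary, the triage scenario, and the keyword-based update rule are purely illustrative assumptions, not an implementation of Papagni et al.’s proposal.

```python
from dataclasses import dataclass, field

@dataclass
class UserModel:
    # Concepts the agent currently believes the user understands.
    known_concepts: set = field(default_factory=set)

    def observe(self, utterance: str, vocabulary: dict) -> None:
        # Crude inference: if the user employs a term tied to a concept,
        # assume they understand that concept.
        for concept, terms in vocabulary.items():
            if any(term in utterance.lower() for term in terms):
                self.known_concepts.add(concept)

VOCABULARY = {
    "purpose": ["triage", "prioritize", "recommendation"],
    "features": ["feature", "input", "symptom"],
    "uncertainty": ["confidence", "probability", "uncertain"],
}

EXPLANATIONS = {
    "purpose": "I rank incoming cases so that urgent ones are reviewed first.",
    "features": "My ranking is driven mainly by reported symptoms and vital signs.",
    "uncertainty": "I also report my confidence, so low-confidence cases get a human check.",
}

def next_explanation(user: UserModel) -> str:
    # Start with the agent's purpose, as for a novice, then add detail the
    # user has not yet shown familiarity with.
    for concept in ("purpose", "features", "uncertainty"):
        if concept not in user.known_concepts:
            return EXPLANATIONS[concept]
    return "Is there anything specific you would like me to expand on?"

user = UserModel()
print(next_explanation(user))                    # explains purpose first
user.observe("So you triage the cases for us?", VOCABULARY)
print(next_explanation(user))                    # moves on to features
```

A genuinely theory-of-mind-capable agent would need far richer inference than keyword matching, but even this skeleton shows where explanations could adapt to the explainee.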

On the Insufficiency of Explanations

Despite the above, in results that may at first appear counterintuitive, numerous research studies have now demonstrated that explanations provided by a system either have no impact on users’ trust in that system, or they can actually decrease the level of trust users place in the system (e.g., [25] and [26]).

Accepting explanations necessitates the establishment of understanding, which is inherently dynamic and iterative in nature. If the process of achieving understanding is left solely to those seeking it, the resulting understanding will be subjective. However, the aim of providing explanations for AI decisions is not to leave individuals alone in their quest for understanding. Relying solely on interfaces to facilitate understanding does not align with this aim. Effective communication, involving both statements and observations between those seeking understanding and those imparting it, is necessary to ensure that understanding is achieved and, consequently, that explanations are accepted.

Additionally, findings from empirical psychology, such as fuzzy-trace theory, have long demonstrated that humans can use shortcuts, or “gists,” to arrive at understanding, a process that is currently challenging for XAI to replicate [18].

A 2020 study by Baird and Schuller [19] identified eight dimensions of AI explainability, including cultural values, corporate values, and domain aspects, among others. These dimensions are generally better understood by a knowledgeable local professional than by an AI system. The concept of “localness” is thought to be effective in establishing genuine trust and promoting the resilience of information [19].

There is ongoing debate among AI researchers regarding the type of explanations that an algorithm could or should generate, and this may vary depending on the skills and competence of the user [24], as highlighted in previous works [25].

Recognizing the extensive social elements that surround a technical system is as important as the explanation technology itself. Developing solutions for the intricacies of socio-technical dynamics demands a comprehensive understanding of the complex human experiences that occur during interactions between humans and machines [26].

Determining how to tackle this, however, requires critical reflection on the methodological and conceptual hurdles involved.

To make progress, we propose four research hypotheses, which we will examine in turn, using insights from related literature:

Could It Be That an Explanation Does Not Increase Trust Because People Already Have Contrasting Views?

Numerous studies have delved into what humans find comprehensible. For instance, research has shown that individuals tend to favor explanations that strike a balance between simplicity and high likelihood. In addition, people often simplify their grasp of intricate systems by overlooking scenarios with low probabilities [26]. Lage et al. [28] conducted a study assessing the efficacy of various explanation types by gauging human reaction times. Their findings indicated that simulations were the quickest, succeeded by verifications, with counterfactual reasoning trailing last. Moreover, counterfactual reasoning also registered the least accuracy [28].

The utility of explanations can vary widely among different stakeholders, depending on various factors such as the complexity, completeness, interactivity, and format (e.g., visual or textual) of the explanations. Research has been conducted with diverse stakeholder groups to explore how they utilize explanations in specific contexts or their routine activities. Besides highlighting the potential value of explanations, these studies have also uncovered that stakeholders tend to use a limited range of explanations in practice. They also identified various biases, such as misinterpretations, confirmation bias, and an uncritical acceptance of presented data, that can undermine the trustworthy use of these explanations.

Therefore, a significant challenge remains: crafting explanations that are effectively usable by stakeholders with different backgrounds, while also gaining a deeper understanding of their specific needs and designing new explanations that are tailored to meet these needs [29].

End users’ XAI needs differ based on their domain or AI background and level of interest. While there is a general curiosity about the details of AI systems, those with a strong AI background have greater XAI requirements. Nevertheless, all participants emphasized the need for practically valuable information to enhance their collaboration with AI [30].

The authors of [27] contend that the attributes of contestability (enabling individual empowerment) and explainability (promoting openness) run counter to the effectiveness of AI.

A potential solution for creating more user-friendly and satisfactory explanations lies in merging data-driven systems with knowledge-based systems, that is, using established knowledge alongside insights drawn from data [19].

The authors contend that the attributes of contestability (enabling individual empowerment) and explainability (promoting openness) run counter to the effectiveness of AI.

Naiseh et al. [32] explored how to improve trust in AI through explanation interactions. They suggested five design principles, including promoting user engagement and introducing intentional friction. However, many users ignored the explanations due to disinterest. Those who did interact often used intuitive shortcuts, which might reduce XAI’s impact on trust. When users contributed to the design, they preferred structured explanations, aligning with the persuasive systems design model [32]. The elaboration likelihood model helps us understand this, highlighting two cognitive paths: a quick, automatic one and a slower, thoughtful one. People usually choose the fast path for daily decisions, potentially hindering XAI’s role in trust adjustment. For XAI to truly affect trust, users need to engage deeply with the explanations, influenced by individual traits and interests [33].

In short, people tend to simplify complex systems, preferring explanations that are straightforward yet probable. While various types of explanations exist, their effectiveness varies among stakeholders due to factors like format and inherent biases. Current AI explainability approaches may not align with all users, especially specialized groups like clinicians. Integrating established knowledge with data-driven insights might enhance explainability. For AI to gain justified trust, explanations should not only be clear, but also resonate with users’ beliefs and needs, considering individual traits and backgrounds.

Could It Be That an Explanation Does Not Increase Trust Because the Explanations Are Too Technical for People to Understand?

Although XAI explanations aim to offer technical transparency, researchers should shift their focus toward furnishing practical feedback that facilitates meaningful interactions between end-users and AI systems, allowing for richer engagement [6].

Since the 1980s, there has been a significant change in the way we view human–automation interaction. It is now widely recognized that successful knowledge acquisition and engineering involve more than just extracting knowledge. The focus has shifted toward considering systems engineering and enabling intelligent inferences based on various sources, including the internet. As a result, modern knowledge acquisition research has adopted a broad and multidisciplinary approach [10].

Although many topics related to interaction with automated systems apply to both professional and nonprofessional users, there are additional factors to consider when studying how nonprofessional users interact with such systems. Unlike professional users, nonprofessionals may not have extensive training and experience with the technology, and they may use it in a broader range of contexts that extend beyond professional settings [30].

Furthermore, when nonprofessional users engage with autonomous systems, it becomes essential to delve into ethical aspects, including human attitudes, acceptance of these systems, and addressing security and hacking concerns [27].

The increasing utilization of automated systems by nonprofessional users raises significant questions about human–automation interaction. For instance, how will users be trained to operate safety-critical automated devices? How can their skills be maintained if they do not frequently use the technology? How will different cultures, norms, customs, and conventions be accommodated in this context? Additionally, what impact will the adoption and use of automated systems have on various user groups?

Ehsan and Riedl [33] introduced the notion of “explainability pitfalls,” where the explanations given by AI systems may inadvertently lead users to rely too heavily on AI decisions, sidelining their own judgment. Such pitfalls might disproportionately affect users with limited technical knowledge and familiarity with AI technologies [33]. This has been exemplified in previous research concerning a proposed AI system designed to diagnose pneumonia. In scenarios where an incorrect diagnosis from this AI system is accompanied by an explanation, a community health worker with minimal AI knowledge might be prone to accept this decision, which could be a result of misplaced trust and an overestimation of the AI’s capabilities [17].

In the study conducted by Kim et al. [30], it was discovered that participants expressed a preference for actionable and valuable information that enhances their collaboration with the AI system, rather than intricate technical specifics. Correspondingly, participants indicated their intention to utilize explanations from XAI for a range of purposes beyond comprehending the AI’s outcomes. These purposes included establishing trust, enhancing their task performance, adjusting their behavior to provide better inputs to the AI, and providing constructive input to developers. Moreover, among the available XAI approaches, participants favored explanations structured around components, resembling human reasoning and explanations [30].

Ghai et al. [31] found that explanations were beneficial for users with extensive task knowledge, but had a negative impact on those with limited task knowledge. Specifically, less knowledgeable users were more likely to accept the model’s output, even when it made incorrect predictions. Despite this, the study showed that explanations played a significant role in adjusting user trust and assessing the development stage of the model. The study ultimately concluded that achieving high transparency does not always lead to enhanced user understanding [31].

AI explanations, intended for transparency, can sometimes be too intricate for everyday users. The history of human–machine interaction underscores the importance of user-friendly interfaces, especially as more nonexperts interact with advanced systems. Ultimately, sheer transparency in AI explanations does not always equate to clarity for all users and can sometimes lead to misconceptions among those less familiar with the technology.

Could It Be That an Explanation Does Not Increase Trust Because People Find Explanations Deceptive as if They Are Trying to Manipulate or Persuade Them?

At times, some individuals recognize that AI, like human writers, may struggle with producing accurate summaries, acknowledging that the technology is not infallible [34].

In certain research, users showed a tendency to give a lower rating to an AI-powered app when they were provided with supplementary visual explanations, because they believed the information was intended to manipulate them [35].

Lima et al. [2] expressed concerns about the growing emphasis on explainability in algorithmic decisions, suggesting it might interfere with accountability. They highlighted how XAI systems might incorrectly redirect perceived control from the creators of algorithms to the end-users, such as patients [2]. Building upon previous research that questioned designers’ readiness to accept accountability, they illustrated the risk of algorithms and end-users becoming the primary bearers of blame, overshadowing the responsibility of the designers [34]. To strike a balance between explainability and accountability in algorithmic decision-making, Lima et al. [2] advocated for an increased focus on the proactive responsibilities of designers. They recommended integrating explainability into existing accountability structures throughout AI’s lifecycle, stressing that while XAI is crucial, it should not be seen as a solution to every challenge [2].

Some users view AI explanations with skepticism, fearing they might be deceptive or manipulative. For instance, additional visual explanations in an AI app led some users to believe the information was meant to manipulate them. The overarching sentiment is that while explainability is vital, it should not overshadow accountability or be perceived as a panacea for AI challenges.

Could It Be That an Explanation Does Not Increase Trust Because People Never Trust the Explanations, Regardless of What Is Being Said?

Recent research has revealed that, on average, users tend to either overly trust or mistrust AI recommendations, signaling that XAI is falling short in its role of helping users calibrate their trust effectively [36]. This shortcoming in trust calibration is attributed to the assumption underlying XAI—that users will actively engage with the provided explanations and interpret them without bias [34].

Achieving high transparency does not always lead to enhanced user understanding.

The hope that explainability will earn the trust of healthcare professionals by offering insights into AI decision-making and addressing biases has been challenged. Ghassemi et al. [36] contend that current explainability methods are unlikely to fulfill these expectations when it comes to patient-level decision support, calling this “a false hope,” in part because many people consider AI to be unreliable [37].

According to Ghassemi et al. [36], if the goal is to guarantee the safe and reliable operation of AI systems, the emphasis should be on implementing rigorous and comprehensive validation procedures [36].

Technical literacy spans a broad spectrum and plays a pivotal role in determining the uptake of technology, encompassing the adoption of XAI systems and tools. Human-centered XAI research holds considerable promise for devising methods tailored to elucidate AI decisions to those with limited technical acumen. Moreover, it is crucial for these methods to be computationally feasible in resource-constrained areas [37].

Folk psychology lets us attribute mental states to others, helping us predict their behaviors. Thus, addressing trust issues might require focusing on individuals’ subconscious thoughts.

Although reference to human–computer interaction (HCI) research is not overtly present in much XAI work, the aspect of making AI understandable is inherently intertwined with HCI just as much as it is with AI itself. In fact, it could be argued that the human element is even more central in this equation. However, the human dimension is frequently overshadowed by the technical discussions surrounding XAI [40].

To make AI systems more accessible to nontechnical users, researchers have investigated various approaches, including interactive demonstrations, visualizations, and storytelling [38]. For example, Abbu et al. [25] emphasized the significance of visual analytics and storytelling in effective digital leadership, especially in the context of AI implementation [39]. Visual analytics facilitates exploration and contextual understanding, the questioning of assumptions, the comparison and assessment of interventions, and, most importantly, the scaling of AI explainability. Storytelling, on the other hand, can promote a shift in perspective; it uses data stories that are explicitly designed to elucidate the underlying reasons for observed trends or patterns, helping people understand the rationale behind the facts [23].

In addition, gamification and games with a purpose have been demonstrated to be highly effective tools for evaluating how individuals interpret explanations offered by XAI systems [39].

Recent scholarly works indicate that the explainability of a system is tightly connected to its perceived trustworthiness. Similar to safety and security features, explainability should be integrated into a system during its design phase, rather than being appended after the system has been developed [40].

To build and maintain well-placed trust in artificial agents, it is essential to provide explanations for their actions. Yet, there is a lack of clarity and consensus on what constitutes a valid algorithmic explanation, often leading to generic or mismatched explanations. These issues can erode trust and satisfaction. Explanations are shaped by their social context and are dynamic, changing based on the parties involved. These observations lead to the inquiry: are human attributes such as prejudice, bias, comprehension ability, and intuition fundamental limitations on the power of explanation to build trust? Given these human characteristics, can trust be enhanced solely through the design of better explanations?

Author Information

Zahra Atf is a visiting researcher at the Trustworthy Artificial Intelligence Laboratory, Ontario Tech University, Oshawa, ON L1G 0C5, Canada. Atf has a PhD in business with a specialization in marketing and a master’s in information systems management.

Peter R. Lewis is an associate professor at Ontario Tech University, Oshawa, ON L1G 0C5, Canada, and holds a Canada Research Chair in Trustworthy Artificial Intelligence. He leads the Trustworthy AI Laboratory in the Faculty of Business and Information Technology. His research focuses on reflective and socially intelligent systems. Interested in the intersection of AI and society, his work aims to develop reliable AI systems. He serves as an Associate Editor for IEEE Technology and Society Magazine. Lewis has a PhD and an MSc in computer science. He is a Member of IEEE. Email: peter.lewis@ontariotechu.ca.

 
