In computer science, “explainability” is a multifaceted concept. At its core, it refers to a system’s capacity to convey its internal processes, decision-making, capabilities, and limitations to its users. Providing explanations can markedly enhance initial trust, particularly when trust is measured as a multidimensional construct that includes competence, benevolence, integrity, intention to return, and perceived transparency.