In computer science, “explainability” refers to a system’s capacity to effectively convey its internal processes, decision-making, capabilities, and limitations to its users. Providing explanations can notably enhance initial trust, particularly when trust is treated as a multidimensional construct spanning competence, benevolence, integrity, intention to return, and perceived transparency.
TTS Vol 4 Issue 1 Special Issue on Designing Ethical AI Using A Human-Centered Approach: Explainability and Accuracy Toward Trustworthiness
By Miriam Cunningham on April 7th, 2023 in Artificial Intelligence (AI), Social Implications of Technology, Transactions