The term “explainability” is a multifaceted concept in computer science. In essence, it refers to a system’s capacity to effectively convey its internal processes, decision-making, capabilities, and constraints to its users. Providing explanations can notably enhance initial trust, especially when trust is measured as a multidimensional construct that includes competence, benevolence, integrity, intention to return, and perceived transparency.
Reimagining Digital Public Spaces and Artificial Intelligence for Deep Cooperation
By Peter Lewis on August 20th, 2023 in Articles, Artificial Intelligence (AI), Commentary, Ethics, Human Impacts, Magazine Articles, Social Implications of Technology, Societal Impact
What role does AI play, and what role could it play, in enabling us to enjoy security in our places and spaces? Perhaps we could design technology-enabled spaces to strengthen communities and empower community action.