Equity can be defined as a state, quality, or ideal of being just, impartial, and fair [1]. The World Health Organization (WHO) defines equity as “the absence of avoidable or remediable differences among groups of people, whether those groups are defined socially, economically, demographically, or geographically.” When benchmarking those who operationalize equity well, five Ps are often used as an overarching equity lens [2]. AI4Eq was the purpose of this issue; our community addressed the remaining four Ps very well.
AI4Eq is about people. Our community explored the risks and rewards of artificial intelligence (AI) for diverse populations who vary geographically, socially, and demographically. We celebrated AI for mental health equity when access is augmented for marginalized populations. We applauded AI as a complement to current services; practitioners would be less overtaxed and more productive, thereby serving vulnerable populations better.
AI4Eq is about place. Our experts recognized how humans are differently situated in terms of the equity barriers they experience. We reproved epidemiological models that cannot be adapted to varied cultural contexts. We questioned the legitimacy of AI-driven facial recognition policies within particular social settings.
AI4Eq is about processes/procedures. Collectively, our experts alerted us to possible systemic, structural, and institutional inequities throughout AI design, development, and deployment (AI-DDD). The authors proposed frameworks that create and guard robust participatory processes, thereby avoiding exclusions of, and meaningfully including, those who are impacted by the AI systems.
AI4Eq is about power. Our peers implored us to adopt existing human rights approaches to shift power dynamics away from the few and toward “equal availability” of the benefits of AI. We acknowledged skewed decision-making due to unrepresentative data and, conversely, unbiased AI decision-making resulting in a fair distribution of such public resources as energy and water. Our authors proposed accountability-infused counterbalances to power abuses, such as global governance of algorithmic systems, exposing equity-tokenism, and assigning direct responsibility to people in each phase of AI-DDD, from designers who would embrace “equity-by-design” to administrators who manage programs in which AI is utilized.
We commit to equity as a leading principle. We guard against equity as an afterthought. We recognize that equity will be a point of tension [3]. Advancing equity in AI will be a journey. We ready ourselves for equity in perpetuity, facing not only technical challenges but, more often, adaptive ones as we champion AI4Eq.
Author Information
Christine Perakslis is an Associate Professor in the MBA Program, College of Management, Johnson & Wales University, Providence, RI, USA. Her e-mail address is christine.perakslis@jwu.edu.