Artificial Intelligence for a Fair, Just, and Equitable World

April 1st, 2021

The last few years have seen a large number of initiatives on artificial intelligence (AI) ethics: intergovernmental-institution initiatives such as the “Ethics Guidelines for Trustworthy AI” from the High-Level Expert Group on AI of the European Commission [1] and the Organisation for Economic Co-operation and Development (OECD) Council Recommendation on Artificial Intelligence [2]; government initiatives such as that of the U.K. Parliament Select Committee on Artificial Intelligence [3]; industry initiatives on AI ethical codes such as those of Google, IBM, Microsoft, and Intel; academic initiatives such as the Montreal Declaration for the Responsible Development of AI [4], the Stanford University 100 Year Study on AI [5], and the Alan Turing Institute’s “Understanding Artificial Intelligence Ethics and Safety” [6]; and finally professional-body initiatives such as the IEEE Global Initiative on Ethics of Autonomous/Intelligent Systems (A/IS) [7]. These initiatives, while acknowledging the potential of A/IS technologies to contribute to global socioeconomic solutions, highlight the increasing challenges posed by these technologies in the ethical, moral, legal, humanitarian, and sociopolitical domains.

Issues addressed include decision-making transparency, privacy, data security, and big-data-based discrimination; wider concerns such as employment and economic impacts, exacerbation of climate change, and effects on social well-being, human rights, and democracy; and finally effects on physical and psychological health and on autonomy of action. In essence, the initiatives seek to identify potential benefits and risks of harm and then issue recommendations on the principles to be followed by the different actors involved.

There is a need to bring order to the overabundance of A/IS ethical codes, guidelines, and frameworks.

This explosion of interest in the ethical aspects of A/IS (the debate now having also reached the general public) has resulted in the publication of more than 70 reports by large organizations in the last few years, according to [8]. Reference [9] points to the need to bring order to the resulting overabundance of ethical codes, guidelines, and frameworks, many of which suffer from deficiencies such as lack of scientific rigor, subjectiveness, incoherence, superficiality, and redundancy, creating confusion. Although devoting significant resources to writing reports while the uncontrolled deployment of potentially harmful A/IS proceeds apace has a certain “fiddling while Rome burns” quality about it, these reports do contain useful information. In [9], five ethical principles are distilled to govern the production and deployment of A/IS: beneficence (protecting welfare and human rights), nonmaleficence (avoiding physical or psychological harm, in particular regarding privacy and security, and applying the precautionary principle to risks of harm), autonomy (ensuring that decision-making, and the delegation of it, are well informed and adequately controlled), justice (nondiscrimination, lack of bias, solidarity, and sharing of the benefits), and explicability (transparency, intelligibility, and accountability of actions and decision processes).

Many ethical frameworks or codes suffer from deficiencies, such as a lack of scientific rigor, subjectiveness, incoherence, superficiality, or redundancy, creating confusion.

On a philosophical level, the potential contribution of A/IS to “human flourishing” is often mentioned. Spiritual traditions, ethical schools, and psychologists agree that self-realization and personal happiness cannot be decoupled from the common good, and the common good is not possible without justice; A/IS for human flourishing must therefore be fundamentally at the service of justice. Since a wide view of ethics focuses on potentialities and not only on risk mitigation, from such a view arises the ethical imperative to harness AI technologies for the benefit of humanity, improving quality of life for all rather than perpetuating systemic injustices. Apel [10] argues that globalization obligates a universal-ethics response to planetary challenges, in which technology can play a crucial role. However, the ethical debate weighing up the risks and dangers inherent to A/IS against their potential to contribute to the creation of an equitable and prosperous world is taking place mostly in high-income countries (HICs), so that much of it is of little relevance to the more than 700 million people living in extreme poverty. Reciprocally, ethical questions that greatly affect marginalized populations are not treated with the importance they deserve in this debate.

Universal-ethics considerations gave rise to the United Nations (UN) Agenda for Sustainable Development, which commits all member states to make concerted efforts toward building an inclusive, sustainable, prosperous, and resilient future for people and the planet, reified in the set of sustainable development goals (SDGs) to be reached by 2030. Eradicating poverty is a central objective of the SDGs, and though the emphasis is on lower- and middle-income countries (LMICs), they also target the growing pockets of underdevelopment in HICs. There is a growing interest in the role that A/IS can play in achieving these objectives on the part of international organizations, such as UN Global Pulse (UNGP) [11], the United Nations High Commissioner for Refugees (UNHCR) [12], the United Nations International Children’s Emergency Fund (UNICEF) Global Innovation Centre [13], the World Wide Web Foundation [14], and even the World Economic Forum [15]. Since 2017, the International Telecommunication Union has held the annual “AI for Good Global Summit” [16], which aims to accelerate progress toward the SDGs by connecting “problem owners” with AI innovators and promoting inclusive, trusted, and safe development of A/IS and equitable access to their benefits.

Realizing the potential of A/IS to contribute to the achievement of the SDGs, in particular those most related to equity, while avoiding the many pitfalls involved, requires research of a practical nature that goes beyond cataloging risks and potentialities. In this article, we argue that it requires establishing or strengthening a research field, which we denominate “AI for equity” (AI4Eq), and promoting it not only among A/IS professionals and researchers but also among the general public. AI4Eq would need to explore in depth the risks of A/IS that disproportionately affect marginalized communities, since such risks are often neglected.

This article can be viewed as a call to action to participate in and promote the AI4Eq field, in part as a counterweight to the heavily promoted corporate-sector view on AI ethics, which is often little more than “ethicswash” (and whose apparent interest in the SDGs is often little more than bluewash) for a program in which the effect of A/IS development and deployment will most likely be to increase inequality [17], [18].

R&D in AI4Eq requires a coordinated, multidisciplinary effort of application-driven AI researchers, applied ethicists, IT-law professionals, and experts in technological innovation for development. It will also require collaboration with both the public and the private sector, international institutions, third sector organizations, and, very importantly, citizen participation.

Potential Benefit/Risk of Harm

Already in [19], pioneering expert system (ES) applications were shown to hold great promise in LMICs for improving productivity, supporting policymaking and capacity building, and providing universal access to basic services. In [20], we illustrate the role of A/IS in advancing the SDGs in three crucial sectors: agriculture, water resource management (WRM), and health. Vinuesa et al. [21] present the general panorama of the use of A/IS in sectors that are crucial for the SDGs. In addition to the sectors mentioned above, these include the role of A/IS in the provision of energy (in particular renewable energy), in intelligent cities that use resources efficiently, and in autonomous electric vehicles. However, generally speaking, judging from the academic literature, the potential of A/IS technologies in areas relevant to the SDGs has been insufficiently explored.

Among the technologies most applied in these areas, again according to the literature, are decision-support systems, geographic information systems (GISs), and control systems, with many of the latter two types of application being data-science-based (here, we understand the inclusion: data science ⊃ data mining ⊃ big data). The moment when symbolic AI techniques were complemented and largely displaced by machine learning on big data coincided with the moment when the potential usefulness and societal impact of A/IS increased significantly, and when the ethical debate surrounding these technologies acquired great importance.

Data-science-based GISs have applications in utility planning, weather and harvest forecasting, emergency planning, disease monitoring, and identification of areas of poverty. Data-science-based control systems have applications in precision agriculture and planning optimization, agricultural management, biodiversity, naturalizing smart cities, and energy and traffic control. Data science also has important applications in health (e.g., development of health policies, generation and maintenance of health databases, and epidemiology) and in e-democracy and social policy (e.g., systems related to the provision of social protection services and welfare support). Noteworthy in this context is the UN Global Pulse [11] initiative, intended to foster discovery, development, and scaled adoption of big-data innovation for sustainable development and humanitarian action. Global Pulse has highlighted important ethical concerns, for example, with both the collection and use of data during humanitarian emergencies. Kshetri [22] and Ryan et al. [23] discuss the important role that big data can play in development, mentioning the fields of health care, WRM, agriculture, education, environmental monitoring, biotechnology, conservation of natural resources, and protection against natural hazards.

Though the contribution of A/IS to productivity is indisputable, evidence regarding their contribution to equity and sustainability is much less clear. The purely economic or technocratic standpoint of measuring only the contribution to economic growth can have a deleterious effect on well-being if such growth comes at the cost of environmental damage and increased inequality. Measuring the success of A/IS projects in terms of the SDGs would oblige a more holistic perspective, thereby avoiding losing sight of the structural causes and roots of the problems being addressed, a criticism that [24] makes of the currently fashionable focus on “effectiveness” in international development institutions, inspired in part by the “poor economics” movement critiqued in [25] and the “effective altruism” movement critiqued in [26]. Vinuesa et al. [21] estimate that A/IS can contribute positively to 79% of the targets of the UN 2030 Agenda, but could also act as an inhibitor of the remaining 21%.

However, the SDGs themselves have been criticized for being incoherent (see [27] and [28]), though the criticisms could be said to respond more to the letter of the SDGs than to the spirit of the 2030 Agenda. Moreover, a certain amount of incoherence, and even conflict, between different objectives of the 2030 Agenda can be ascribed to the fact that they are the result of lengthy negotiations between different international actors on a complex and comprehensive program. For Spaiser et al. [27], sustainable development is an oxymoron if development is equated with economic growth, since endless, unbridled economic growth is incompatible with sustainability. Though the synergy between objectives and subobjectives of the 2030 Agenda is well studied, Spaiser et al. [27] point to the existence of negative synergies, where an increase in some indicators is necessarily associated with a decrease in others. As is perhaps to be expected, this phenomenon is particularly marked between indicators concerned with economic growth and those related to equality and ecology. Given the small percentage of economic-growth-generated wealth that trickles down to the most impoverished classes, ending extreme poverty in the current economic climate would likely require consuming not only the resources of our entire planet but also those of several other planet Earths. The SDGs appear to skirt the issue that the current political and economic system and the legislative environment favor the concentration of wealth and the destruction of natural resources.

The perceived timidity of the SDGs in this regard can perhaps also be ascribed to the lengthy negotiations between different actors, since the spirit of the 2030 Agenda, as reflected in documents such as [29], is expressed as an “inescapable transformation,” that is, a profound change in systems and structures in which all organizations and individuals in our society must be involved. Moreover, unlike previous UN initiatives, which were in general aimed at low-income countries with the rest of the world assigned a supporting role, the approach of the 2030 Agenda to development problems is universal in nature. This is a recognition that the development paths followed by the HICs cannot be held up as examples to follow, given the resulting inequality and unsustainability, particularly relevant being their contribution to creating a climate emergency. This process of transformation necessarily gives a central role to innovation and, therefore, to technological innovation. AI can play a decisive role in the technological innovation that will enable the necessary transformation toward an inclusive and sustainable future, consistent with the United Nations Development Programme (UNDP) concept of “human development” [30], a more equitable concept of development that refers to human freedoms and capacities, not to increasing GDP.

In the following sections, we catalog the various reasons for which A/IS may negatively affect the SDGs related to equity both within countries and between different countries.

Inappropriateness for the Socioeconomic Conditions

The principal socioeconomic reasons that A/IS may contribute to increasing inequality are the unequal distribution of computational resources, infrastructure, and suitably trained human resources. For example, in a review of intelligent tutoring systems focusing on barriers to their use in LMICs, Nye [31] highlights a widespread lack of fit with the deployment context. He also introduces a note of skepticism regarding LMICs leapfrogging straight to the mobile era: though mobile-based applications are becoming more prevalent in LMICs, this is mainly in privileged sectors; meanwhile, networks and devices are generally of low quality, and mobile data and Internet access remain costly.

References [11], [22], and [23] draw attention to the fact that the use of big data in LMICs suffers from the classic problems of poor infrastructure and lack of human capital. Regarding infrastructure, the first problem is the lack of available or accessible data. For example, the low number of meteorological stations for climate data collection and a lack of digitization of the data in question is a hindrance to using big data technology in agriculture and WRM, and to supporting climate-change response. Regarding human capital, in LMICs there is a shortage of expertise in the collection, preparation, and analysis of data.

Imbalance of Power in the Market

A/IS may give large corporations a competitive advantage over small producers: for example, as stated in [21], big data techniques may require energy and computational infrastructure that is beyond the reach of small producers.

Similarly, as stated in [22], which discusses the risks involved in using big data in LMICs, the mere promotion of big data techniques in areas such as agriculture is likely to favor those with the economic means to put these techniques into practice. Enhancing information transparency in agriculture could also favor large-scale enterprises, even to the point of facilitating predatory practices. Finally, since futures markets are increasingly based on big-data analysis, as stated in [21], enhancing information transparency facilitates commodity trading and could therefore also have a negative impact on small farmers by accentuating price volatility (e.g., due to investors taking short positions).

Adverse Effects on Employment

It is generally acknowledged that the deployment of A/IS will lead to lower salaries for unskilled labor and to difficulty in accessing the labor market for the most disadvantaged sectors of the population; this topic is extensively treated in the literature, for example in [7]. In a liberalized market, automation is always to the advantage of capital and to the disadvantage of labor, an effect visible, for example, in the new tech giants having relatively low numbers of personnel for their size.

Adverse Effects on Human Rights and Democracy

The recently released report to the UN General Assembly by the Special Rapporteur on extreme poverty and human rights on the digital welfare state [32], along with studies in the academic literature such as [9], [18], and [22], focuses attention on the risk of coercive, surveillance-based, discriminatory A/IS technologies that not only fail to reduce inequalities but contribute to perpetuating them. The report warns of the “risk of stumbling zombie-like into a digital welfare dystopia” where “Big Tech has been a driver of growing inequality and has facilitated the creation of a vast digital underclass.” It provides many examples, in different countries, of how dehumanized intelligent technologies are creating barriers to accessing a range of social rights for those lacking Internet access and digital skills (in particular, A/IS literacy). Intensive coercion and surveillance by governments and industry would damage social cohesion and contravene democratic principles and human rights.

Environmental Degradation

Environmental degradation and over-exploitation tend to disproportionately affect marginalized communities in both LMICs and HICs.

A/IS provide the capability, essential for many environmental studies, to analyze large interconnected databases. However, as stated in [21], such systems, in particular data-intensive ones, may require massive computational resources that consume considerable amounts of energy and therefore have a very large carbon footprint. Their impact on the SDGs concerned with mitigating climate change is therefore clearly negative.

In addition, the same aspects of A/IS that enable them to contribute to increasing productivity, as discussed above, may also lead them to contribute to over-exploitation of resources.

Mindset Limitations of the Technology Developers

Applications reflect the needs and values of the nations and the social classes that develop them. Thus, A/IS R&D is inevitably biased toward those topics that are most relevant to the nations and populations from which their researchers and professionals are drawn. Since A/IS technologies oriented toward countries without a strong A/IS research presence are few and far between, A/IS will tend to increase inequalities not only within countries but also between different countries.

It is now widely recognized (see [7]) that big data is a potentially disruptive technology that will reflect and amplify the social biases, discriminations, and stereotypes embedded in the data, to the detriment of inclusivity. Moreover, the view of data science professionals is also skewed by their belonging to small social groups lacking in diversity (generally well-to-do white men from HICs).

We conclude that, although the potential of A/IS to contribute to achieving the SDGs would appear to be significant, not every development of A/IS will be fit for this purpose. A good deal of additional research is needed to measure the impact of A/IS applications on equity. A laissez-faire attitude that leaves A/IS research to be driven exclusively by commercial interests would almost inevitably contribute to increasing, rather than decreasing, inequality.

Solutions need to be developed for specific problems and conceived for specific contexts, based on a thorough knowledge of the region, culture, and values in question, with the object of maximizing the possibilities of adoption, instead of simply transferring solutions conceived for high-income populations.

It is well-known in the cooperation for development field that ICT that is not appropriate for a target community can be detrimental to social and economic development, contributing, in particular, to increasing inequality. How to avoid such an outcome in AI4Eq? A sagacious response would be to start by using the knowledge and experience accumulated in the field of development studies, in general, and ICT4D, in particular.

Emergence of AI4Eq

The well-established field of ICT4D has made significant progress in ensuring that ICT contributes to development [33]. Many studies are dedicated to identifying critical success factors for, and barriers to, the use of ICT in LMICs (see [34]). Surveys of the ICT4D literature such as [35]–[37] point to the usual problems of poor infrastructure and lack of human capital. In recent years, there has been an upsurge of interest in ethical guidelines for ICT4D research, development, and deployment (an area which remained underdeveloped according to [38] and [39]), in concordance with the human rights-based approach (HRBA) to development defined by the UN [12]. In this approach, human rights standards and principles provide a framework to ensure that the target community fully benefits from development efforts.

An HRBA applied to ICT4D would require that all possible beneficiaries participate in the design of the ICT and that the ICT respond to the principle of nondiscrimination, both in its design (ensuring voices and data from marginalized populations) and in its deployment (universal benefit). Finally, the HRBA would require that accountability mechanisms be embedded into both the ICT design process and the monitoring of the impact on the community’s well-being. Indeed, accountability is at the core of the HRBA and thus has a special resonance with ethical ICT. These criteria engender important research questions to which, in our opinion, not enough attention has been paid in the academic sector. In fact, there is a scarcity of ICT4D contributions in general in high-impact conferences and journals, as observed in [37] and [20]. One possible reason for this concerns the allocation of research funding. Not only does obtaining specific finance give a tremendous impetus to research; the procurement of significant research funding from public-sector national or supranational bodies is also generally, and not unreasonably, viewed as a research seal of quality, not least by journal editors and referees. At the same time, disbursement of funding from these public-sector bodies is determined according to preestablished research-line priorities. The corporate sector has a substantial influence on the definition of these priorities [40], which may help to explain why, to date, ICT4D has not been among them. Tendencies to value some research over other research can then be amplified in a feedback loop often described by the term “Matthew effect” [41].

AI4Eq occupies a particular area within ICT4D due to the very significant ethical and philosophical problems and dilemmas to which it gives rise, and to the fact that many of the risks associated with ICT in general are magnified in the case of A/IS. An integral approach, based on development studies and the principles of the HRBA, is especially pertinent in this field. Already in the pioneering applications described in [19], it was observed that the best outcomes were obtained when social, economic, political, and cultural dimensions were integrated into the engineering and design process.

However, within the ICT4D literature, the presence of A/IS remains relatively low, despite its potential to contribute to sustainable development now being widely recognized. For example, of the 74 articles published in the International Journal of Information Systems in Developing Countries (IJISDC) in 2017–2018, only one, concerning mHealth, directly involves A/IS, and of the 58 articles published in the IJISDC in 2019–2020 (to date), only two, concerning the application of big data, directly involve A/IS. Possible reasons for this underrepresentation of A/IS in the ICT4D literature are the historical preponderance of information systems in the ICT4D field (though this is changing) and the lack of A/IS infrastructure and A/IS-trained human capital in LMICs.

On the other hand, though the journals and conferences of the A/IS community contain many publications concerning the use of A/IS in LMICs, in general these lack a development-studies and HRBA perspective. The AI-D Spring Symposium [42] constituted an early attempt to stimulate the field of AI for development in the academic sector but, unfortunately, was held only once.

The last few years have seen the emergence of initiatives in the academic world that clearly fall within the field of AI4Eq, such as the NeurIPS Joint Workshop on AI for Social Good. Even the IEEE [7] emphasizes that design methodologies and tools should take into account social, as well as economic, criteria and that the impact of deployed systems on user and community well-being should be monitored. Of course, as is also recognized in [7] and [18], methodologies and tools cannot replace legislation or manuals of ethics and good business practice, but they can and should support their implementation. Academic research, private-sector self-regulation, and legislation are necessary and complementary actions. But it is important that research and development methodologies and tools comply with universal ethical principles, since legislation may be lax or unclear in some contexts, this being more likely to occur in LMICs and where the most vulnerable are concerned. In the academic field, there is a need for independent and scientifically rigorous research in AI4Eq with an empirical dimension since, to date, this dimension is mostly lacking.

Way Forward for AI4Eq

As stated earlier, we consider that there is an ethical imperative for educational and research institutions to harness AI technologies for the benefit of humanity, improving quality of life for all rather than perpetuating systemic injustices. To this end, more R&D on the potential of AI to contribute to the SDGs is urgently needed.

With the present article, we seek to encourage the academic community to get involved in AI4Eq educational programs and in inter- and multidisciplinary AI4Eq research that involves, in particular, civil society. We also argue that an HRBA- and development-oriented approach greatly increases the chances of having a positive impact on the achievement of the SDGs.

In our opinion, establishing the AI4Eq field should involve the following phases: revising the state of the art; identifying the issues of particular relevance for AI4Eq; and finally identifying methodological and technological tools of particular interest for AI4Eq, as well as disseminating examples of AI4Eq good practice (see Figure 1). In the next section, we present a first exploration of this way forward. Before doing so, in the rest of this section, we discuss the relevance of multilevel and multiactor cooperation.

Fig. 1. Establishing the AI4Eq field.


AI4Eq Through Multilevel Action and Multiactor Alliances

The SDGs are focused on equity, inclusiveness, and sustainability and so should intrinsically be addressed from the perspective of cooperation for the common good rather than from one of competitiveness. The SDGs promote multilevel action and multiactor alliances. The UN 2030 Agenda calls on institutions, governments, the public and private sectors, NGOs and other civil society organizations, and universities to commit to the development and viability of the SDGs, that is, to participate together in the design and application of collective solutions to collective problems. This is made explicit in Goal 17, “PARTNERSHIPS: Revitalize the global partnership for sustainable development.”

The way to AI4Eq should reflect this spirit of universality, of collective transformational effort, avoiding the global north–south dichotomy that has often characterized cooperation-for-development projects. Collaboration is enriched by different actors who provide different skills, sensitivities, experiences, and perspectives, but this will require defining methodologies and instruments that articulate this collaboration.

Private Sector Involvement

This issue has been brought into focus by projects involving the use of data science in humanitarian emergencies (see [22]). Much of the valuable data relevant in the development context is owned by the private sector, for example, digital-cash transaction data and location data owned by mobile-phone operators. Such data, along with social media data, may be a useful source for extracting indicators of human well-being. The concept of “data philanthropy” has been coined to describe the donation of data for a good cause by private-sector actors [22]. Corporate social responsibility needs to accompany private-sector involvement. On the other hand, the question of data ownership urgently needs to be reassessed. The rapid development of big data and the lack of understanding of its implications among the general public have led to the blithe assumption that citizen-generated data belongs to the companies that capture it, with little or no thought for the consequences for the public interest.

Citizen Science and Community-Centered Approaches

Citizen science [43], [44] is a governance mechanism that allows society to capitalize on the potential of A/IS while ensuring respect for human rights. The citizen science approach incorporates ethical values such as knowledge sharing, reproducibility, participation, and representativeness into the development of A/IS. As already mentioned regarding data science projects, bias, both in the data and in the beliefs of the professionals who manage it, can contribute to accentuating inequalities. Citizen science is one way of avoiding such bias. Additionally, citizen science provides comprehensive training that includes ethical awareness of how data science projects affect vulnerable populations and, by so doing, fosters confidence in and motivation toward A/IS. Thus, at the same time as developing key competencies, participants acquire representation in data science projects, thereby mitigating the risk of the technology exacerbating inequalities.

Education of Future Professionals Through AI4Eq Transversal Competencies and Service Learning

A/IS education should avoid an overly technocratic and techno-optimistic approach and should comprehensively address the impact of these technologies on equity and justice, raising awareness of the ethical risks and sociopolitical implications that their use entails.

Higher-education students at the undergraduate, master’s, and doctoral levels could act as providers of A/IS services to disadvantaged communities in the context of service learning (SL) projects [45]. The ethical-civic education characteristic of the SL paradigm constitutes an additional angle from which to contribute to the objective of reversing inequalities. The SL pedagogical paradigm enables the application of knowledge and concepts and the practice of curricular competencies in a real context, with an emphasis on multidisciplinary, interpersonal, civic, ethical, and metacognitive competencies, fostering critical reflection on the possibilities of social transformation and a rethinking of professional goals from a wider perspective.

Exploring the Way Forward for AI4Eq

In the next section, by way of illustration, we present a first exploration of our view of the way forward for AI4Eq, based on a small study of SDG-oriented multidisciplinary AI projects.

Brief Overview of Some A/IS Applications in Sectors Crucial to the SDGs

First and foremost, we believe it to be of primary importance to study the current panorama of A/IS applications in sectors crucial to the SDGs. The goal of this essential step is to document and disseminate the lessons learned in developing and deploying the most significant innovative applications, identifying strengths, weaknesses, and success factors.

Attention should be drawn to the idiosyncrasy of each LMIC or particular context (cultural, climatic, environmental, organizational, infrastructural, socioeconomic, etc.) and the particular impact AI4Eq-based innovation can have in each context. Reflections on ethical problems arising from the use of the technology, possible acceptance problems in a specific context, impacts on culture, sustainability, gender digital divide, etc. would also be of great value, especially if anchored in impact measurement using quantifiable metrics associated with compliance with the SDGs.

In [20], we illustrated how some of the problems discussed in the previous section can be addressed. We performed a brief review of SDG-oriented A/IS projects and studied three of the reviewed projects more closely. So that their impact would be clear, we chose illustrative examples that are relatively large-scale, without wishing to imply that small-scale projects cannot be examples of good practice. Though one of these projects could be considered somewhat outdated, its age has enabled its impact to be thoroughly assessed.

The objectives of the ESs for improved crop management project, funded by the Food and Agriculture Organization (FAO) of the United Nations and the UNDP and promoted by the Egyptian Ministry of Agriculture, were twofold: to develop two ESs aimed at increasing the yield and quality of cucumber and citrus production, and to build the Central Laboratory for Agricultural Expert Systems [46]. The objective of the Nile basin decision support system project [47] was to support regional and transnational decision-making for the management of the Nile basin resources. Finally, the aim of the project presented in [48] was to confirm the number, time, and place of spontaneous refugee returns to the Casamance region in Senegal using call detail records made available in the context of Orange Telecom’s Data for Development challenge.

From our review of relevant projects and from the success factors identified in the literature for AI4Eq-oriented A/IS, and for ICT4D in general, we present in the following sections, first, the issues and, second, the methodological and technical tools that we consider of particular relevance for AI4Eq. In doing so, we also take into account the principles of the HRBA and the general principles of good practice in development cooperation projects. Our proposal is a commitment to the essence of the SDGs—recognizing that development and sustainability may be in opposition and need to be balanced—from a multidisciplinary and integrative, as opposed to a technocratic, viewpoint.

Some Issues of Particular Relevance for AI4Eq

Below we discuss issues we consider particularly relevant for AI4Eq, which dictate the methodological and technical tools needed.

Alignment with the SDGs that are Primary in Each Case

A/IS should contribute to the fulfillment of those basic human rights and the satisfaction of those unmet basic needs that are most affected in each case.

Use of the HRBA

Ethical AI should be respectful of fundamental human rights and of the particular values of the culture in which it is deployed, and should take into account the idiosyncrasy of each context. This requires an inclusive, participatory, and culture-aware approach to project development. Extensive stakeholder consultation is needed, paying attention to the representativeness of the participants and, in particular, attempting to ensure the presence of habitually marginalized communities diverse in gender, economics, culture, and values. In data science projects, if stakeholders provide not only their data but also their opinions, participating where possible in data processing and interpretation, inherent bias in the data can be mitigated and a balance of power favored. Stakeholders who feel that their views, values, and needs are taken into consideration in technological progress are more likely to have a favorable attitude toward A/IS. The HRBA also underlines the need for accountability, in which impact measurement plays a significant role.

Impact Measurement

This should be carried out with a wide but integrative perspective, taking into account multiple dimensions—sociopolitical, geographical, climatological, environmental, etc.—and paying attention to social costs, the impact on the workplace, and the values of the culture in which the technology is to be deployed. Indicators related to specific development priorities in LMICs and associated with compliance with the SDGs are of prime interest. As stated in [8], to date, large-scale impact assessments of A/IS have not been carried out and, in fact, no well-established monitoring tools and assessment methodologies are currently available for this purpose.


Acceptance and Long-Term Viability

In addition to short-term productivity, the following aspects also need to be addressed: acceptance, continuity, maintainability, and long-term viability. This will involve knowledge transfer and the establishment of the necessary infrastructure, and could also involve the promotion of local research.

Promotion of Autonomy/Empowerment

A/IS should contribute to self-sufficiency and to the empowerment of small communities and the small-scale economy. Care needs to be taken, in this regard, with any use of energy-intensive big-data-based solutions.

Taking into Account the Legislative and Political Environment

As stated in [22], experience has shown that the successful implementation of AI4Eq projects must take into account the legislative and political environment and the importance of institutional support. Loss of privacy is of particular concern in LMICs, where laws concerning data privacy and security are often weak. The right to privacy is sometimes viewed as a luxury that LMICs cannot afford whereas, on the contrary, it may be of even more importance there than in HICs, since in countries characterized by conflict, crisis, and weak law enforcement, privacy risks may become security risks (i.e., threats to life and liberty).

Taking into Account the Ethical, Psychological, and Philosophical Considerations that Arise in A/IS Development and Deployment

Other A/IS issues are also relevant, particularly in regard to how they affect vulnerable populations. These include, among others: privacy, explicability, transparency and traceability, openness, robustness, safety, affordability, culture-awareness, and nondiscrimination.

Some Relevant AI4Eq Tools and Paradigms

There is general agreement in A/IS ethics forums that ethics must be embedded in research and production processes [8]. This requires proposals of methodological and technical tools at all levels of research and at all stages of the development process (analysis, design, implementation, deployment, and evaluation), focused on guaranteeing the properties of ethical AI in compliance with laws, regulations, and policies. Ethical A/IS should address the five basic ethical principles: 1) beneficence; 2) nonmaleficence; 3) autonomy; 4) justice; and 5) explainability. For the reasons set out in the previous sections, we consider that these principles are of particular import and have a special character in the case of R&D that is focused on LMICs and pockets of poverty in HICs, that is to say, on protecting and empowering the most vulnerable and marginalized. It would be advantageous if the different actors of the multidisciplinary AI4Eq field, and even society as a whole, were involved in the evaluation of the aforementioned R&D methodologies and tools.

Promoting designs that facilitate transparency and auditing, and enabling a participatory approach, will require a combination of tools and techniques. We note that this participation also serves as an education in A/IS competencies, combating A/IS illiteracy and raising awareness of the ethical risks involved. The participation of A/IS students in AI4Eq R&D projects would also contribute to the ethical and sociopolitical awareness of future professionals.

Although the area of tool support is still in an immature state, there are already some tools that, despite their limited scope, are effective. For example, in the area of methodological tools, the guidance chapter of the European Commission expert group’s “Ethics Guidelines for Trustworthy AI” [1] provides a guide to the implementation of intelligent systems, laying out requirements for satisfying different ethical principles. Morley et al. [8] compile a set of tools that support the development of ethical A/IS. Some of these tools could be particularly useful in the AI4Eq field. In addition, methodological and technological tools used in other fields could also be very useful in AI4Eq. Below we discuss tools of both types.

Development Intervention Tools

AI4Eq should take advantage of existing frameworks and methodological tools for the conception, design, implementation, deployment, monitoring, and impact assessment of development interventions—in particular, those used in the ICT4D field—by adapting them to A/IS projects, and then integrating them into A/IS tools. Among other things, such frameworks and tools are important to achieving long-term operation and maintenance.

Techniques that Require Few Energy Resources

As has been pointed out in the literature, A/IS sustainability requires more efficient data centers using renewable energy [21]. Conditions in which energy resources are scarce oblige the use of sustainable, low-cost, “appropriate” energy solutions. Sustainability criteria also suggest returning to the expert model-based paradigm (i.e., classic ESs) instead of the big data paradigm. As stated in [21], “the human brain consumes much less energy than is required to train AI models with massive amounts of data.” In many cases, this intensive training is not really necessary to obtain models that work with more than acceptable accuracy. The investigation of models that integrate both paradigms, in an acceptable compromise imposed by sustainability requirements, is of significant interest to the AI4Eq field.
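To make the contrast with the big data paradigm concrete, the following minimal sketch shows the shape of a classic rule-based ES: a handful of hand-encoded expert rules evaluated directly, with no model training and negligible energy cost. The crop-advice rules and finding names are purely illustrative assumptions, not taken from any of the projects discussed.

```python
# Minimal sketch of a classic rule-based expert system: hand-encoded
# expert rules evaluated directly, with no training phase. The
# crop-advice rules below are hypothetical, for illustration only.

RULES = [
    (lambda f: f["leaf_spots"] and f["humidity"] > 0.8,
     "Suspected fungal infection: apply approved fungicide."),
    (lambda f: f["soil_moisture"] < 0.2,
     "Low soil moisture: irrigate within 24 hours."),
    (lambda f: f["yellow_leaves"] and f["soil_moisture"] >= 0.2,
     "Possible nitrogen deficiency: consider fertilization."),
]

def advise(findings):
    """Return the advice of every rule whose condition matches the findings."""
    return [advice for condition, advice in RULES if condition(findings)]

findings = {"leaf_spots": True, "humidity": 0.9,
            "soil_moisture": 0.15, "yellow_leaves": False}
print(advise(findings))  # matches the first two rules
```

A real ES adds an inference engine, uncertainty handling, and a maintained knowledge base, but the point stands: the knowledge lives in auditable rules rather than in energy-intensive trained parameters.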

Data Science Guidelines

In the field of data science, guidelines exist for taking into account the risks at each stage of the development cycle (requirements engineering, design, curation or preprocessing of data, application of learning algorithms, implementation, testing, deployment, evaluation, monitoring, and maintenance). The study of these risks should be an integral part of data science curricula. The use of big data for humanitarian purposes presents particular challenges, mainly concerning privacy, that have been addressed using different tools. Thus, Pastor-Escuredo et al. [48], reporting on the use of big data in humanitarian emergencies, point to the importance of using protocols involving risk assessment and mitigation, such as the UNHCR’s data privacy policy or the UNGP-UNDP guide to data innovation for development, to guarantee privacy. Technological and methodological tools were used to carry out aggregation, anonymizing, and filtering of data to minimize the possibility of reidentification of individuals.
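The aggregate-anonymize-filter pattern just mentioned can be sketched in a few lines. The record fields and the k = 5 suppression threshold below are illustrative assumptions, not the actual pipeline used in [48]; the sketch only shows the pattern: pseudonymize identifiers, aggregate to coarse cells, and suppress cells too small to hide an individual.

```python
import hashlib

# Illustrative sketch of aggregation, anonymization, and filtering to
# reduce reidentification risk. Field names and the k=5 threshold are
# assumptions for the example, not taken from any real deployment.

def pseudonymize(user_id, salt="project-secret"):
    """Replace a raw identifier with a salted hash (pseudonymization)."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:12]

def aggregate_counts(records, k=5):
    """Count distinct users per (region, day), suppressing small cells.

    Cells with fewer than k users are dropped, so that no individual's
    presence can be inferred from a near-unique combination.
    """
    users_per_cell = {}
    for rec in records:
        cell = (rec["region"], rec["day"])
        users_per_cell.setdefault(cell, set()).add(pseudonymize(rec["user_id"]))
    return {cell: len(users) for cell, users in users_per_cell.items()
            if len(users) >= k}

records = (
    [{"user_id": f"u{i}", "region": "Casamance", "day": "2014-01-05"}
     for i in range(8)]
    + [{"user_id": "u99", "region": "Dakar", "day": "2014-01-05"}]
)
print(aggregate_counts(records))  # the single-user Dakar cell is suppressed
```

Salted hashing alone is not sufficient anonymization (linkage attacks remain possible); in practice it is combined with aggregation and suppression, as here, or with stronger techniques such as differential privacy.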

Explicability Tools

The need for A/IS to be able to provide an explanation for their decisions is particularly acute in the AI4Eq context. However, there are numerous difficulties and challenges in this area, to name a few:

  • deciding exactly what constitutes an explanation, since this may be far from obvious in the case of machine learning;
  • defining a notion of quality for explanations, especially to decide when an explanation is considered to be of sufficient quality;
  • ensuring that explanations are understandable to their audience, sometimes the general public.
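By way of illustration, one simple and widely used model-agnostic explanation technique is permutation feature importance: how much does a model's accuracy drop when one feature's values are shuffled? The toy model and data below are assumptions made for the example, and this technique is only one of many, but it shows the kind of audience-independent signal from which explanations can be built.

```python
import random

# Sketch of permutation feature importance: the accuracy drop when one
# feature's column is randomly shuffled. A large drop means the model
# relies on that feature. Toy model and data are illustrative only.

def accuracy(model, X, y):
    return sum(model(x) == yi for x, yi in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, trials=20, seed=0):
    """Mean accuracy drop over `trials` random shuffles of `feature`."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    drops = []
    for _ in range(trials):
        col = [x[feature] for x in X]
        rng.shuffle(col)
        X_perm = [{**x, feature: v} for x, v in zip(X, col)]
        drops.append(base - accuracy(model, X_perm, y))
    return sum(drops) / trials

# Toy model: predicts 1 iff feature "a" exceeds 0.5; ignores "b".
model = lambda x: int(x["a"] > 0.5)
X = [{"a": random.Random(i).random(), "b": random.Random(i + 100).random()}
     for i in range(200)]
y = [model(x) for x in X]
print(permutation_importance(model, X, y, "a"))  # large drop: "a" matters
print(permutation_importance(model, X, y, "b"))  # exactly 0: "b" is ignored
```

Note that a feature-importance score is raw material for an explanation, not an explanation in itself; translating such scores into something a layperson can act on is precisely the third difficulty listed above.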


Low-Cost A/IS

This refers to solutions such as lightweight mobile applications or free or open source software (FOSS). Also relevant in this context is the “Open AI” paradigm (see [50]), which refers not only to FOSS but also to the application of FOSS principles to algorithms, scientific knowledge, or other A/IS artifacts. Releasing applications as free software and following free software principles could help make the benefits of A/IS more accessible. However, while on the one hand, the availability of FOSS source code facilitates checking the compliance of developed A/IS software with ethical standards, on the other hand, the FOSS development and distribution process makes following ethical guidelines and standards in the A/IS software development process difficult to impose, and hinders the certification of compliance of the development process with such guidelines and standards.

Context-Aware Systems

Not all ethical principles are equally important in all contexts and cultures. Adaptation to the culture in which systems are deployed (see [51]) is an aspect that should be incorporated into R&D methodologies as a cross-cutting functional requirement of these systems. User-centered design methodologies would benefit from tools for the embedding of social values; according to [8], current tools remain at a very abstract level and are not instantiated in practical guidelines.

Other Paradigms and Tools of Interest

Finally, other tools and fields of research, such as value-sensitive design [52], privacy-by-design [53], and ethics-by-design [54], are being developed for ethical A/IS in general and would be very relevant to AI4Eq. We believe it would also be of interest to explore similar approaches such as impact-measurement-by-design, equity-by-design, rule-of-law-by-design, democracy-by-design [55], etc.

Technological innovation demands a specific approach to ensure that it contributes to the SDGs, in particular, avoiding a purely economics-based approach that would convert sustainable and equitable development into an oxymoron. The ICT4D community has, until recently, shown insufficient interest in A/IS, while the A/IS research community has, on the whole, shown a lack of awareness of any need to integrate development studies and the HRBA perspective. We conclude that more R&D into the potential of AI to contribute to the SDGs, particularly for vulnerable and marginalized populations, is urgently needed, via the consolidation of a field that benefits from the large body of accumulated knowledge and experience in development studies and ICT4D, and takes into account the full range of specificities of A/IS, including those of an ethical, psychological, or philosophical nature. A development studies perspective, and the measurement of impact in terms of the SDGs, especially those most relevant to equity, is crucial to preventing such a disruptive technology as A/IS from having detrimental effects on equity, a real and present danger given the current techno-optimist Zeitgeist and the enormous power of the ICT corporate sector.

We therefore call on researchers to join the effort to build and strengthen the AI4Eq field by appealing to universal ethics but also to the idea that a lack of human rights in one region has global consequences, thus resolving issues of gross inequality in the interest of all. A/IS has the potential to make a giant contribution to the “inescapable transformation” [29] implied by the Agenda 2030, through multidisciplinary, multiactor AI4Eq collective action.


The work of Simon Pickin was supported in part by the Spanish MINECO-FEDER (FAME) under Grant RTI2018-093608-B-C31 and in part by the Region of Madrid (FORTE-CM) under Grant S2018/TCS-4314.

Inteligencia Artificial, ETSI, UNED, Madrid, Spain
Angeles Manjarrés is a Lecturer with the Department of Artificial Intelligence, UNED, Madrid, Spain, the Spanish national distance-learning university. Her research is focused on the field of ontologies and educational recommender systems, in e-learning and educational robotics.
Ms. Manjarrés has participated in national, European, and international research projects, and in educational innovation projects integrating service-learning methodology into artificial intelligence studies. She is a member of the UNED Service-Learning Department management team, of the UNED Research Ethics Committee, and of the Education for Development Group EDETIC, affiliated to the Innovation and Technology for Development Center of the Technical University of Madrid.
Sistemas Informáticos y de Computación, FDI, UCM, Madrid, Spain
Simon Pickin completed undergraduate studies in mathematics at Sussex University, Brighton, U.K., postgraduate studies in mathematics at Cambridge University, Cambridge, U.K., and King’s College London, London, U.K., and postgraduate studies in computing at Imperial College London, London, U.K., and Rennes University, Rennes, France.
He has worked as a Researcher in the public and private sectors in France and Spain for the last 30 years. He is currently working as a Lecturer at the Computing Faculty of the Complutense University of Madrid, Madrid, Spain. His research interests focus on formal methods and testing, as well as on ICT4D.
Inteligencia Artificial, ETSI, Madrid, Spain
Miguel A. Artaso received the research M.Sc. degree in artificial intelligence from UNED, where he is currently pursuing the Ph.D. degree, evaluating the quality of life of cochlear implant users.
He is a Senior Data Science Engineer with Cochlear, Basel, Switzerland, where he researches the use of artificial intelligence to aid the fitting of cochlear implants.
Prof. Artaso is also a member of the Spanish Interuniversitary Teaching Innovation Group miniX-modular.
Harvard FXB Center for Health and Human Rights, Boston, MA, USA
Elizabeth D. Gibbons graduated from Smith College, Northampton, MA, USA, and Columbia University, New York, NY, USA.
She is currently a Senior Fellow with the FXB Center for Health and Human Rights, Harvard T.H. Chan School of Public Health, Boston, MA, USA, where she participates in initiatives that leverage her expertise in advancing the human rights of children and adolescents. These have included the development of a cross-disciplinary child protection curriculum for graduate students and online HarvardX and Executive Education courses for child protection professionals, which she managed in her capacity as Director of the Child Protection Certificate Program. Since 2014, she has been engaged in the exploration of artificial intelligence (AI) and its impact on human rights, with particular attention to the potential for these technologies to affect inequality within and between global societies. As Chair of the Sustainable Development Committee, she led the development of a chapter within IEEE’s publication Ethically Aligned Design, 1st ed., which identifies issues and makes recommendations for ensuring AI benefits humanity by contributing to the attainment of the UN Sustainable Development Goals. Prior to her academic appointment at Harvard FXB, she enjoyed a lengthy career in international development, primarily with the United Nations Children’s Fund (UNICEF), during which she lived and worked in Togo, Kenya, and Zimbabwe, and served as the Head of UNICEF’s offices in Haiti and in Guatemala. She also held several positions in UNICEF’s New York Headquarters, including Acting Director, Emergency Operations; Chief, Global Policy; and Deputy Director, Division of Policy and Practice. She is fluent in French and Spanish, and the author of Sanctions in Haiti: Human Rights and Democracy under Assault, as well as a contributing author to several other books.