Carlos Ignacio Gutierrez, Gary E. Marchant, and Katina Michael
This special issue, published in cooperation with IEEE Transactions on Technology and Society (December 2021), is dedicated to examining the governance of artificial intelligence (AI) through soft law. Soft law programs are characterized by the creation of substantive expectations that are not directly enforced by government. The articles herein were selected from a project funded by the Charles Koch Foundation and administered by Arizona State University. Through this initiative, academics and representatives from the private and nonprofit sectors were invited to a series of workshops. The first workshop took place in Washington, DC, on January 9, 2020, where participants joined a roundtable discussion on special topics related to “AI Governance and Soft Law” and were asked to submit articles for initial review across a number of theoretical and applied areas. A second workshop, held virtually on October 9, 2020, provided a forum for the presentation of preliminary articles and feedback. A number of these articles were further developed and selected for peer review, and a subselection made it into the final double special issue.
The aim of the workshops was to explore ideas on how to improve the trustworthiness and effectiveness of soft law in an effort to maximize the future benefits and minimize the drawbacks of AI methods and applications. In essence, they asked: how can society harness the power of AI, sooner rather than later, via the adoption of soft law? This collection demonstrates the breadth and depth of nascent scholarship in the field. Indeed, we believe these are the first special issues ever published on AI governance and soft law, with the added significance of being released within a technical communications journal.
This special issue represents the culmination of our work to highlight ideas and lessons on how to create, amend, or adopt soft law programs meant to stand the test of time. We began this project by analyzing soft law precedents in emerging technologies and identifying incentives for their successful implementation. Subsequently, we surveyed existing soft law mechanisms devoted to methods and applications of AI and examined the resultant findings. With the articles in this issue, our team set its sights on the future and compiled contributions that could help stakeholders in any sector generate effective and trustworthy AI soft law governance tools.
Considering the above, the articles in this special issue are divided into three sections. We begin with high-level norms or principles that organizations should champion to gain acceptance for AI soft law programs within a community of stakeholders. This is followed by the presentation of program archetypes whose design encourages soft law program implementation via the alignment of incentives. Last, we underscore the interplay between soft law and its counterpart, government regulation (also known as hard law). Rather than treating the two as being at odds, the articles in this special issue identify instances where soft and hard law efforts are complementary.
Our special issue contributors suggest two high-level norms or ideals that are essential for an AI soft law program’s success: trust and credibility. Hilary Sutcliffe of Society Inside and Samantha Brown, co-founder of Consequential, argue in their article that trust in how an organization manages technology is characterized by the display of seven drivers that are “deeply rooted in … individual and collective psychology”: focus on the public interest, competence, respect, integrity, inclusion, openness, and fairness. To build these drivers, the authors offer tangible suggestions on how an organization can earn trust and effectively communicate an AI soft law program’s objective.
Craig Shank, formerly of Microsoft, turns his attention to offering strategic planning considerations for generating credibility in an AI soft law program. His article centers on the legitimacy and accountability of an organization to its stakeholders. Specifically, three factors should be taken into account. First, appropriate participants must be included: internal and external parties with a stake in a program need to be visible and have a voice. Second, the process must withstand the scrutiny of participants by balancing its transparency, integrity, and flexibility. Last, a program’s credibility depends on its output. If its ultimate usefulness is in question because of its perceived lack of technical quality, or if it disregards the input of relevant participants, then it is bound to be viewed negatively and serve little to no purpose.
Guaranteeing an AI soft law program’s enforcement or compliance requires that designers think about the alignment of incentives. To this end, our special issue contributors suggest program structures that address the—sometimes conflicting—motivations of stakeholders. Simply put, it is not enough to agree on broad principles. There must be appropriate drivers and incentives in place, or created, for entities developing and using AI to responsibly implement these ideas. Daniel W. Linna Jr. of Northwestern University discusses how soft law governance in the legal industry can have downstream consequences in improving the “effectiveness” of technology solutions. In the United States, the activities performed by lawyers are regulated by the American Bar Association (ABA) and its Model Rules of Professional Conduct. Through his analysis, Linna offers detailed suggestions for how the ABA can generate norms for its profession that compel technology suppliers to improve their AI-based technologies.

Lucille N. Tournas and Diana M. Bowman of Arizona State University turn to a proactive medium that can shape AI governance: insurance. As a risk mitigation tool, it represents an agreement where a firm promises to cover the losses, liabilities, or damages experienced by another entity due to certain activities. To uphold this responsibility, insurance companies can assert their influence by demanding the implementation of standards and norms. The authors describe a relevant precedent for this tool in emerging technologies in the form of nanotechnology.
Sara Jordan of the Future of Privacy Forum devotes her attention to review boards, bodies that ensure compliance with principles or standards in the development of AI research or applications. Her piece identifies three challenges that any AI-specific review board must address for it to have impact. First is the need for authority, or the capability to compel action. Second is the determination of scope, or which segment of professionals should be included in the evaluation of AI subject matter. Finally, because AI can have lasting effects on an institution and society, a board must define the extent of its risk assessment. On reflection, Jordan raises some important matters with respect to AI innovation itself. A follow-on question remains: if academic researchers are expected to go through Institutional Review Boards (IRBs), then how much more oversight is needed for a technology that is being unleashed into the open market and has the capacity to impact thousands of businesses and millions of people? A determination needs to be made on what mechanism should be implemented for the approval of these technologies, beyond the internal controls of an organization and the limited international governance scrutiny.
On the international front, Walter Johnson and Diana M. Bowman of Arizona State University examine instruments available to set a global governance agenda for AI. Although alternatives exist in the form of international law, framework conventions, intergovernmental organizations, and public–private partnerships, the authors see standards as the future of international governance. They argue that these tools, created by multiparty bodies, are generally able to allay the legitimacy and participation concerns present in alternative forms of global efforts to manage AI.
Soft and hard law approaches can complement each other to manage AI applications and methods. Two of this issue’s contributory articles evince this synergy. Emile Loza de Siles of Duquesne University offers the U.S. National Institute of Standards and Technology (NIST) Cybersecurity Framework (https://www.nist.gov/cyberframework) as an example of a soft law program that lays the foundation for hard law requirements in several federal agencies. Considering this, she proposes that NIST undertake a similar effort to create bias and discrimination standards for AI.
The article by Neil A. Chilson and Taylor D. Barkley of the Charles Koch Institute suggests a two-tiered strategy in the U.S. governance of facial recognition technologies (FRT). On the one hand, government utilization of FRT should be subject to hard law that guarantees the protection of civil liberties. On the other, the continuously changing nature of commercial FRT applications should rely on soft law to dynamically address privacy and bias issues. Finally, an article by Jordan Buckwald and Gary Marchant of Arizona State University, “Improving Soft Law Governance of the Internet of Things,” rounds out this section.
The ideas contained within this issue should provide stakeholders with a new perspective to consider: one that views soft law programs as processes and practices instituted in the interval between a technological innovation crossing the “valley of death” and the nascent stages of commercialization and initial wider diffusion. By providing emergent standards, industry guidelines, or frameworks, soft law gives stakeholders “something” to hold on to, which is preferable to a complete absence of structure, direction, and acceptable application.
This phenomenon was particularly prevalent in the early phases of diffusion of automatic identification technologies, including the introduction of biometrics for applications beyond law enforcement. On the importance of standards in practice, Michael wrote: “it is without a doubt that the BioAPI Consortium activities placed pressure on the International Standards Organization (ISO) to develop formalized biometric standards to assist with the proliferation of biometric applications worldwide. Without a common language, the implementation of automated recognition systems would have been severely inhibited.” Today, we see the greater complexity of socio-technical challenges surrounding AI triggering legal disputes. Technological convergence now means that automated facial biometrics are possible through firmware updates (containing machine learning algorithms) to closed-circuit television cameras that draw on billions of publicly sourced, web-scraped images to identify close matches. In some places, total or partial bans have been accepted, while in others, open innovation continues to drive roll-out and adoption.
This special issue’s objective is to provide stakeholders with ideas to consider when building effective and trustworthy AI soft law programs. In this regard, we strongly encourage organizations to think about the sustainability of their efforts and to avoid devoting resources to initiatives that are not intended to be enforced or implemented. Overall, the special issue contributors have revealed that trust and credibility are fundamental principles in the design of AI soft law and that, when selecting program archetypes, it is imperative to consider the alignment of incentives between parties. Finally, the exclusive use of soft law is not the answer to every governance problem. Instead, a flexible approach that complements or substitutes for hard law can create effective synergies for the governance of AI.
Guest Editor Information
Carlos Ignacio Gutierrez is a Governance of AI Fellow at the Sandra Day O’Connor College of Law, Arizona State University, Phoenix, AZ, USA. Gutierrez has a PhD from the Pardee RAND Graduate School, Santa Monica, CA, USA.
Gary E. Marchant is a Professor at the Sandra Day O’Connor College of Law, Arizona State University, Phoenix, AZ, USA. Marchant has a PhD from the University of British Columbia, Vancouver, BC, Canada, and a JD from Harvard Law School, Cambridge, MA, USA. He is the Working Group Chair of IEEE P2863, Recommended Practice for Organizational Governance of Artificial Intelligence. He is a Member of IEEE.
Katina Michael is a Professor in the School for the Future of Innovation in Society and the School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ, USA. Michael has a PhD and an MTransCrimPrev from the University of Wollongong, Wollongong, NSW, Australia. She is the Working Group Chair of IEEE P2089, Age Appropriate Digital Services Framework. She is a Senior Member of IEEE.