For Richer, for Poorer – The Digital Economy

December 27, 2020

An Interview With Geoffrey Goodell

Geoffrey Goodell is currently a Senior Research Associate in Computer Science at University College London (UCL). He studied mathematics at M.I.T. before completing his Ph.D. in computer science at Harvard University, where his focus was on decentralized systems and Internet governance. Prior to his position at UCL he was an entrepreneur and portfolio manager with a decade of experience in the financial industry. He worked for Goldman Sachs in New York before becoming Partner and Chief Investment Officer of Phase Capital, an asset management firm in Boston, where he developed and managed a systematic macro strategy for institutional clients. Geoff additionally holds an M.B.A. from the University of Oxford, is a CFA charterholder, a committee member of the British Standards Institution, and Convenor of the ISO working group on Foundations of Blockchain and Distributed Ledger Technologies. His recent work focuses on digital payment systems, identity, marketplaces, and regulation.

This interview with Geoffrey Goodell took place in London, U.K., on January 15, 2020. It was transcribed by Kristina Milanovic and subsequently revised in London on September 3, 2020, by Geoffrey Goodell and Kristina Milanovic in preparation for print.

The term “digital economy” refers to economic activity resulting from the emergence and use of digital computing technologies. Broadly speaking, this can be any economic activity taking place between people (and organizations) that is facilitated by the Internet, mobile technology, or the “Internet of Things” [1]. Discussion of the digital economy has grown markedly in recent years, and especially in the past few months, during which many more individuals have been required to work from home.

What Do You Think Are the Main Drivers of the Digital Economy as a Whole?

GG: What are the actual drivers? Technological determinism is a myth; there are always underlying economic motivations for the emergence of new technologies. The idea that technology leads development is not necessarily true; consider AI, for example. It has been a topic of interest to researchers for decades, but only recently has the funding caught up, matching the motivation and enabling the development of AI-oriented technologies to really take off.

Last year the U.K. ranked fifth [2] in the European Commission’s Digital Economy and Society Index (DESI), which tracks each country’s digital performance and competitiveness along five key dimensions: connectivity, human capital, use of Internet services, integration of digital technology, and digital public services. The EC is encouraging all countries to improve their performance in these areas so that they can compete more actively in a global environment.

There Are Advantages to Encouraging Countries to Develop Their Digital Economies, but Do You Think There Are Any Risks of Countries Misusing or Abusing Them as They Develop?

GG: One of the major risks is surveillance capitalism. It persists for a number of reasons. First, it is profitable. Wealthy interests want nothing more than to manipulate the behavior of populations cheaply and at scale, and they will pay dearly for it. The economic motivations are explained well by Shoshana Zuboff [3]. Second, government bureaucracies are often too weak to stop it. Regulation is weak in many countries, and for transnational businesses there is an exploitable lack of clarity about how to enforce rules that vary across jurisdictions or do not cover activities that span boundaries. Third, government bureaucracies benefit from surveillance capitalism. For example, we know that money laundering regulations have been interpreted to require financial intermediaries to identify the counterparties to all transactions and link them to real persons. In pursuit of fraud, governments might seek to collect, aggregate, and analyze such data about individuals and their activities to identify anomalous behavior. There are two ways to address this: the government can either try to build the surveillance infrastructure itself, or it can outsource the transaction tracking to data harvesting businesses. This convenient delegation means governments can achieve their aims concerning the tracing while abdicating responsibility, including accountability to the public, for developing the surveillance infrastructure required to do so. The outsourcing is more effective when the service providers capture a dominant position in the marketplace, so governments that take this approach have a reason to further weaken regulations that preserve competition.

Why Do You Think the Regulation Is So Weak When It Comes to Surveillance Capitalism? Do You Think There’s a Lack of Incentive to Enforce the Current Regulations?

GG: Follow the money. Either taxes are too low, or they are not collected effectively. This usually follows a logical progression, from a foundation of premises that are valid in isolation but unsound at the system level, through intermediate effects in the form of policies and practices, to unintended consequences. For example, the globalization of technology, wherein different parts are manufactured or maintained in different regions, leads to regulatory arbitrage that results in tax levels insufficient to fund government functions. Another example starts with the premise that subsidizing or incentivizing founders is a good way to entice them to operate in a particular jurisdiction, which leads to insufficient involvement of key stakeholders in the development of new technologies. These developments have been accompanied by a change in the process of procurement from a “request for proposals” (RFP) model to a “platform” model. In other words, there is now an expectation that things will be ready-made in advance rather than built to meet specifically stated requirements. As a result, public initiatives are now expected to follow the development of private platforms. Again, Shoshana Zuboff [3] explains this well in her book. Separately, I would add that Uber and Airbnb are examples of platforms whose proliferation has depended upon exploiting ambiguous or poorly enforced rules to undercut prevailing gatekeepers.

Technological determinism is a myth; there are always underlying economic motivations for the emergence of new technologies.

In the case of Uber and Airbnb, the regulators were unable to keep up with the platforms’ pace of change, so the regulations lagged behind their development, but they are catching up now, albeit slowly.

Under What Conditions, if Any, Do You Think a Government, or a Gatekeeper Like a Regulator, Might Be Able to Address These Issues Before the Platform Becomes Prolific?

GG: This is difficult because, if a political party advocated a program of reform and regulation, the platform owners could harness business relationships to get the population to vote against it. Taking an example from finance: before rules for best execution came into force, there was a lot of debate about their introduction. Best execution refers to the duty of an investment services firm to ensure the most beneficial order execution on behalf of customers, even if that means sending orders to competitors. Best execution rules have been incorporated into MiFID [4], as well as the policy of the U.S. Securities and Exchange Commission [5]. In the case of the U.S., although proponents existed since the 1980s [6], some market makers and exchange member firms worked against the rules throughout the 1990s and 2000s [7], [8], because they opened an opportunity for smaller exchanges to exist and threatened the incumbents’ market position. The NYSE and other incumbents didn’t want it to go through. You get the idea: this is a slog, with moneyed interests spending years fighting the change, which was ultimately implemented in 2005. Government regulators really struggle to make decisions that would negatively impact large companies. It is a hard sell to taxpayers that a short-term financial loss will be outweighed by the long-term social good.

AI has been a topic of interest to researchers for decades, but only recently has the funding caught up, matching the motivation.

What About Individuals? Do You Think There’s Any Way for Them to Enact Change at a Grassroots Level?

GG: There is a revolving door between employment in regulatory organizations and in the businesses that need regulating, and few individuals would risk their careers to start a movement. Differences among research disciplines, practices, and gatekeepers mean that academics often do not consider societal impact holistically alongside technical challenges. The government research funding gap in this area is often plugged by technology companies, further exacerbating the privatization of invention. Much investment goes into apps because they give prospective funders the impression that there will be an output that is both tangible and sophisticated, and the fragmentation of research and development funding leads non-technical researchers to view apps as a panacea. A particular problem with the privatization (and lack of appropriate regulation) of app platforms is that, in the long term, apps and their platforms have the potential to build dossiers on individual persons by tracking and linking their behaviors and attributes, as data harvesting has become the de facto business model for such platforms.

The Trend of Building Dossiers of Individuals’ Data Is a Concerning One. It’s Also Become Increasingly Topical; It Seems Like More and More People Are Concerned About How Their Personal Data Is Collected and Stored. Do You Think This Can Be Prevented?

GG: The ideal solution would be for technical experts to work with governments to create regulations, with the acknowledgement that policymakers do not have a monopoly on ethical judgement in the design of systems. The problem is that both technical experts and policymakers are routinely courted by interested parties, so finding a solution in the public interest can be difficult. One possible approach is to involve parts of government that are not often exposed to such courting in the design of new systems at an early stage, creating a groundswell of support for systemic change before regulation is formally considered. Overall, the difficulty lies at the institutional level: even if you want to make changes, it’s difficult to implement them unless you know that other groups will support you. Robert Jackall [8], [9] makes a great point that, above all, managers in bureaucracy seek to avoid blame, so our task is to offer regulators the safety and confidence to support positive change. Faruk Eczacibasi, a Turkish entrepreneur, defined “disruption” as “a successful attack on gatekeepers” [10].

How Do You Think This “Disruption of Gatekeepers” Can Be Enacted in an Institutional Context?

GG: Banks creating data platforms is particularly worrisome. They are attracted by the possibility of monetizing their data, but ultimately this will not help them, since data brokerage is not really their core business. Working with banks to prevent the proliferation of data collection platforms could be a solution, but we need them to recognize and appreciate the system-level need before we can expect them to drive this change. “Challenger banks” such as Monzo and Starling are often subsidized by data collection, and the surveillance capitalism that data collection facilitates [11] ideally should be legislated out. It is important for people to be able to interact in the world without every transaction becoming part of their permanent records. If you have a permanent record, then you can be blackmailed because of what’s on it, and access to services can be made easier (or cheaper) or more difficult (or costlier) based upon “credit” scores derived from data from diverse sources. Card-only businesses are a problem because they mean you can’t opt out of having your transactions recorded and sold to data consumers. This is a serious problem. What would happen if cash were no longer printed [12]–[14]? Would privacy become a luxury available only to the very rich and to criminals?

What Is the Role of the Individual in This Case? Where Is the Personal Responsibility for Their Financial History?

GG: People may not have the option to opt out of certain technologies, for example if they are compelled to use social media for work. As a result, we need to motivate people to think about their transactions. Currently, we’ve allowed people not to think about this, and the system has been designed to convince them they don’t need to think about what they’re doing online. From the point of view of businesses, the motivation behind collecting data is that big data, which is to say personal data, can be extremely profitable, particularly when artificial intelligence algorithms are applied to it. The problem isn’t the development of artificial intelligence algorithms per se, but the data they are being fed and the conclusions that can be drawn from those data.

Would You Have Any Overall Advice for Individuals Concerned About Protecting Their Privacy?

GG: Whenever you trust a third party, there is always inherent risk, and as a society we have chosen to address such risks in a variety of ways. For example, in the financial industry there’s massive regulation to protect investors from exploitation by investment advisors and securities dealers. In the context of personal data, however, regulation generally concerns data protection, which is about preventing the unauthorized use of information once it has been collected. This is not the same as privacy, which is about not requiring such information to be revealed in the first instance. The difference is precisely about third-party trust. Trust cannot be imposed; it must be established on the terms of the one doing the trusting. Therefore, systems must be built around privacy by design, rather than data protection, whenever possible. When it comes to online transactions, for example people signing up with Ashley Madison, there is no regulation forcing businesses to implement privacy by design in a manner that allows their customers to transact anonymously, just as there is no regulation that can prevent data breaches. The impact of these data breaches is harder to quantify, and in contrast to financial transactions, there is no well-positioned regulator to oversee and structurally limit the scope of interactions involving data, which limits the extent to which government can protect individuals.

Why Do You Think There Are No Comparable Protections for Individuals Who Take Similar Risks with Their Personal Data?

GG: The issue lies in the moral edge cases: actions, based on an individual’s values, that might be antithetical to what the people in power want and that could elicit harmful consequences for the individual. How do we give people the confidence and support to stand up for their values? How does one replace raw power struggles among individuals and businesses with the rule of law? These have always been the key questions, and we must revisit them in light of new technologies that allow these interactions to take place in new contexts.

Wealthy interests want nothing more than to manipulate the behavior of populations cheaply and at scale, and they will pay dearly for it.

Interviewer Information

Kristina Milanovic is a Ph.D. researcher in the Intelligent Systems and Networks group at Imperial College London, London, U.K. Email: km908@ic.ac.uk.
The full version of this article, including references, is available online.