To end the ‘techlash,’ AI ethics debate needs to shift

January 30, 2020
Charles Radclyffe

Whichever platform serves news to your smartphone, you won't have to scroll far to find concern about the application of emerging technologies, artificial intelligence in particular. Article after article speaks of the hubris of Big Tech, and of worry that the Brave New World we are being offered requires perhaps a little more bravery than we bargained for.

A decade ago, talk of ethics in technology communities was sneered at. At best you'd hear the token acceptance that it was "an interesting problem," or something for "someone else to look at" at "some future point." The order of the day was to solve the technical challenges, revel in the early successes, and monetise the data. Time devoted to ethics was time wasted, and those who championed conversations on the subject were dismissed as, at best, "hippies."

Today, the companies investing the most in artificial intelligence are also those investing the most in resolving the range of concerns we describe as "digital ethics." One might assume, on the face of it, that the problem is under control. After all, if Microsoft, IBM, Google, Facebook, Amazon, and Apple have created ethics boards or ethics councils, then surely enough brainpower has been dedicated to mitigating the worst potential outcomes. One might also expect Asia to lag behind in this area, so many in the West are surprised to learn that the AI strategy unveiled by the Chinese State Council in 2017 is littered with references to the imperative for ethics to be front and centre of the development, implementation, and application of AI. If both AI superpowers are putting ethics first, is there anything to worry about?

Despite this increased investment, the problem is growing. Amongst the articles in newsfeeds raising concerns will be those that speak of the "techlash," a term coined by The Economist. Since 2018, scepticism about the positive impact of technology on our lives has ballooned. Some commentators even ask the unthinkable: whether the reputational damage caused to Facebook by the Cambridge Analytica scandal might pose an existential threat to the company. If Zuckerberg is worried, then the public should be as well.

Why, then, are the largest firms in the world, the ones investing the most in artificial intelligence and devoting a large part of their marketing to highlighting their work on ethics, facing this backlash? One might expect a correlation between the amount spent on ethics and the level of public confidence; theoretically, Facebook should be among the most trusted of the Big Tech companies, not a laggard.

The reason for this surprising outcome, as argued in the recent "Ethical by Design" whitepaper, is that the dialogue around ethics in these firms misses the mark on several fronts. First, their understanding of ethics tends to conflate two connected but separate aspects of technology governance while ignoring genuine ethical thinking. Technologists feel most comfortable with standards and process, and tend to seek engineering solutions to problems. This is why most of the ethics conversation is confined to "transparency," "explainability," and "bias" mitigation: critical concerns, but conveniently also those with technological solutions. The second conflation is between ethics and regulatory and legal issues. The General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) have done a lot to raise awareness that data analytics perhaps needs some reining in, but they have also lulled many commentators into believing that the ethics debate, and responsibility for managing it, lies with regulators rather than with a wider community.

Instead, ethics needs to be recognised as an activity of intersubjective conversation. It must be worked out through social interaction, and given the scale of digital technologies, it needs, like them, to happen at scale. It is important to underline its nature as an activity, because only as an activity can its quality be measured. Ethics is like cooking, not cuisine; like playing, not sport; like singing, not music. Thought of in this way, organisations can separate the governance of ethics from the substantive principles that stem from the activity.

A second issue is that, to the extent actual ethics questions are asked, they focus on the long-term, hypothetical challenges posed by ideas such as "AGI" or "superintelligence." Nearer-term and immediate issues, such as the gender diversity of chatbots and voice assistants, the meaningfulness of human labour in the context of automation, and the limits of acceptable application of facial recognition technology, are often ignored until it is too late.

In consequence, organisations are blindsided by ethics breaches. With a misconception of what the practice of ethics is and how to manage it, companies are fighting a losing battle against disgruntled employees who have no means to raise concerns and against a growing number of consumer activists who highlight the dangers of unbridled enthusiasm for technology. More importantly, the major ethical questions raised by emerging technologies go unaddressed.

It is useful to think of the problem of digital ethics as analogous to that of environmental sustainability. The tools the investment management industry has developed to nudge companies towards best practices can be extended to the application of technology and its impact on society. First, ethics needs to be recognised as an activity separate from risk and safety concerns, such as the "explainability" and "transparency" of AI models, and from the regulatory compliance conversation exemplified by GDPR and the CCPA. Second, organisations need to be more specific about how they intend to manage the ethics debate with their stakeholders. The terms "ethics board" and "ethics council" are used interchangeably and without much thought; they need to be precisely defined and applied as an industry standard.

Of greater concern is that Big Tech is following the example set by the tobacco industry in the '50s: meeting behind closed doors to discuss the potential impact of its products and services on society while marketing a very different message to consumers to reassure them that no harm is meant. While there are legitimate reasons for some specific closed-door sessions, the approach lends itself to the abuses the tobacco industry inflicted.

Finally, organisations would benefit from using an ESG-style model to assess their own evolution towards digital ethics best practices. Further benefit would follow if such a model were adopted by the investment management industry to help investors and consumers identify the organisations that align with their values.

Rating organisations on the substantive content of their "ethics principles" is an intractable problem: arguing about whether Apple's or Samsung's principles are better is akin to arguing about which has the better logo. What is useful, however, is to assess how well an organisation has developed its handling of ethics issues across its constituent stakeholders. The additional advantage is that this allows a fair comparison between two organisations that ostensibly hold the same values or principles. In this way, it could genuinely be determined whether one company is more ethical than another.
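To illustrate the shape such an assessment might take, here is a minimal sketch in Python. The stakeholder groups, the 0-4 maturity scale, the weighting, and the example scores are all invented for illustration; they are assumptions of this sketch, not the model defined in the "Ethical by Design" whitepaper.

```python
# Hypothetical sketch of a stakeholder-maturity assessment for digital ethics.
# The stakeholder groups, 0-4 maturity scale, and example scores below are
# illustrative assumptions, not the whitepaper's actual model.

from dataclasses import dataclass

# How developed is the organisation's handling of ethics issues with each group?
# 0 = no engagement, 1 = ad hoc, 2 = defined process, 3 = measured, 4 = embedded.
STAKEHOLDERS = ("employees", "customers", "regulators", "investors", "wider_society")


@dataclass
class EthicsMaturity:
    name: str
    scores: dict  # stakeholder group -> maturity level (0-4)

    def overall(self) -> float:
        """Average maturity across all stakeholder groups."""
        return sum(self.scores[s] for s in STAKEHOLDERS) / len(STAKEHOLDERS)


def compare(a: EthicsMaturity, b: EthicsMaturity) -> str:
    """Compare two organisations that may hold ostensibly identical principles."""
    gap = a.overall() - b.overall()
    leader = a.name if gap >= 0 else b.name
    return f"{leader} shows the more mature ethics practice ({abs(gap):.1f} point gap)"


# Example with invented scores for two fictional companies:
acme = EthicsMaturity("AcmeTech", {"employees": 3, "customers": 2,
                                   "regulators": 4, "investors": 3,
                                   "wider_society": 1})
globex = EthicsMaturity("Globex", {"employees": 1, "customers": 2,
                                   "regulators": 4, "investors": 2,
                                   "wider_society": 0})
print(compare(acme, globex))
```

The point of the sketch is that the comparison works on observable practice (how each stakeholder relationship is handled) rather than on the wording of published principles, which is what makes a fair comparison between two organisations possible.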

Today it is impossible to say whether Alibaba is more ethical than Amazon, whether Tencent is preferable to Twitter, or how Huawei compares to HPE. Yet as Chinese firms compete directly in the West, and as greater scrutiny is applied to the reach of Silicon Valley into our lives, this is exactly what is needed. Discussion of ethics ought no longer to be the preserve of philosophers alone, and it is a category mistake to outsource the challenges of technology to technologists. The debate about what future we want to design for ourselves is a conversation we all need to share in.

Read “Ethical by Design: Measuring and Managing Digital Ethics in the Enterprise” by Charles Radclyffe and Richard Nodell.