AI ethics: The view from the private sector

September 3, 2019
Building robust and comprehensive responses to the ethical challenges, real and hypothetical, raised by AI is far from straightforward.

This is an extract from “The ethics of AI”, the fourth report in the four-part series, “Asia’s AI agenda”, by MIT Technology Review Insights.

AI is not yet replacing workers outright

While it is clear that AI is helping companies make routine tasks more efficient, the link between AI and job losses remains opaque. Moreover, should we not seize the opportunity to hand low-value work off to machines so that humans can focus on higher-order cognitive tasks? One executive interviewed for this report asked, “How ethical is it to keep people doing menial tasks when they could actually be taking a step up, when computers can do this for them?”

The issue is less whether certain jobs will come or go than how we think about the workforce as a whole, argues Loredana Padurean at the Asia School of Business. “If we do it well, AI can improve our life dramatically, because we remove people from working in toxic or demeaning environments. The question is, what do we do with these people? Can we create societies that share the spoils of victory, and find better places for these people in society?”

Survey participants concur that the unemployment hype is overblown: only 34% viewed reducing labor costs as an important business driver for deploying AI, while a far higher share, 51%, cited improving the speed and quality of decision-making as a top driver.

There is no “global”

It is intuitive to discuss AI ethics at a global level, but many research studies have shown major cultural differences in what is considered the “right” way to articulate ethics. “Ethics is linked with morality and culture, which differs from region to region and culture to culture, and even within a single country,” says Zee Kin Yeong at IMDA. Cultural factors shape how citizens view the penetration of AI into realms like the home. “Care-bots” providing home-based health care and social services have taken root in Japan, where robots are ascribed human qualities. “People [in Japan] think about AI not as a computing machine, but more like a human being,” says Professor Kenji Suzuki of the Center for Cybernics Research and Faculty of Engineering at the University of Tsukuba. “Different attitudes to AI lead to different approaches to developing it within society.”

The infamous Moral Machine trolley experiment posits that a car’s brakes fail, leaving it either to stay on course and kill three elderly people (two men and a woman) who disobeyed a “do not cross” signal, or to swerve and kill its three passengers: an adult man, a woman, and a boy. The Moral Machine project analyzed 40 million decisions from 233 countries and territories and found that respondents from collectivist cultures, such as China and Japan, were less likely to spare the young over the old. The element of risk-taking, the fact that the pedestrians were breaking the rules, also played a part in respondents’ decision-making in some countries.

Unintended consequences

A third critique concerns the unintended consequences of proposed regulatory responses. Algorithmic transparency, for instance, seeks to improve accountability and fairness by mandating the sharing of information about how the algorithms responsible for automated decisions function. Such requirements are already outlined in the European Union’s General Data Protection Regulation and elsewhere, including France’s Digital Republic Bill. Yet some tech companies and technology industry groups warn that such approaches could, by revealing how algorithms work, enable hacking, the “gaming” of algorithmic systems, and even intellectual property theft. It is also unclear how the workings of complex algorithms can be translated into language that non-experts can meaningfully understand.

Rather than developing new AI regulations or codes, it might be more practical and achievable to ensure that AI does not transgress existing civil rights. Some legal scholars argue that AI codes are ambiguous and lack accountability; it would be more effective to govern AI through existing institutions, such as the International Bill of Human Rights: if an AI system takes away a person’s rights, that should not be acceptable.

“We are very far still from really building or knowing how to build intelligent machines.”

Tomaso Poggio, Professor, Department of Brain and Cognitive Sciences, MIT

While some AI innovations might be genuinely new (such as autonomous vehicles), experts believe we are still very far from the dawn of general AI, the kind that would truly challenge our existing rules. “We are very far still from really building or knowing how to build intelligent machines—intelligent in a sense of not just better at playing chess, or golf, or driving a car,” says Tomaso Poggio at MIT. “This is the greatest problem in science, and could be up to 50 years away.” In the meantime, he notes, there are many other “more pressing and immediate risks to mankind, from global warming to existing nuclear weapons.”