Managing Corporations’ Risk in Adopting Artificial Intelligence: A Corporate Responsibility Paradigm

Abstract

Developments in machine learning (ML) technology are accelerating as capacities for data capture and computer processing power have significantly improved. ML is a branch of artificial intelligence technology that is not ‘deterministic’; rather, it programs the machine to ‘learn’ from patterns and data in order to arrive at outcomes, such as in predictive analytics. Companies are increasingly exploring the adoption of ML technologies in various aspects of their business models, as successful adopters have seen marked revenue growth. ML raises issues of risk for corporate and commercial use that are distinct from the legal risks involved in deploying robots that may be more deterministic in nature. These risks relate to what data is input into ML’s learning processes, including the risks of bias and hidden, sub-optimal assumptions; how such data is processed by ML to reach its ‘outcome’, sometimes leading to perverse results such as unexpected errors, harm, difficult choices and even sub-optimal behavioural phenomena; and who should be accountable for such risks. While the extant literature provides rich discussion of these issues, only emerging regulatory frameworks and soft law in the form of ethical principles are available to guide corporations navigating this area of innovation. This article intentionally focuses on corporations that deploy ML, rather than on producers of ML innovations, in order to chart a framework for guiding strategic corporate decisions in adopting ML. We argue that such a framework must integrate corporations’ legal risks with their broader accountability to society. Corporations do not navigate ML innovations within a settled ‘compliance landscape’, given that the laws and regulations governing corporations’ use of ML are still emerging. Their deployment of ML is instead scrutinised by industry, stakeholders and broader society as governance initiatives develop in a number of bottom-up quarters. We argue that corporations should frame their strategic deployment of ML innovations within a ‘thick and broad’ paradigm of corporate responsibility that is inextricably connected to business-society relations.

Authors

Iris H.-Y. Chiu (University College London)
Ernest W.K. Lim (National University of Singapore)

Licence

All rights reserved
