Machine Learning Times
Who Is Responsible When Autonomous Systems Fail?

Originally published in CIGIonline.org, June 15, 2020

Elaine Herzberg was killed on the night of March 18, 2018, when she was struck by a self-driving Uber car in Tempe, Arizona. Herzberg was crossing the street with her bike when the vehicle, operating in autonomous mode, failed to classify her moving body as an object to be avoided. Rafaela Vasquez, the backup safety driver tasked with monitoring the self-driving car, did not see Herzberg crossing the street. Following the accident and Herzberg’s death, Uber was cleared of all criminal wrongdoing and resumed testing its vehicles on public roads nine months later. More than two years on, Vasquez, the safety driver of the purportedly autonomous vehicle, continues to face the prospect of vehicular manslaughter charges.

As more autonomous and artificial intelligence (AI) systems operate in our world, the need to address questions of responsibility and accountability has become clear. If the outcome of the Uber self-driving accident is a harbinger of what lies ahead, however, there is cause for concern. Is it an appropriate allocation of responsibility for Rafaela Vasquez alone to be held accountable, rather than Uber, the actor that developed and deployed the technology, or the state of Arizona, which allowed the testing to be conducted in the first place?

Both dynamics, human as backup and human as overseer, co-exist within a long history of automation that consistently overestimates the capacities and capabilities of what a machine can do.

Notably, Vasquez was the “human in the loop,” a backup driver whose role was to ensure the safe functioning of a system that, while autonomous, was not accurate 100 percent of the time. Such roles are increasingly common: humans are required to “smooth over the rough edges” of automated technologies. Scholars continue to document the myriad forms of human labour, from media platforms to online delivery services, required to keep intelligent systems operating “intelligently.”

To continue reading this article, click here.