Machine Learning Times
Worried About Your Firm’s AI Ethics? These Startups Are Here to Help.

Originally published in MIT Technology Review, Jan 15, 2021.

A growing ecosystem of “responsible AI” ventures promises to help organizations monitor and fix their AI models.

Rumman Chowdhury’s job used to involve a lot of translation. As the “responsible AI” lead at the consulting firm Accenture, she worked with clients struggling to understand their AI models. How did they know whether the models were doing what they were supposed to do? The confusion often arose in part because each company’s data scientists, lawyers, and executives seemed to be speaking different languages. Her team would act as the go-between so that all parties could get on the same page. It was inefficient, to say the least: auditing a single model could take months.

So in late 2020, Chowdhury left her post to start her own venture. Called Parity AI, it offers clients a set of tools that aim to shrink the process down to a few weeks. It first helps them identify how they want to audit their model—is it for bias or for legal compliance?—and then provides recommendations for tackling the issue.

Parity is among a growing crop of startups promising organizations ways to develop, monitor, and fix their AI models. They offer a range of products and services, from bias-mitigation tools to explainability platforms. Initially most of their clients came from heavily regulated industries like finance and health care. But increased research and media attention on issues of bias, privacy, and transparency have shifted the focus of the conversation. Some new clients are simply worried about being responsible, while others want to “future-proof” themselves in anticipation of regulation.

To continue reading this article, click here.
