A growing ecosystem of “responsible AI” ventures promises to help organizations monitor and fix their AI models.
Rumman Chowdhury’s job used to involve a lot of translation. As the “responsible AI” lead at the consulting firm Accenture, she would work with clients struggling to understand their AI models. How did they know whether the models were doing what they were supposed to? The confusion arose in part because each company’s data scientists, lawyers, and executives seemed to be speaking different languages. Her team would act as the go-between so that all parties could get on the same page. It was inefficient, to say the least: auditing a single model could take months.
So in late 2020, Chowdhury left her post to start her own venture. Called Parity AI, it offers clients a set of tools that seek to shrink the process down to a few weeks. It first helps them identify how they want to audit their model—is it for bias or for legal compliance?—and then provides recommendations for tackling the issue.
Parity is among a growing crop of startups promising organizations ways to develop, monitor, and fix their AI models. They offer a range of products and services, from bias-mitigation tools to explainability platforms. Initially most of their clients came from heavily regulated industries like finance and health care. But increased research and media attention on issues of bias, privacy, and transparency have shifted the focus of the conversation. New clients are often simply worried about being responsible, while others want to “future-proof” themselves in anticipation of regulation.