Advances in artificial intelligence (AI) technology promise to revolutionize our approach to medicine, finance, business operations, media, and more. But research has highlighted how apparently “neutral” technology can produce troubling outcomes – including discrimination by race or other legally protected classes. For example, COVID-19 prediction models can help health systems combat the virus through efficient allocation of ICU beds, ventilators, and other resources. But as a recent study in the Journal of the American Medical Informatics Association suggests, if those models use data that reflect existing racial bias in healthcare delivery, AI that was meant to benefit all patients may worsen healthcare disparities for people of color.
The question, then, is how we can harness the benefits of AI without inadvertently introducing bias or other unfair outcomes. Fortunately, while the sophisticated technology may be new, the FTC’s attention to automated decision making is not. The FTC has decades of experience enforcing three laws important to developers and users of AI:

Section 5 of the FTC Act, which prohibits unfair or deceptive practices – including, for example, the sale or use of racially biased algorithms.

The Fair Credit Reporting Act, which comes into play when an algorithm is used to deny people employment, housing, credit, insurance, or other benefits.

The Equal Credit Opportunity Act, which makes it illegal to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or receipt of public assistance.
Among other things, the FTC has used its expertise with these laws to report on big data analytics and machine learning; to conduct a hearing on algorithms, AI, and predictive analytics; and to issue business guidance on AI and algorithms. This work – coupled with FTC enforcement actions – offers important lessons on using AI truthfully, fairly, and equitably.