Machine Learning Times

3 years ago
Explainable Machine Learning, Model Transparency, and the Right to Explanation


Check out this topical video from Predictive Analytics World founder Eric Siegel:

A computer can keep you in jail, or deny you a job, a loan, insurance coverage, or housing – and yet you cannot face your accuser. The predictive models generated by machine learning to drive these weighty decisions are generally kept locked up as a secret, unavailable for audit, inspection, or interrogation. The video above covers explainable machine learning and two loudly advocated machine learning standards: model transparency and the right to explanation. Eric discusses why these standards generally are not met and overviews the policy hurdles and technical challenges that are holding us back.

About the Author

Eric Siegel, Ph.D., is a leading consultant and former Columbia University professor who makes machine learning understandable and captivating. He is the founder of the Predictive Analytics World and Deep Learning World conference series, which have served more than 17,000 attendees since 2009, the instructor of the acclaimed online course “Machine Learning Leadership and Practice – End-to-End Mastery”, a popular speaker who’s been commissioned for more than 110 keynote addresses, and executive editor of The Machine Learning Times. He authored the bestselling Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die, which has been used in courses at more than 35 universities, and he won teaching awards when he was a professor at Columbia University, where he sang educational songs to his students. Eric also publishes op-eds on analytics and social justice. Follow him at @predictanalytic.
