Predictive Analytics Times
Deep Learning vs. Machine Learning: A Data Scientist’s Perspective


By: Rajendra

Originally published in Houseofbots.com

As artificial intelligence (AI) works its way into mainstream business practice, a variety of applications come up in conversations about how best to leverage the technology. In observing these conversations, I notice some writers using the terms machine learning (ML) and deep learning (DL) interchangeably. The two are actually different concepts in terms of the business problems they solve and the resources they require, and confusing them can lead to unwanted, and costly, results. Let's take a moment to set the record straight.

When we see AI making headlines – for things like Apple using facial recognition for iPhone security or the fabricated videos that mimic President Obama’s speech patterns – those applications usually fall into the category of deep learning. DL has actually been around for decades, but only in the last few years has it become computationally feasible on a large enough scale to make it an effective option.

Deep learning is a subset of machine learning, an approach to AI that enables applications to predict outcomes more accurately without being explicitly programmed to do so. A good example of ML at work is your email spam filter. Behind the filter is an algorithm that continuously "learns" about red flags that indicate possible spam or phishing messages. As a result, most email apps are able to reduce spam to 1-3 percent of all emails received. About 15 years ago, spam filters started shifting from rules-based systems (e.g., "Move emails from Nigerian princes into the spam folder.") to machine learning-based filters. A simple Bayesian ML algorithm could learn, from a large "spam" training set, which words, headlines, and IP addresses were most likely to indicate that an email was spam.
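The kind of Bayesian spam filter described above can be sketched in a few lines of Python. This is a minimal, illustrative naive Bayes classifier over individual words; the toy training messages and the add-one smoothing are my assumptions for the sketch, not the article's actual filter.

```python
# Minimal naive Bayes spam filter: learns from labeled examples which
# words are most likely to indicate spam, then scores new messages.
import math
from collections import Counter

def train(messages):
    """messages: list of (text, label) pairs, label in {"spam", "ham"}."""
    counts = {"spam": Counter(), "ham": Counter()}  # word counts per class
    totals = Counter()                              # message counts per class
    for text, label in messages:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Return the more likely label, using log-probabilities with
    add-one (Laplace) smoothing so unseen words don't zero out a class."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        score = math.log(totals[label] / sum(totals.values()))  # log prior
        n = sum(counts[label].values())
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

training = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch tomorrow with the team", "ham"),
]
counts, totals = train(training)
print(classify("free prize money", counts, totals))  # -> spam
```

A production filter would use far richer features (headlines, sender IP addresses, reputation signals) and a much larger training set, but the learning mechanism is the same: word statistics from labeled mail, not hand-written rules.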

For differentiation purposes, I'll refer to simple ML algorithms that have been commercially feasible for the past 15-20 years as "classic machine learning." These comprise a set of machine learning algorithms that a data scientist can run on a small data set with relative ease to generate predictions and forecasts, cluster data, detect outliers, and more.
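Two of the "classic" tasks just mentioned, forecasting and outlier detection, really can be done on a small data set with only a few lines of code. The sketch below uses ordinary least squares for a trend forecast and a z-score threshold for outliers; the sample numbers and the 2-standard-deviation cutoff are illustrative assumptions.

```python
# Classic-ML-style analysis on a small data set: a least-squares trend
# line for forecasting, and z-score-based outlier detection.
from statistics import mean, stdev

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (intercept, slope)."""
    mx, my = mean(xs), mean(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def zscore_outliers(values, threshold=2.0):
    """Flag points more than `threshold` standard deviations from the mean."""
    m, s = mean(values), stdev(values)
    return [v for v in values if abs(v - m) / s > threshold]

sales = [10, 12, 14, 16, 18]           # monthly sales, steadily growing
a, b = fit_line(range(len(sales)), sales)
forecast = a + b * len(sales)          # predict the next month
print(round(forecast, 1))              # -> 20.0 for this linear series

readings = [5, 6, 5, 7, 6, 40]         # one clearly anomalous reading
print(zscore_outliers(readings))       # -> [40]
```

The point is not the specific algorithms but the scale: no GPUs, no millions of examples, just a laptop and a spreadsheet-sized data set, which is exactly where classic ML has been commercially practical for two decades.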

CONTINUE READING: Read the complete article on Wired.com, where it was originally published.

About the Author:

I write columns on news related to bots, specifically in the categories of artificial intelligence, bot startups, and bot funding. I am also interested in recent developments in data science, machine learning, and natural language processing.
