Machine Learning Times

This excerpt is from SmartDataCollective.

Do Predictive Workforce Analytics Actually Work?


Recently HR industry expert and father of the HR Tech conference, Bill Kutik, wrote about the hype around predictive analytics. In his article, he quotes Constellation Research analyst Holger Mueller saying, “It all comes down to whether the models ‘really work’ when applied across a variety of customers with very different data landscapes.” See Holger’s plenary session at Predictive Analytics World for Workforce: April 3-6.

So while many HR software vendors talk the talk of predicting “at risk employees,” how many can prove they walk the walk, and that their predictions actually work? How can you ensure a vendor’s claim to predict employee retention risks is valid? What should you look for?

First, why does predicting “risk of exit” even matter?

Since the peak of the recession in 2009, the number of unemployed persons per job opening in the US has steadily declined, and is now back at pre-recession levels. On top of that, Bureau of Labor Statistics data shows that it is harder and harder for companies not only to hire, but also to retain talent.

As a result, retention is understandably a key objective for most HR organizations. In an attempt to quantify the impact of attrition, many have tried to connect turnover to business impact. One of the most comprehensive studies, a meta-analysis of data from 48 separate studies, showed that turnover has a real impact on financial results, customer service, labor productivity, and safety outcomes.

Many more have tried to quantify the impact of turnover by estimating the direct and indirect costs. While many opinions have been shared, the research results on the costs associated with attrition remain varied, largely because the roles and the factors considered also vary. A full accounting needs to extend beyond hiring and training to include separation, productivity, and lost knowledge.

At a company with 5,000 exempt employees (such as administrative, executive, professional, computer, and sales employees) and a voluntary turnover rate of 10 percent (more than one percentage point lower than the average rate across industries in 2014), even conservative estimates put unwanted turnover at more than $30 million in replacement costs in a single year.
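To make the arithmetic concrete, here is a back-of-the-envelope sketch. The headcount and turnover rate come from the paragraph above; the per-replacement cost of $60,000 is an assumed figure, chosen only so a conservative total reproduces the $30 million cited here.

```python
# Back-of-the-envelope attrition cost estimate. Headcount and turnover rate
# are from the article; cost_per_replacement is an assumed illustrative figure.
headcount = 5000
voluntary_turnover_rate = 0.10     # 10% voluntary turnover per year
cost_per_replacement = 60_000      # assumption: conservative all-in cost (USD)

exits_per_year = round(headcount * voluntary_turnover_rate)  # 500 exits
annual_cost = exits_per_year * cost_per_replacement

print(exits_per_year)  # 500
print(annual_cost)     # 30000000, i.e. the "$30 million" figure
```

Published per-replacement estimates vary widely by role and seniority, which is exactly why the research results on attrition costs remain so varied.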

The bottom line can be hit further by spending on well-intentioned but misapplied retention tactics, such as raises, bonuses, or promotions, put in place by HR or managers in an attempt to prevent resignations. When these tactics are applied without hard data to back them up, their results can be limited. Worse, money can be spent needlessly to retain people who are not actually at risk of leaving.

As described in the article “ConAgra Foods Builds HR Analytics Program,” using data to guide the implementation of retention strategies provides “the difference between carpet bombing and using laser-guided missiles.” Rather than applying retention programs across the organization, HR can sharpen its focus and apply dollars where they will have the most impact. If you can leverage predictive analytics to correctly identify employees who are at risk of leaving, in particular top performers and people in key roles, you can avoid these costs while also enabling productivity and performance gains. Here, the key word is correctly.

Why is proving that predictive analytics work so hard?

First, with any predictive model, you need a means to validate that your predictions hold up. Visier’s data scientists have found that a minimum of two to three years of data is required for the analysis to be valid (the more the better). It’s like the statement most parents have made to their kids at some point: “How do you know you don’t like it if you haven’t tried it?” Or, in our case: how do you know the predictions are working if you haven’t made a prediction that can be validated against real outcomes?
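The validation step described above can be sketched as a simple backtest: make predictions on a historical window, then score them against what actually happened. Everything below is hypothetical, including the `Employee` fields, the sample outcomes, and the toy `predict_at_risk` rule standing in for a real model.

```python
# Minimal backtest sketch: predict on historical records, then compare
# against the known outcomes to measure whether the predictions work.
from dataclasses import dataclass

@dataclass
class Employee:
    tenure_years: float
    recent_raise: bool
    left_within_3_months: bool  # known outcome, used only for validation

def predict_at_risk(e: Employee) -> bool:
    # Toy stand-in for a real model: short tenure and no recent raise.
    return e.tenure_years < 1.5 and not e.recent_raise

history = [
    Employee(0.5, False, True),   # predicted at risk, did leave
    Employee(0.8, False, False),  # predicted at risk, stayed (a miss)
    Employee(4.0, False, False),
    Employee(1.2, False, True),
    Employee(6.5, True,  False),
]

correct = sum(predict_at_risk(e) == e.left_within_3_months for e in history)
accuracy = correct / len(history)
print(f"{accuracy:.0%}")  # 80%
```

With two to three years of data, each prediction window has real resignations to score against, which is the point the paragraph above is making.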

Secondly, the patterns behind why people make decisions cannot be boiled down to simple factors; marketers have been trying to figure that out for years. It is “data with feelings,” and finding the patterns inherent in such data requires looking across as many varied sources of information as possible. Like mining for gold, the wider your search, the more likely you are to find the hidden nugget; in predictive analytics, that nugget is insight.

Thirdly, the accuracy of the predictions depends on the data used to create the model. For instance, a model built on the factors at play at one company does not necessarily apply at a second company. Compounding this challenge, the same may be true of a model for one year compared to the next within the same company. Approaches need to take this dynamic nature into account.

The problem is that most “at risk” predictive analytics capabilities available today are in their infancy — they have simply not been used for long enough by enough companies for enough employees on enough sources of data.

Validating an “at risk” predictive analytics technology

At Visier, we wanted to put our own “at risk” predictive analytics to the test. To do so, we applied everything we knew about predictive analytics to anonymized data within our cloud platform, using our own “at risk” predictive analytics technology in the process. We found that Visier is up to 8x more accurate at predicting who will resign over the next three months than guesswork or intuition, and up to 10x more accurate if you focus on the top 100 “at risk” employees.
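Multiples like “8x” or “10x more accurate than guesswork” are typically lift figures: the precision among the employees the model flags, divided by the base resignation rate a random guess would achieve. The sketch below uses hypothetical numbers to show the calculation; it is not Visier’s actual methodology or data.

```python
# Lift of a ranked "at risk" list versus random guessing (hypothetical numbers).
population = 10_000
actual_leavers = 400                     # 4% base rate: random guessing is
                                         # right about 4% of the time
flagged = 100                            # top 100 employees by risk score
true_positives = 40                      # assumption: 40 of the 100 resigned

precision = true_positives / flagged     # 0.40
base_rate = actual_leavers / population  # 0.04
lift = precision / base_rate

print(round(lift, 2))  # 10.0 -> "10x more accurate" for the top 100
```

Lift naturally rises as you focus on a smaller, higher-scoring slice of the ranked list, which is consistent with the top-100 figure being larger than the overall one.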

By applying our Continuous Machine Learning to employees over a certain period of time, we are able to assign a “risk” score and rank them from highest to lowest. All of these calculations happen dynamically and instantly, so when an HR analyst, business partner, or leader asks which employees are “at risk” in a specific employee sub-group (for instance, specifying a role, location, tenure, and performance level), the system automatically provides the relevant results, based on the latest data applicable to the user.
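The score-and-filter workflow described above can be sketched as follows. The field names, scores, and threshold are all hypothetical; in a real system the risk scores would come from a trained model and refresh as new data arrives, rather than being hard-coded.

```python
# Rank employees by risk score and filter to a sub-group (role, location,
# tenure). All records and scores here are hypothetical placeholders.
employees = [
    {"name": "A", "role": "Engineer", "location": "NYC", "tenure": 0.8, "risk": 0.91},
    {"name": "B", "role": "Engineer", "location": "NYC", "tenure": 3.2, "risk": 0.35},
    {"name": "C", "role": "Analyst",  "location": "SF",  "tenure": 1.1, "risk": 0.77},
    {"name": "D", "role": "Engineer", "location": "NYC", "tenure": 0.4, "risk": 0.66},
]

def at_risk(pool, role=None, location=None, max_tenure=None, threshold=0.5):
    """Return the matching sub-group, ranked from highest to lowest risk."""
    hits = [e for e in pool
            if e["risk"] >= threshold
            and (role is None or e["role"] == role)
            and (location is None or e["location"] == location)
            and (max_tenure is None or e["tenure"] <= max_tenure)]
    return sorted(hits, key=lambda e: e["risk"], reverse=True)

for e in at_risk(employees, role="Engineer", location="NYC"):
    print(e["name"], e["risk"])  # A 0.91, then D 0.66
```

Each filter argument mirrors one of the sub-group criteria named above (role, location, tenure), so an analyst’s question maps directly onto a query against the latest scores.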

With this information in hand, HR can take action to address the most vulnerable groups or prepare for certain turnover.

HR’s critical role in predictive analytics

Despite the hype, predictive analytics will not replace human intervention: they won’t tell you the one clear course of action to take, particularly when dealing with data that has feelings.

Predictive analytics is about more than who will leave; it’s about why they are leaving. In many ways, predicting why is more valuable than naming individuals, as it allows HR to develop thoughtful, refined, long-term programs to reduce resignation rates by targeting root causes.

Author Bio:

Dave Weisbeck is a seasoned software executive whose experience ranges from building development teams to growing multi-billion-dollar businesses as a General Manager. With twenty years in the information management and analytics industry, his prior roles include developing Crystal Decisions and Business Objects products and product strategy. Most recently, Dave was the Senior Vice President and General Manager responsible for Business Intelligence, Enterprise Information Management, and Data Warehousing at SAP. He holds a position on the HR.com Advisory Board.
