Tag management: Emerging areas for predictive analytics


An exponential increase in data volume improves business practice only when the right analytical processes are deployed. Today, even managing real-time data and having a firm grasp on current trends isn’t enough. Without predictive analytics, businesses are stuck in reactive mode. Organizations don’t just need to know what users are doing right this minute; they need to forecast the probable course of future trends, and tag management can help them do it. At a minimum, they need to get closer to that mark than the competition. With so much at stake, it’s no surprise that the Predictive Analytics World conference drew quite a crowd to San Francisco in March 2014.

At the event, TheServerSide found time to speak with two consulting firms about what clients want and what they need to make analytics work in the world of big data. Bryan Bell, VP of enterprise solutions at Expert System, and James Niehaus, VP of analytics and digital strategy at Ensighten, shared what’s hot in the predictive analytics space. Their perspectives provided a glimpse of what the next few years hold for this highly volatile arena.

Everyone wants tag technology — right now!

Tag management was the term on everyone’s lips at the conference. Real-time data — if used correctly — is the magic elixir that can boost customer satisfaction, increase sales, and lower risk in interactions. Consumers in an online interaction can be guided to the right inventory items or offered the most tempting add-ons based on their current behavior. In phone transactions, representatives can make better guesses about how to serve customers and enhance revenue.

But collecting data, sifting it for relevant content, and mapping it onto the appropriate engine for processing is difficult to do in the right sequence at speed. Analyzing the data before all the critical pieces are in place yields useless results; waiting too long can mean a missed opportunity. With tagging, data can be syndicated in real time without consuming a huge amount of computing resources.
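As a rough sketch of how such a tagging layer can pull that routing decision out of page code and into one place, consider the following. The names here (TagEvent, TagRule, track) are invented for illustration and are not any vendor’s actual API:

```typescript
// A minimal sketch of a tag-management layer; invented names, not a real API.

type TagEvent = { name: string; payload: Record<string, unknown> };

type TagRule = {
  matches: (event: TagEvent) => boolean; // which events this tag cares about
  fire: (event: TagEvent) => void;       // forward the event to its engine
};

const rules: TagRule[] = [];

// Tags are registered as data-driven rules instead of page-code changes.
export function addRule(rule: TagRule): void {
  rules.push(rule);
}

// Page code calls track() once per user action; routing happens here, so the
// sequencing concern lives in one place rather than scattered across pages.
export function track(event: TagEvent): void {
  for (const rule of rules) {
    if (rule.matches(event)) rule.fire(event);
  }
}

// Example: route add-to-cart events to a (hypothetical) recommendations engine.
addRule({
  matches: (e) => e.name === "add-to-cart",
  fire: (e) => console.log("send to recommendations engine:", e.payload),
});
track({ name: "add-to-cart", payload: { sku: "A-1001" } });
```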

Why isn’t tagging happening more often?

According to Niehaus, the issue is one of limited resources and competing priorities. “At the end of the day, marketing has a set of agenda items around how to update their site to provide a better and more relevant experience to their end users. To do that involves code.” This means marketing can’t move forward without help from IT. Obviously, they would love for IT to bump tagging activities to the top of the to-do list. The increase in sales and customer satisfaction might be immediate and easy to measure, giving marketing a big boost in performance.

However, IT has the responsibility to manage resource capacity for the full enterprise and feels the pressure from all sides. Even something apparently simple like adding a single line of tagging code to a page might mean jumping through lots of hoops — especially if multiple stakeholders must approve the change. Each layer of procedural complexity is one more barrier in the way of implementation. As a result, solutions in this space focus on creating an IT-friendly way to add code fast, while still abiding by enterprise protocols.
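In practice, that “single line” usually takes the form of an asynchronous loader for a hosted tag container: the page keeps rendering while the container loads, and later tag changes happen inside the container rather than in page code. A minimal sketch, with a placeholder URL rather than a real vendor endpoint:

```typescript
// The one-time "single line" pattern: IT adds an async loader once, then
// individual tags are managed inside the hosted container.
// The container URL below is a placeholder, not a real vendor endpoint.
function loadTagContainer(containerUrl: string): void {
  const script = document.createElement("script");
  script.src = containerUrl;
  script.async = true; // load without blocking page rendering
  document.head.appendChild(script);
}

loadTagContainer("https://tags.example.com/container.js");
```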

The challenge of analyzing ambiguous data

Language is, by nature, highly ambiguous. English, with its loose rules and its history of borrowing terms from many other languages, is even more confusing than most. This means unstructured text is incredibly difficult to analyze effectively. For a typical business, social media interactions, customer service calls, and other text-heavy data sources cannot be mined until they are disambiguated; otherwise, much of the most important information is lost in the chatter.

Disambiguation can be done using powerful semantic networks that assign meaning to ambiguous terms by examining the context. For example, the word “bear” can have more than one meaning. However, if a person says a bear chased him through the woods, listeners immediately know he isn’t talking about the right to bear arms (unless that firearm is loaded for bear).
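As a toy illustration of that context test, the sketch below scores each candidate sense of “bear” by counting how many of its signature words appear in the surrounding text, loosely in the spirit of the classic Lesk algorithm. The sense inventory is invented; a production semantic network is far richer:

```typescript
// Toy word-sense disambiguation: pick the sense whose signature words
// overlap most with the surrounding text. The sense inventory is invented.

const senses: Record<string, string[]> = {
  "bear:animal": ["chased", "woods", "forest", "claws", "grizzly"],
  "bear:carry":  ["arms", "burden", "weight", "right", "responsibility"],
};

function disambiguate(word: string, context: string): string {
  const tokens = new Set(context.toLowerCase().split(/\W+/));
  let best = "unknown";
  let bestScore = 0;
  for (const [sense, signature] of Object.entries(senses)) {
    if (!sense.startsWith(word + ":")) continue; // only senses of this word
    const score = signature.filter((w) => tokens.has(w)).length;
    if (score > bestScore) {
      best = sense;
      bestScore = score;
    }
  }
  return best;
}

console.log(disambiguate("bear", "A bear chased him through the woods"));
// -> "bear:animal" ("chased" and "woods" match that sense's signature)
```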

What happens after the meaning is clear?

The next step is to group data in ways that make sense for the business. Bell shared the basics of the process. “Once you have an engine in place as a part of your workflow and you have the ability to add in the contextually relevant terminology, you can now take that and cluster content around core concepts or categorize content around specific ideas that are known to the enterprise.”
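Once each document carries disambiguated concept tags, the clustering step Bell describes can begin as simply as inverting that tagging: group documents under every concept they mention. A minimal sketch, with an invented Doc shape and sample concepts:

```typescript
// Group tagged documents by concept; the Doc shape and tags are illustrative.

type Doc = { id: string; concepts: string[] };

function clusterByConcept(docs: Doc[]): Map<string, Doc[]> {
  const clusters = new Map<string, Doc[]>();
  for (const doc of docs) {
    for (const concept of doc.concepts) {
      const bucket = clusters.get(concept) ?? [];
      bucket.push(doc);
      clusters.set(concept, bucket);
    }
  }
  return clusters;
}

const docs: Doc[] = [
  { id: "ticket-17", concepts: ["billing", "refund"] },
  { id: "tweet-903", concepts: ["shipping"] },
  { id: "call-44", concepts: ["billing"] },
];
console.log(clusterByConcept(docs).get("billing")?.map((d) => d.id));
// -> ["ticket-17", "call-44"]
```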

There is often a hidden benefit as well: an organization can surface information it didn’t even know was in its data, now tagged, organized, and structured for easy exploration. Armed with this deeper understanding of the consumer, businesses can make more educated guesses and smarter decisions.

By Jason Tee
Originally published at www.theserverside.com
