Predictive Analytics World for Industry 4.0 Las Vegas 2019
June 16-20, 2019 – Caesars Palace, Las Vegas
This page shows the agenda for PAW Industry 4.0. Click here to view the full 7-track agenda for the five co-located conferences at Mega-PAW (PAW Business, PAW Financial, PAW Healthcare, PAW Industry 4.0, and Deep Learning World).
Blue circle sessions are for All Levels
Red triangle sessions are Expert/Practitioner Level
Pre-Conference Workshops - Sunday, June 16th, 2019
Full-day: 8:30am – 4:30pm
This one-day workshop reviews major big data success stories that have transformed businesses and created new markets. Click workshop title above for the fully detailed description.
Two and a half hour afternoon workshop:
This 2.5-hour workshop launches your tenure as a user of R, the well-known open-source platform for data analysis. Click workshop title above for the fully detailed description.
Pre-Conference Workshops - Monday, June 17th, 2019
Full-day: 8:30am – 4:30pm:
This one-day session surveys standard and advanced methods for predictive modeling (aka machine learning). Click workshop title above for the fully detailed description.
Full-day: 8:30am – 4:30pm:
Gain experience driving R for predictive modeling across real examples and data sets. Survey the pertinent modeling packages. Click workshop title above for the fully detailed description.
Full-day: 8:30am – 4:30pm:
This workshop dives into the key ensemble approaches, including Bagging, Random Forests, and Stochastic Gradient Boosting. Click workshop title above for the fully detailed description.
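The workshop's material isn't reproduced here, but the three ensemble approaches it names can be sketched in a few lines with scikit-learn on a synthetic dataset (the dataset and hyperparameters below are illustrative, not from the workshop); note that "stochastic" gradient boosting simply means fitting each tree on a random subsample of the training data:

```python
# Compare bagging, random forests, and stochastic gradient boosting
# on a synthetic classification task.
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, RandomForestClassifier,
                              GradientBoostingClassifier)
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

models = {
    "bagging": BaggingClassifier(n_estimators=50, random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=50, random_state=0),
    # subsample < 1.0 makes the boosting "stochastic"
    "gradient_boosting": GradientBoostingClassifier(subsample=0.8,
                                                    random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy {scores.mean():.3f}")
```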
Full-day: 8:30am – 4:30pm:
This one-day introductory workshop dives deep. You will explore deep neural classification, LSTM time series analysis, convolutional image classification, advanced data clustering, bandit algorithms, and reinforcement learning. Click workshop title above for the fully detailed description.
Day 1 - Tuesday, June 18th, 2019
A veteran who has applied deep learning at the likes of Apple, Samsung, Bosch, GE, and Stanford, Mohammad Shokoohi-Yekta kicks off Mega-PAW 2019 by addressing these big questions about deep learning and where it's headed:
- Late-breaking developments applying deep learning in retail, financial services, healthcare, IoT, and autonomous and semi-autonomous vehicles
- Why time series data is The New Big Data and how deep learning leverages this booming, fundamental source of data
- What's coming next and whether deep learning is destined to replace traditional machine learning methods and render them outdated
In the United States, between 1,500 and 3,000 infants and children die from abuse and neglect each year. Children ages 0-3 are at the greatest risk. Children who survive abuse, neglect, and chronic adversity in early childhood often suffer a lifetime of well-documented physical, mental, educational, and social health problems. The cost of child maltreatment to American society is estimated at $124-585 billion annually.
A distinctive characteristic of the infants and young children most vulnerable to maltreatment is their lack of visibility to professionals. Indeed, approximately half of the infants and children who die from child maltreatment are not known to child protection agencies before their deaths.
Early detection and intervention may reduce the severity and frequency of outcomes associated with child maltreatment, including death.
In this talk, Dr. Daley will discuss the work of the nonprofit, Predict-Align-Prevent, which implements geospatial machine learning to predict the location of child maltreatment events, strategic planning to optimize the spatial allocation of prevention resources, and longitudinal measurements of population health and safety metrics to determine the effectiveness of prevention programming. Her goal is to discover the combination of prevention services, supports, and infrastructure that reliably prevents child abuse and neglect.
The research on the state of Big Data and Data Science can be truly alarming. According to a 2019 NewVantage survey, 77% of businesses report that "business adoption" of big data and AI initiatives is a challenge. A 2019 Gartner report projected that 80% of AI projects will "remain alchemy, run by wizards" through 2020, and Gartner also reported in 2018 that nearly 85% of big data projects fail. With all these reports of failure, how can a business truly gain insights from big data? How can you ensure your investment in data science and predictive analytics will yield a return? Join Dr. Ryohei Fujimaki, CEO and Founder of data science automation leader dotData, to see how automation is set to change the world of data science and big data. In this keynote session, Dr. Fujimaki will discuss the impact of artificial intelligence and machine learning on the field of data science automation. Learn about the four pillars of data science automation (Acceleration, Democratization, Augmentation, and Operationalization) and how you can leverage them to create impactful data science projects that yield results for your business units and provide measurable value from your data science investment.
AI is framed by models, sensors and technologies. These often ignore the human who must deal with and trust AI outputs. How do we translate the mental models and senses that humans deploy daily into algorithms that take us from data to inference to action? With the explosion of sensors at the edge, how do we actually make sense at the edge? This presentation draws from a recent Intel study of over 250 people in manufacturing and its supporting ecosystem to explore what it takes to accelerate the adoption of Industry 4.0 in a systems of systems approach.
Many organizations are faced with the challenge of how to analyze their sensitive data without hosting it on any public cloud. This talk will focus on companies who collect data from their factory operations and are interested in predicting mechanical failures. The audience will get an overview of how to formulate their business problem, perform feature engineering and build a predictive maintenance model using R/Python.
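The session's actual model isn't shown here, but the workflow it describes can be sketched in Python: engineer rolling-window features from sensor readings, then train a classifier to flag impending failures. Everything below (column names, thresholds, the synthetic data) is hypothetical, for illustration only:

```python
# Hypothetical predictive maintenance sketch: rolling-window features
# from synthetic sensor data, then a random forest failure classifier.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "temperature": rng.normal(70, 5, n),
    "vibration": rng.normal(0.2, 0.05, n),
})
# Synthetic label: failures are more likely when both readings run hot.
risk = 0.02 + 0.5 * ((df.temperature > 78) & (df.vibration > 0.25))
df["failure"] = rng.random(n) < risk

# Feature engineering: rolling means capture drift preceding a failure.
df["temp_roll"] = df.temperature.rolling(10, min_periods=1).mean()
df["vib_roll"] = df.vibration.rolling(10, min_periods=1).mean()

X = df[["temperature", "vibration", "temp_roll", "vib_roll"]]
y = df["failure"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

# class_weight="balanced" compensates for failures being rare.
clf = RandomForestClassifier(n_estimators=100, class_weight="balanced",
                             random_state=0)
clf.fit(X_tr, y_tr)
print("recall on held-out failures:", recall_score(y_te, clf.predict(X_te)))
```

In a real deployment the features would come from actual machine telemetry, and recall on the failure class (how many true failures the model catches) typically matters more than raw accuracy.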
Field issue (malfunction) incidents are costly for a manufacturer's service department, and a typical telematics system has difficulty capturing useful information even with pre-set triggers. In this session, Yong Sun will discuss how a machine learning- and deep learning-based predictive software/hardware system has been implemented to address these challenges by 1) identifying when a fault will happen and 2) diagnosing the root cause on the spot based on time series data analysis. He will also cover a novel technique for addressing the lack of training data for neural network-based root cause analysis.
More than a single idea or technology, Industry 4.0 is a confluence of many old and new concepts working in tandem to unleash new sources of value. Going beyond the hype and ambiguity to understand what it really means for a system to have cognitive capability, listeners will be empowered with concrete ideas for transitioning from basic automated systems to cognitive systems.
We will discuss the approach used to learn topics and their place in a multi level hierarchy on hundreds of millions of text records. These methods are generalizable beyond the domain in which they were applied. We used a combination of supervised and unsupervised machine learning methods, which we will discuss at more length including the technologies, algorithms, and results.
3:55 pm - 4:15 pm
Industry 4.0 can suffer from a real-world application problem when industrial and manufacturing companies only view IoT as a solution during large-scale plant upgrades or new construction. This case study presents how a manufacturing company has been able to generate energy cost savings on the order of millions of dollars through targeted deployment of sensors and IoT-connected equipment into existing, large (and sometimes very old and dirty) machinery and factories. By lowering the threshold of what projects are considered worthy of an IoT investment, what were previously considered run-of-the-mill operations can suddenly provide insightful information and an exciting ROI.
4:20 pm - 4:40 pm
Plants in the electric utility sector face common operational challenges. They want to optimize output, lower operating costs, maintain reliability, and ensure safety, and they need to meet all of these goals simultaneously.
In this session we'll focus on the opportunities for deploying machine learning and data science methods in a plant setting. The presentation will cover use cases, big data tools, and the implications for plant operation optimization.
- Machine learning for the plant: when useful, when not
- How the proliferation of tailored sensors and IoT is changing operations
- Importance of explainable AI
- Checklist for applying data science
4:45 pm - 5:05 pm
Optoro’s three core data culture problems were the following:
• Fear of Data
• Inconsistent Use of Vocabulary and Metrics
• Data Mistrust
This presentation will outline the strategies that we used to combat these issues. This ongoing endeavor is yielding benefits to Optoro, including:
• Increased alignment on company goals
• Improved ease of communication between teams and with Senior Management
• Consistency across external messaging
The session helps participants understand the role of data, the importance of a data strategy in an organization, the types of business analytics to execute, and ten practical steps to develop a data strategy to improve business processes. We will explore how to conduct research to identify opportunities to minimize threats, manage risks, and improve performance.
The session will use the dataFonomics® methodology (a Data-to-Information Economics framework) as a platform for developing a data strategy. Attendees will be given an overview of the framework and its steps, along with practical knowledge on data analysis and how to develop a data strategy.
Day 2 - Wednesday, June 19th, 2019
Turning data into a business advantage through optimization is the goal of most organizations. UPS has been on a twenty-year journey to achieve this goal and has seen cost improvements reaching $1B annually. At the same time, UPS has been able to offer new products and services backed by data and analytics.
In today’s digital age, users expect a fast, reliable mobile experience. Degradations (also referred to as regressions) in mobile app performance affect not only user experience but also business metrics. However, existing mobile app release pipelines lack the infrastructure to detect regressions in an app's performance before it is rolled out to the world. At Uber, we are building a state-of-the-art mobile regression detection pipeline with the goal of detecting regressions as small as 1%. Our approach combines technological innovation with machine learning and statistical testing techniques to improve the sensitivity of the regression experiments.
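Uber's actual pipeline is not described in detail here; as a simplified sketch of the statistical-testing ingredient, a regression experiment can compare latency samples from a control build and a candidate build with a two-sample test (the latency distributions and thresholds below are invented for illustration):

```python
# Detect a ~1% latency regression between two builds with a
# nonparametric two-sample test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(500, 20, 5000)     # ms, baseline build
candidate = rng.normal(505, 20, 5000)   # ms, roughly 1% slower

# Mann-Whitney U avoids assuming latencies are normally distributed;
# alternative="less" tests whether control tends to be faster.
stat, p_value = stats.mannwhitneyu(control, candidate, alternative="less")
print(f"p-value: {p_value:.2e}")
if p_value < 0.01:
    print("regression detected")
```

The key practical point the abstract hints at: detecting shifts this small requires large samples and sensitivity-boosting techniques, since a 1% change is far smaller than the natural spread of the measurements.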
At GM we are committed to a world with zero crashes, zero emissions, and zero congestion. Avoiding crashes depends directly on hallmarks of human intelligence: learning from experience, adapting to new situations, and using knowledge to prevent accidents.
Advanced Driver-Assistance Systems (ADAS) are a major step taken by car manufacturers to prevent accidents. The effects of ADAS on human attention and perception differ depending on road conditions as well as area (e.g., rural vs. urban roads).
In a pair of case studies, we examine ADAS systems and insurance underwriting risk.
The core Bayesian idea, when learning from data, is to inject information — however slight — from outside the data. In real-world applications, meta-information is clearly needed — such as domain knowledge about the problem being addressed, what to optimize, what variables mean, their valid ranges, etc. But even when estimating basic features (such as rates of rare events), even vague prior information can be very valuable. This key idea has been re-discovered in many fields, from the James-Stein estimator in mathematics and Ridge or Lasso regression in machine learning, to shrinkage in biostatistics and the "Optimal Brain Surgeon" in neural networks. It’s so effective — as I’ll illustrate for a simple technique useful for wide data, such as in text mining — that the Bayesian tribe has grown from being the oppressed minority to where we just may all be Bayesians now.
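The shrinkage idea for rare-event rates can be made concrete with beta-binomial smoothing: blend each segment's raw rate with a global prior, so small-sample estimates get pulled toward the pooled rate. This is a generic illustration of the principle, not the speaker's specific technique, and the numbers are made up:

```python
# Shrink a noisy rare-event rate toward a global prior rate.
def shrunk_rate(events, trials, prior_rate, prior_strength):
    """Posterior-mean rate under a Beta prior with mean prior_rate
    and pseudo-count prior_strength."""
    return (events + prior_strength * prior_rate) / (trials + prior_strength)

global_rate = 0.02          # pooled rate across all segments (the prior)

# A segment with 1 event in 10 trials: the raw rate 0.10 is very noisy.
raw = 1 / 10
smoothed = shrunk_rate(1, 10, global_rate, prior_strength=100)
print(raw, round(smoothed, 4))   # smoothed estimate lies between prior and raw
```

With only 10 trials, the prior dominates and the estimate sits near 0.02; as the segment accumulates trials, the data take over and the estimate converges to the raw rate, which is exactly the "inject outside information, however slight" behavior described above.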
Altair Knowledge Works enables individuals and organizations to incorporate more data, unite more minds, and engender more trust in analytics and data science. The solution helps organizations get more from internal, external, and enterprise-wide sources of data. Knowledge Works makes more data usable, maximizing the breadth, integrity, value, and insight of your analytics, no matter the origin, format, or narrative of the data. It eliminates the self-service silos, duplications, and versioning errors that create dubious analytics. Knowledge Works is a secure, unified data management platform that ensures data integrity, lineage, and control. The result is greater confidence, bolder insights, and smarter outcomes. It's a platform for teams with different skill sets, enabling them to combine their heterogeneous strengths and collaborate with their peers on machine learning, predictive models, and automated decision making with precision, efficiency, and agility, from data wrangling to intelligence, model creation, and visualization of model results.
The companies getting the most value from advanced analytics spend far more of their time and money embedding analytics into their core workflows than others do. The most successful, in fact, spend more than half their analytics budget not on building analytics but on deploying and operationalizing them. Companies that don't complete this last mile, stopping once the core analytics are built, see their analytics investments go to waste. Join this expert panel to hear what you can do to embed analytics on your front line and maximize the return on your analytics investment.
Vistra Energy is one of the largest energy companies in the United States, owning both power generation and retail operations in extremely competitive markets throughout the country. We combine the big data opportunities we have available as a utility with advanced analytics to provide premium customer services that differentiate us from other utility providers. We will present three case studies demonstrating how we are able to leverage our firehose of 15-minute interval IoT device energy usage data from over 1.5 million customers with advanced modeling techniques to provide added value to our customers and increase brand loyalty.
Acronyms abound in predictive analytics, and machine learning is no exception. The discipline of predictive analytics has been used by businesses since the end of World War II, and machine learning has been at the core of this activity since its earliest business applications. In those early days, the "machine" itself, and not the human, identified the predictive algorithms that would optimize a given business solution, albeit in a more simplistic manner. With the advent of Big Data, this concept of machine learning has expanded to more complex forms.
Post-Conference Workshops - Thursday, June 20th, 2019
Full-day: 8:30am – 4:30pm:
This one-day session reveals the subtle mistakes analytics practitioners often make when facing a new challenge (the “deadly dozen”), and clearly explains the advanced methods seasoned experts use to avoid those pitfalls and build accurate and reliable models. Click workshop title above for the fully detailed description.
Full-day: 8:30am – 4:30pm:
Gain the power to extract signals from big data on your own, without relying on data engineers and Hadoop specialists. Click workshop title above for the fully detailed description.
Full-day: 8:30am – 4:30pm:
During this workshop, you will gain hands-on experience deploying deep learning on Google’s TPUs (Tensor Processing Units) – held the day immediately after the Deep Learning World and Predictive Analytics World two-day conferences. Click workshop title above for the fully detailed description.