Predictive Analytics World for Business 2023
June 18-22, 2023 | Red Rock Casino Resort & Spa, Las Vegas
TRACK TOPICS – The three tracks of the main two-day conference cover these topics:
Analytics operationalization & leadership
Advanced ML methods & MLOps
Cross-industry applications & workforce analytics
Blue circle sessions are for All Levels
Red triangle sessions are Expert/Practitioner Level
Workshops - Sunday, June 18th, 2023
Full-day: 8:30am – 4:30pm PDT
Gain experience using R for predictive modeling across real examples and data sets. Survey the pertinent modeling packages.
Workshops - Monday, June 19th, 2023
Full-day: 8:30am – 4:30pm PDT
This one-day session surveys standard and advanced methods for predictive modeling (aka machine learning).
Full-day: 8:30am – 4:30pm PDT
Machine learning improves operations only when its predictive models are deployed, integrated and acted upon – that is, only when you operationalize it.
Full-day: 8:30am – 4:30pm PDT
This one-day introductory workshop dives deep. You will explore deep neural classification, LSTM time series analysis, convolutional image classification, advanced data clustering, bandit algorithms, and reinforcement learning.
Predictive Analytics World for Business - Las Vegas - Day 1 - Tuesday, June 20th, 2023
Join Kian Katanforoosh, CEO and Founder of Workera, as he explores the profound impact of generative AI on the workforce and the evolution of personalized learning. With his rich experience in AI education, having taught AI to over 4 million people alongside Prof. Andrew Ng as a founding member of DeepLearning.AI, Kian's insights are uniquely informed and forward-thinking.
In this keynote, Kian will unravel how generative AI is reshaping learning, emphasizing the pivotal role of skills data in actualizing personalized learning. He will discuss the harnessing of this data to tailor learning experiences to individual needs, track progress, identify improvement areas, and improve workforce management.
Drawing from his experiences as a founding member of DeepLearning.AI and the co-creator of the popular Stanford Deep Learning Class, Kian will share his vision for a future where learning is as unique as we are. Attend this session for a deep dive into the convergence of AI, personalized learning, and workforce transformation.
Google continues to take a bold and responsible approach to developing and deploying AI through the company’s infrastructure, tools, products, and services. Google brought further AI breakthroughs into the real world through Google Cloud’s launch of the next wave of generative AI across core areas of their business, and new partnerships and programs grounded in Google’s commitment to an open AI ecosystem. At the same time, AI, as a still-emerging technology, poses complexities and risks; and the development and use of AI must address these risks in a structured, transparent, and accountable way. A robust governance structure – and rigorous testing and ethics reviews – is necessary to put responsible AI principles into practice. And with AI regulation coming soon, Jen will share learnings, challenges, and practical tips on how Google is maturing its responsible AI practices, processes, and tools ahead of growing regulatory requirements, global standards, and consumer expectations.
In this session, Juan Acevedo, a machine learning architect at Google, will discuss how organizations can leverage Google Cloud's generative AI products to bring value to their businesses securely and responsibly. Juan will cover what you can do right now with Google Cloud technology and how to practice responsible generative AI. This session is for those who are interested in learning more about generative AI and how it can be used to improve their businesses.
ML and AI projects are technology-heavy, data-rich and rely on increasingly large teams with deep technical skills and expertise. As more companies make larger investments in ML and AI, the pressure to succeed in these complex projects is increasing. Still, many of these projects fail. They fail not because the technology fails, not because the data is poor or because the team lacks skill, but because they were not set up for success. The way these projects were conceived, framed, and begun destined them for failure. In this session to kick off the business track, you'll learn how to set ML and AI projects up for success by getting that first step right. You'll see how to frame the problem correctly, learn why step 1 has to be business-led and understand why deployment and operationalization depend on getting step 1 right.
Predictive modelers love building models and then comparing them to determine the best one to deliver to the stakeholder. For regression, the common metrics are R^2, mean squared error, root mean squared error, and mean absolute error. For classification, we usually see the confusion matrix as the basis for accuracy: precision/recall, specificity/sensitivity, and percent correctly classified. These all have their place in our toolbox.
However, in many projects, if not the majority of them, the business doesn’t care about any of these. The model is intended to increase revenue or minimize churn. If the analyst uses a standard metric, that modeler may optimize the standard metric but miss out on better models for the business. In this talk, alternative metrics will be explored that improve the effectiveness of the models operationally for the business.
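The gap between standard and business metrics can be sketched in a few lines. This is a hypothetical churn-campaign example, not the speaker's method: all costs, values, and model scores are invented for illustration. Lowering the decision threshold worsens accuracy yet raises expected campaign profit:

```python
import numpy as np

# Hypothetical churn campaign: contacting a customer costs $2; retaining a
# would-be churner is worth $40. These numbers are invented for illustration.
CONTACT_COST, SAVE_VALUE = 2.0, 40.0

def accuracy(y_true, y_prob, threshold):
    # Standard metric: fraction of correct churn/no-churn classifications
    return float(np.mean((y_prob >= threshold) == y_true))

def campaign_profit(y_true, y_prob, threshold):
    # Business metric: value of churners reached minus total contact costs
    targeted = y_prob >= threshold
    saves = np.sum(targeted & (y_true == 1))
    return float(saves * SAVE_VALUE - np.sum(targeted) * CONTACT_COST)

# Toy scores: churners (1s) are rare, so a high threshold looks "accurate"
y_true = np.array([1, 0, 0, 0, 1, 0, 0, 0, 0, 0])
y_prob = np.array([0.6, 0.4, 0.1, 0.2, 0.35, 0.05, 0.3, 0.15, 0.1, 0.2])

acc_high, acc_low = accuracy(y_true, y_prob, 0.5), accuracy(y_true, y_prob, 0.3)
profit_high, profit_low = campaign_profit(y_true, y_prob, 0.5), campaign_profit(y_true, y_prob, 0.3)
# Lowering the threshold drops accuracy (0.9 -> 0.8) but raises profit ($38 -> $72)
```

The model that a standard metric ranks best is not necessarily the one the business metric ranks best, which is the talk's point.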
The cruise industry is an ideal crucible for enterprise applications of data science and machine learning. Its ships are veritable mobile cities on the water, powered by full-blown industrial operations. Pricing and revenue management are driven by complex stochastic optimizations, demand forecasting, and price-response predictions. Marketing & eCommerce activities target guests globally with mixes of messages, promotions and recommendations. And a portfolio of hotels and a global supply chain must be managed to provide millions of guests all that they need locally on the ship. This session will cover case studies for how we leverage math and data to make this work, and in particular how we restarted from 18 months of essentially zero data (aka the pandemic).
Organizations must continually reimagine products and services to stay competitive. Technological acceleration of AI and data makes innovation more complex, creating a need for continual upskilling and reskilling. In this talk, Kian Katanforoosh, CEO of Workera, explores how people are at the center of successful skills transformation. He’ll share how the best organizations undergoing an AI transformation are evolving, how their workforce is gaining skills, how they use skills data to power talent strategies, and what the future of work looks like for today’s enterprise.
While machine learning algorithms can produce valuable results, evaluating the performance of the resulting models can be confusing, at least to the layman. Often the recipient of the results will hear about Receiver Operating Characteristic curves, a KS statistic, a mean squared error, confusion matrices, or some other foreign term. Frequently, managers and other users of predictive algorithms are not fluent in these concepts, and they might find themselves as confused as a confusion matrix might leave them. This session will focus on practical measures of performance evaluation designed for the non-quantitative executive, as well as the data scientist who needs to present results to less analytically inclined audiences. The end result is a simple scorecard measuring ML algorithm performance.
Machine learning (ML) is a rapidly growing field that opens up many opportunities to transform workflows for many users. This presentation will introduce how to enhance user productivity using machine learning and replace repetitive manual actions with intelligent ML suggestions. This talk will cover the following:
- Collection and labeling industry specific data
- Developing a universal ML workflow to process users' data (image and text)
- Instrumentation and feedback collection that will help drive future improvements by using incremental learning techniques
To deliver world-class results while growing and retaining talent, the proper foundation of tools, practices, and leadership is required. Brandon Southern has architected analytics environments for multiple organizations throughout his career and shows how and why it is important to incorporate standard software development, quality assurance, and project management practices into the analytics landscape.
In this keynote session, Brandon Southern discusses:
- Architecting the analytics environment using a proven framework
- Educating and enabling analysts with the proper tools and training
- Structuring teams with a focus on elevating and retaining talent
We illustrate a Living Digital Twin of a fleet of Electric Vehicles that gives actionable predictions of battery degradation over time. Since each vehicle takes a different route and has different charging and discharging cycles over its lifetime, the battery degradation for each vehicle will be different. We use a scalable predictive modeling framework deployed across a distributed computing architecture in the cloud to make individualized predictions for each EV battery in the fleet to reflect the degraded performance accurately. The neural network battery model is calibrated using the parameters that had the most significant impact on the output voltage through the Unscented Kalman Filter (UKF) method, a Bayesian technique for parameter estimation of non-linear system behavior. We simulated in-production real-world operations by having the vehicles “drive” the routes using synthetic datasets and showed how the calibrated model provides more accurate estimates of battery degradation. Using a Living Digital Twin to calculate the remaining range and battery State of Health addresses problems of range anxiety in the EV automotive industry and can drive the value of EVs in the market.
Despite the rapid evolution of AI, projects still fail at a disappointingly high rate. In the past, capturing data at scale and building models was the challenge, but today we're confronted with the issue of making AI more robust while avoiding the risk of unintended consequences. While the tools are new, many challenges remain the same.
In this talk, I will share real-world examples of improving business processes at a tier 1 trauma hospital that demonstrate:
- How to build the business case for an AI project (and get buy-in)
- Navigating AI project management to prevent failure
- How to mitigate the risks of unintended consequences from using AI
Product recommendation is at the heart of the Personalization group's efforts to help Albertsons customers. Deep learning has become the go-to approach for recommendation, so the group has begun applying it to enhance new product recommendations. First, leveraging transaction data and the product catalog, we built Customer DNA and Product DNA models. The Customer DNA model captures customer characteristics such as purchase behavioral patterns, dietary preference, e-com affinity, and customer location, and embeds them into a vector of numbers. Similarly, the Product DNA model captures product characteristics (e.g., is the product organic and/or sugar-free?) and product-product associations, e.g., bread and peanut butter are usually purchased together. Second, we leverage these models to build a next-generation recommendation system inspired by the Wide and Deep recommendation model architecture. Our experiments building the framework have generated favorable results, and we will share our journey from model conception to putting it in production to better serve our customers.
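The retrieval step such embedding models enable can be shown with a toy sketch. The three-dimensional "DNA" vectors and product names below are invented, not Albertsons data; the point is only that once customers and products share an embedding space, recommendation reduces to a similarity search:

```python
import numpy as np

# Toy illustration with invented 3-dimensional "DNA" vectors; a real system
# would learn much higher-dimensional embeddings from transaction data.
customer_dna = np.array([0.9, 0.1, 0.8])
product_dna = {
    "organic bread": np.array([0.8, 0.2, 0.7]),
    "soda":          np.array([0.1, 0.9, 0.2]),
    "peanut butter": np.array([0.7, 0.3, 0.9]),
}

def cosine(a, b):
    # Similarity between two embedding vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = {name: cosine(customer_dna, vec) for name, vec in product_dna.items()}
best = max(scores, key=scores.get)  # product whose DNA best matches this customer
```

At production scale this nearest-neighbor search is typically served by an approximate index rather than a full scan.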
Delivering 30K+ predictions every month means monitoring and managing 30K+ error metrics every month. In this presentation, Jodi Blomberg will illustrate how to go beyond MSE-type metrics in high volume forecasting to target the highest value errors in large scale predictions.
This talk provides a 3-dimensional framework for understanding professional competence and career progression. The framework covers primary skills (analytics), complementary skills (oft-overlooked "soft skills"), and impact (how one's work advances the mission). Once these are defined, the speaker provides actionable guidance on how to develop a roadmap to achieve raises, bonuses, and promotions via a one-year plan. The content is optimized for early-career professionals (<5 years' experience) in need of guidance on how to move up, and for leaders of early-career professionals in need of a framework to help cultivate their talent.
As John Deere continues to evolve as a technology company, we must also look at bringing existing systems and legacy data into this new world. With a fresh slate, we can re-evaluate "the way we've always done it" and look to enhance the data experience for users of all skill levels. As we embrace cloud solutions for our data, we can focus on questions such as: who should really govern data, and why? How can we enable more users to load the data that's important to them? And how can we reduce the barriers to entry for employees who are new to data and analytics and allow them to be successful?
This case study presents how Jetson Electric (ridejetson.com) adopted analytics to become a data-driven decision maker, better understanding its customers' mindsets and producing analytical use cases focused on operational efficiency.
In late 2021, Jetson was facing challenges in its after-sales operations, especially in the brick-and-mortar retail sales channel. The company decided to use analytics to address the challenge. There were three obstacles to pursuing this agenda: Jetson's vision of analytics was technical, lacking business insights; the data lake was nonexistent; and the data was siloed on the retailers' websites.
The business challenge drove the team and the technical evolution. The project analytically redesigned Jetson's relationship with retailers regarding after sales, focusing on total P&L, performing root cause analysis, and better negotiating chargebacks and cash flow. The team combined techniques, using descriptive analytics, regressions, and machine learning to structure the recommendation models. In parallel, the analytics tech operation evolved by adding Databricks as the collaboration layer and structuring the data lake architecture and governance.
The results were impressive: double digit cost savings in after sales, a never-before achieved comprehension of end-customer mindsets, and data much more ready to use.
Rexer Analytics began surveying data scientists in 2007. This year's Data Science Survey is a collaboration with "Predictive Analytics" author and conference series founder Eric Siegel. In this session, Karl and Eric will present preliminary results from this year's survey. Topics will include algorithm choices, data science job satisfaction trends, deep learning, model deployment, and deployment challenges.
3:55 pm - 4:15 pm
At Northwestern we have developed a system built to consume and capitalize on IoT infrastructure by ingesting device data and employing modern machine learning approaches to infer the status of various components of the IoT system. It is built on several open source components with state-of-the-art artificial intelligence. We will discuss its distinguishing features: it is Kubernetes-native, and it employs our work enabling features to be specified through a flexible logic that is propagated throughout the architectural components. We will discuss select use cases implemented in the platform and their underlying benefits. The audience will learn how to build or use a streaming solution based on Kubernetes and open source components.
4:20 pm - 4:40 pm
The foundations and best practices are pretty obvious when it comes to developing software projects. The toolbox is filled with well-known products and the methodologies are so straightforward that it's almost impossible to get lost when operating large-scale software projects. The ML world, however -- as much progress as it may have made -- is still lacking such best practices and standards when it comes to operating at scale.
ML engineers' daily operations involve pain points such as collaborating with a team of other ML engineers on the same project, deploying models to production at scale, or just managing the different components that make up the average ML project. Although the solutions are out there, they have yet to become broadly adopted or accepted best practices in the industry.
In this talk, we will break down the operational issues that the average ML engineer runs into on a daily basis. We will list the possible solutions and tools available today that can solve these challenges. Finally, we will list some ML operations methodologies that have the potential to become the next industry standard for developing an ML project at scale.
3:55 pm - 4:15 pm
Leaders are starving for data-informed decision-making to increase revenue, reduce costs and optimize processes. However, they often face steep challenges spanning organizational structures, institutionalized processes, and a plethora of never-ending technology to operationalize their data. So how do you feed this appetite? Regardless of your company size or data & analytics maturity, you need to feed it through a well-coordinated "ecosystem" of capabilities enabling value from the wisdom found in data.
Corwin will walk through how to successfully implement and mature the capabilities enabled by the people, process, and technology necessary for operationalizing data & analytics while continuing to feed leaders a steady diet of understanding of what has happened and what will happen so they can make critical business decisions today.
4:20 pm - 4:40 pm
In this talk, Target's Senior Director of AI, Subramanian Iyer, will describe the challenges faced in developing demand forecasts at scale in the retail business and in securing adoption of these forecasts as well as improving collaboration among data science, product management, and forecast user teams.
Competition for top new analytics talent is fierce. While tech and other corporate giants are indeed vacuuming up new grads from top schools, not all great students can or want to go that route. The challenge is to put yourself in a position to attract and land them. It can be done, even if you're not a well-known brand. In this session, you will learn what works from a leader of the analytics program ranked #2 in the world by QS for the past three years. Even if you are from a giant firm, you're still competing for talent. You will come away with ideas to help you gain an advantage!
MLOps provides a managed and optimized workflow for training, deploying and operating machine learning models and applications. More and more, ML models are now deployed at the Edge for use cases in manufacturing, logistics, healthcare and smart homes and cities. Edge deployments bring in unique challenges like resource constraints, limited bandwidth and unreliable networks. Existing cloud & enterprise MLOps needs to be adapted for Edge to overcome these constraints and maximize efficiency. This presentation will cover the unique challenges for machine learning and inference at the Edge, techniques and processes to overcome these challenges and how the MLOps workflow can be modified to adapt them. Data Scientists, MLOps Engineers and architects will benefit from understanding the unique challenges and best practices for ML at the Edge.
4:45 pm - 5:05 pm
Many organizations today are investing significant amounts of money in digital transformation, data analytics, and artificial intelligence (AI) to drive their business forward. However, despite these investments, many organizations struggle to execute their plans effectively and see meaningful return on investment (ROI). In this case study, Dr. Jennifer Schaff will present a framework for building a multi-year advanced analytics capability for a Fortune 500 Consumer Packaged Goods (CPG) company that had no centralized analytics capability and lacked a long-term data strategy. Prior to the implementation of the framework, the company's investments in data and analytics were haphazard and disjointed from ongoing initiatives across the enterprise. By implementing this framework, the company was able to realize significant savings in time and money and capture new customers, resulting in an ROI of more than 10x. Dr. Schaff will provide insights into the three key considerations that were critical to the success of this initiative and share best practices for implementing similar initiatives within your own organization.
5:10 pm - 5:30 pm
Based on case studies drawn from three industries – utilities, smart buildings/IOT, and logistics – Steven will show how following a repeatable development process helps get models consistently into production. The key takeaways include:
- Planning an AI/ML project: from finish to start
- How to work like a consultant and detect “actionable pockets”
- How accuracy, relevancy, and ease of implementation will make or break your deployment… in unexpected ways
The session will use examples in churn modeling, NLP, and predictive optimization to illustrate key points.
Predictive Analytics World for Business - Las Vegas - Day 2 - Wednesday, June 21st, 2023
Join us for a dynamic and entertaining keynote session on Data Storytelling with Gulrez Khan, Data Science leader at PayPal. Gulrez is known for infusing his presentations with humor and personal stories, making the learning experience both engaging and enjoyable. You'll be inspired by Gulrez's insights and experience as he guides you through the process of turning numbers into narratives. Discover how to craft compelling stories that bring your data to life, and learn how to share your insights in a way that engages, educates, and inspires your audience.
Enjoy some machine learning laughs with Evan Wimpey, a predictive analytics comedian (and we're not just talking about his coding skills). No data topic is off-limits, so come enjoy some of the funniest jokes ever told at a machine learning conference.*
* Note the baseline.
10:05 am - 10:25 am
While there is a lot of talk about the need to train AI models that are safe, robust, unbiased, and equitable - few tools have been available to data scientists to meet these goals. This session describes new open-source libraries & tools that address three aspects of Responsible AI. The first is automatically measuring a model's bias towards a specific gender, age group, or ethnicity. The second is measuring for labeling errors - i.e. mistakes, noise, or intentional errors in training data. The third is measuring how fragile a model is to minor changes in the data or questions fed to it. Best practices and tools for automatically correcting some of these issues will be presented as well, along with real-world examples of projects that have put these tools to use, focused on the medical domain where the human cost of unsafe models can be unacceptably high.
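One simple instance of the first aspect, measuring a model's bias toward a group, is the gap in positive-prediction rates across groups (demographic parity difference). The predictions and group labels below are invented for illustration, not from the session's tools:

```python
import numpy as np

# Demographic parity difference: the gap in positive-prediction rates
# between groups. Predictions and group labels here are invented.
def demographic_parity_diff(preds, group):
    rates = [preds[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

preds = np.array([1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
gap = demographic_parity_diff(preds, group)  # group a: 0.75, group b: 0.25
```

A gap of zero means both groups receive positive predictions at the same rate; dedicated fairness libraries report this and several related metrics.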
10:30 am - 10:50 am
Data scientists find insights that help lead to better understanding and better outcomes. When clients and managers come to us for help (and even when they don’t), we want to share our advice. While we should be free to share our recommendations, we need to be clear about what the data is telling us and what is based “only on our judgment”. Gelman et al. wrote “As we have learned from the replication crisis sweeping the biomedical and social sciences, it is frighteningly easy for motivated researchers working in isolation to arrive at favored conclusions—whether inadvertently or intentionally.” One senior business leader I know said, “If you have data, great; if we’re just going on intuition we can use mine.” This presentation will go through a number of examples and talk about the line between the data and judgment.
By 2025, more than 465,000 petabytes of data will be collected daily across the globe; however, only a fraction of a percent of this data is considered useful for analysis and models. To locate the most useful attributes, feature engineering is a vital skill for all data scientists and analysts. But how does a data scientist decide how to engineer the right features?
In this session, Brandon Southern explores:
- A framework for thinking like a business owner
- Developing a product and customer mindset
- Building new features that have saved millions of dollars in annual costs for top organizations
In the wake of the pandemic, many corporations have faced unique challenges regarding employee engagement and retention. Over the past three years, as a society, we have witnessed mandatory work from home and lockdowns, the Great Resignation and high turnover, and mandatory return to office. Each of these events has had its own consequences for employee engagement, well-being and happiness, and voluntary attrition. These events, coupled with events such as mergers and acquisitions, can amplify the effect of turnover in an organization. In this study, we assess the internal and external factors leading up to voluntary attrition under such circumstances using predictive modeling.
Machine learning thought leaders Dean, Karl, and Steven field questions from the audience about strategies for machine learning projects, best practices, and tips, drawing from their decades of experience as consultants and company executives.
Did my data change after a certain intervention? This is a common question with data observed over time. Classical statistical and engineering approaches include control charts to see if the series falls outside of the normal boundaries of expected data. A Bayesian approach to this problem calculates the probability that the data series changes at every point along the series. Bayesian change point analysis allows the analyst to evaluate a whole series and look where the highest probability of change occurred. Has the financial asset lost value after the recent financial report? Are the healthcare outcomes at this hospital better after our new process to help patients? Did the manufacturing process improve after upgrading the machinery? All these questions and more can be answered with these techniques.
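The idea of evaluating change probability at every point can be sketched in a few lines. This is a minimal, simplified version of the technique the session describes: it assumes a single mean shift, a known noise level, and a flat prior over split points, whereas a full Bayesian treatment would integrate over the unknown parameters:

```python
import numpy as np

def change_point_posterior(y, sigma=1.0):
    """Posterior over candidate change points, assuming one mean shift,
    known noise sigma, and a flat prior (a deliberate simplification)."""
    n = len(y)
    log_lik = np.empty(n - 1)
    for t in range(1, n):
        left, right = y[:t], y[t:]
        # Gaussian log-likelihood with each segment's sample mean plugged in
        log_lik[t - 1] = -(np.sum((left - left.mean()) ** 2)
                           + np.sum((right - right.mean()) ** 2)) / (2 * sigma ** 2)
    w = np.exp(log_lik - log_lik.max())  # stabilize before normalizing
    return w / w.sum()

# Synthetic series: mean shifts from 0 to 3 after observation 30
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0, 1, 30), rng.normal(3, 1, 30)])
posterior = change_point_posterior(series)
most_likely = int(np.argmax(posterior)) + 1  # candidate split index
```

The posterior peaks near the true shift, and its spread directly expresses the analyst's uncertainty about where the change occurred, which is the advantage over a simple control-chart flag.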
11:15 am - 11:35 am
People analytics lives at the intersection of statistics, behavioral science, technology, and the people strategy. To succeed in business, you have to understand and value people. All successful companies revolve around human needs. Business insights and decisions about human capital such as who to hire, what to pay them, what benefits to provide, whom to promote, and many more have a considerable unseen impact on the company's ability to meet customer needs, bottom-line performance, and reputation. Thanks to the prevalence of human resource information systems, plus the wide-scale accessibility of modern data collection, analysis, and visualization tools, human resources-related decisions can be made with data just like countless other business decisions.
11:40 am - 12:00 pm
The focus of this session is on the use of machine learning in the workplace. This session will begin with an overview of the myriad ways machine learning (ML) is used in the workplace. We will provide practical insights into how machine learning can assist employers with workforce analytics and decision-making. We will flesh out the legal challenges and potential legal risks that may be associated with using machine learning in the workplace. The speakers will also provide an overview of current and pending laws impacting the use of ML in the workplace and attendees will learn about the considerations and risks employers are weighing when deciding how to use ML in their workplace. The speakers will also highlight common themes and questions that employers raise when considering ML vendors and partners. The speakers will provide real life examples of how some employers have successfully utilized ML in legally compliant ways and the lessons learned from other employers who have seen mixed results.
ML’s great strength is that example cases are all you need to create a predictive model. The predictions work as long as the underlying process is not tampered with. But clients usually seek more: they yearn to understand the "data-generating machinery” in order to improve the outcome that the model predicts. Yet, this is dangerous without additional external information, including the direction of influence between variables. This talk illustrates how to achieve “peak interpretability” by using influence diagrams to model causal relationships, avoid mistaking correlation for causation, and quantify how outcomes will change when we manipulate key values.
Model explainability or interpretability is often demanded, posed as a requirement. But not always. Under what circumstances is it pragmatically necessary (or even legally required) -- and, when it is called for, what exactly does it mean? Would it suffice to explain each model prediction by showing what differences in inputs would have changed the prediction? Or must the model be "understood" globally? Is understanding how the model derives its predictions sufficient, even without the why -- that is, without causal explanations for the correlations it encodes?
Join this expert panel to hear seasoned experts weigh in on these tough questions.
3:30 pm - 3:50 pm
While the role of the CDAO has gained popularity and spread across industries, there still exists a lack of clear expectations as organizations struggle to deliver business value from their data investments. It is well-documented that the first 90 to 180 days is critical for any leader, and this pressure is even more acute when there’s a mismatch between organizational aspirations and reality when it comes to analytics maturity. Join Brian Sampsel, VP of Analytics Strategy at IIA, as he explores best practices for data and analytics leaders who are new in their role and practical guidance in measuring your D&A efforts beyond the honeymoon period.
3:55 pm - 4:15 pm
As more products incorporate ML, engineering leaders face a unique challenge. How do I incorporate ML experts into my organization? How can I support ML expert growth and development and drive the largest impact with my ML team?
In this talk we discuss how the most effective organizations use ML in their product development cycles. First, we discuss how business leaders can counterbalance the uncertainty of ML workflows with project management processes that maximize impact and reduce risk. Next, we explore centralized and embedded ML teams and learn how leaders can optimize the cross pollination of ML and domain expertise.
The data-verse is an ever-expanding space, and it is the task of data science to explore its fundamentals. The tools and techniques being employed are having incredible impact across many different business verticals. One interesting avenue is to take concepts from physics and apply them to data science problems. Physics at its essence focuses on reducing the complex physical world to a set of fundamental laws or axioms that govern it. In this talk, I will explore three business use cases where physics served as inspiration for solving complex problems. The first comes from marketing and product recommendations and adopts the concept of time dilation from relativity to solve a low-recommendation problem. The second comes from IoT GPS transportation devices and uses the concepts of spacetime and equivalence classes to characterize motion behavior. The last comes from logistics: developing an estimated-time-of-arrival model using a Gradient Boosting Regressor, focusing on using predictive motion for model convergence.
Predictive analytics can be applied to pricing car insurance to help insurance companies determine appropriate rates for individual policyholders. By analyzing large datasets of historical insurance claims and policyholder data, predictive models can be developed to estimate the likelihood of future claims based on various factors such as age, gender, driving history, location, credit score, and type of vehicle. More recently, insurers have also been able to use the rich datasets from telematics sensors. This session will provide an overview of how predictive analytics can be applied to pricing car insurance to help insurance companies make more informed pricing decisions and reduce risk, ultimately leading to improved profitability and customer satisfaction.
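As a toy version of the workflow this session describes, the sketch below estimates a policyholder's claim probability from a few of the factors mentioned (age, claims history, mileage) and converts it into a premium. The data, coefficients, claim cost, and loading factor are all assumptions for illustration, not actuarial figures.

```python
# Illustrative rating sketch (synthetic data, assumed costs): predict
# claim probability, then price as expected loss times a loading factor.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 5000
age = rng.uniform(18, 80, n)
prior_claims = rng.integers(0, 4, n)
annual_km = rng.uniform(2000, 40000, n)

# Synthetic ground truth: youth, past claims, and mileage raise claim risk.
logit = -3 + 0.03 * (40 - age) + 0.8 * prior_claims + annual_km / 40000
had_claim = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, prior_claims, annual_km])
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, had_claim)

avg_claim_cost = 4000.0   # assumed average severity
loading = 1.25            # assumed expenses + profit margin
new_policy = [[25, 1, 30000]]
p = model.predict_proba(new_policy)[0, 1]
premium = p * avg_claim_cost * loading
print(f"Estimated claim probability: {p:.2f}, indicated premium: ${premium:,.0f}")
```

In practice insurers model claim frequency and severity separately (often with GLMs or gradient boosting) and are constrained by regulation on which rating factors may be used.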
Tata Communications is a world-class leader in international wholesale telecommunications. We have built proprietary tools to manage voice and messaging transactions, including routing calls and messages to our thousands of suppliers. One such tool allowed us to make better routing decisions based on ML prediction models for both incoming attempts and supplier performance. The tool quickly allowed Tata Communications to become an efficient, profitable wholesale provider and to outperform our competitors. However, implementing it meant overcoming serious hurdles, not the least of which was adoption by commercial teams. In this talk, Mike Lawrence, Director of Business Intelligence, Operations and Transformation, will cover the challenges faced and how the company won the support of its Sales and Operations teams.
Today's AI tools can produce highly accurate results, yet advanced machine learning models often remain a black box. With businesses relying on AI more than ever, transparency is essential, and explainability needs to be part of the equation when building AI systems we can trust. Organizations should start by including explainability as a key principle within their responsible AI guidelines, applying FEAT (Fairness, Ethics, Accountability and Transparency) when building responsible AI products. In this talk, I will discuss how we can benefit from AI while minimizing risk by making ML models explainable for businesses.
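One common explainability technique (among many the talk may cover) is permutation importance: shuffle each input feature on held-out data and measure how much the model's accuracy drops, revealing which inputs a "black box" model actually relies on. A minimal sketch using scikit-learn's built-in breast cancer dataset:

```python
# Permutation importance: rank features by the accuracy lost when
# each one is shuffled, making a trained model more transparent.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature 10 times and average the accuracy drop.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
ranked = result.importances_mean.argsort()[::-1]
for i in ranked[:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```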
This talk focuses on the application of AI and machine learning technologies to health monitoring of jet engines and aircraft components, and specifically on how AI is pushing the envelope and changing the way we have traditionally thought about engine monitoring and fleet management. It will highlight how physics-based and business understanding must come together with AI-driven techniques to drive differentiated outcomes for airline customers, and will present examples of combining structured and unstructured data for predictive maintenance. The presentation will conclude with GE Aviation's lessons learned in this area over the past 10 years.
Workshops - Thursday, June 22nd, 2023
Full-day: 8:30am – 4:30pm PDT
Python leads as a top machine learning solution – thanks largely to its extensive battery of powerful open source machine learning libraries. It’s also one of the most important, powerful programming languages in general.
Full-day: 8:30am – 4:30pm PDT
This one-day session reveals the subtle mistakes analytics practitioners often make when facing a new challenge (the “deadly dozen”), and clearly explains the advanced methods seasoned experts use to avoid those pitfalls and build accurate and reliable models.
Full-day: 8:30am – 4:30pm PDT
Generative AI has taken the world by storm, scaling machine learning to viably generate the written word, images, music, speech, video, and more. To the public, it is by far the most visible deployment of machine learning. To futurists, it is the most human-like. And to industry leaders, it has the widest, most untapped range of potential use cases.
In this workshop, participants will get an introduction to generative AI and its concepts and techniques, including techniques for image, text, and 3D object generation. Participants will also learn how prompts can be used to guide and generate output from generative AI models. Real-world applications of generative AI will be discussed, including image and video synthesis, text generation, and data augmentation. Ethical considerations when working with generative AI, including data privacy, bias, and fairness, will also be covered. Hands-on exercises will provide participants with practical experience using generative AI tools and techniques. By the end of the workshop, participants will have a solid understanding of generative AI and how it can be applied in various domains.