Workshop – Spark on Hadoop for Machine Learning: Hands-On Lab

Thursday, June 4, 2020 in Las Vegas

Full-day: 8:30am – 4:30pm

Intended Audience:
Analysts, data engineers, and data scientists who build predictive models with machine learning and wish to explore using Spark and Hadoop for the same.

James Casaletto

PhD Candidate

UC Santa Cruz Genomics Institute and former Senior Solutions Architect, MapR

Requirements for this Workshop:

Please complete the following setup before the Workshop starts:

  • Install a 1-node Hadoop cluster virtual machine on your laptop, following the instructions at this link: http://bit.ly/2Q9mKxh. Setup takes about an hour and must be completed before the Workshop begins; your environment can be saved for reuse.

Why Machine Learning Needs Spark and Hadoop

Standard machine learning platforms need to catch up. As data grows bigger, faster, more varied, and more widely distributed, storing, transforming, and analyzing it no longer scales with traditional tools. Instead, today’s best practice is to keep and even process data in its distributed form rather than centralizing it. Apache Hadoop and Apache Spark provide a powerful platform and a mature ecosystem for both managing and analyzing distributed data.

Machine learning projects can and must accommodate these challenges, namely the classic “3 V’s” of big data: volume, variety, and velocity. In this hands-on workshop, leading big data educator and technology leader James Casaletto will show you how to:

  • Build and deploy models with Spark. Create predictive models over enterprise-scale big data using the modeling libraries built into the standard, open-source Spark platform.
  • Model both batch and streaming data. Implement predictive modeling using both batch and streaming data to gain insights in near real-time.
  • Do it yourself. Gain the power to extract signals from big data on your own, without relying on data engineers, DBAs, and Hadoop specialists for each and every request.

This training program answers these questions:

  • What are the particular challenges of big data for machine learning?
  • When does Hadoop provide the greatest value?
  • How can streaming data be processed in Hadoop?
  • How does one build machine learning models with Apache Spark?

Hands-on lab (afternoon session):

  • Access to data sets, working code, and hands-on exercises in Python
  • Use algorithms from Spark’s machine learning library
  • Requires installing a pre-configured, 1-node Hadoop cluster virtual machine on your laptop before the workshop

Schedule

  • Workshop starts at 8:30am
  • Morning coffee break: 10:30am – 11:00am
  • Lunch (provided): 12:30pm – 1:15pm
  • Afternoon coffee break: 3:00pm – 3:30pm
  • End of the Workshop: 4:30pm

Coffee breaks and lunch are included.

Instructor

James Casaletto, Principal Solutions Architect, MapR Technologies

James Casaletto is a principal solutions architect at MapR Technologies, where he designs, implements, and deploys complete solution frameworks for big data. He has written and delivered courses on MapReduce programming, data engineering, and data science on Hadoop. He also teaches a graduate course on these topics in the computer science department at San Jose State University.
