Blueprints for Recommender System Architectures: 10th Anniversary Edition

Originally published on Amatriain's blog, Jan 29, 2023.

Ten years ago, we published a post on the Netflix tech blog explaining our three-tier architectural approach to building recommender systems (see below). A lot has certainly happened in the recommender systems space in the last 10 years. That's why, when I designed a Recsys course for Sphere a few months back, I thought it would be a great opportunity to revisit the blueprint.

In this blog post I summarize four existing architectural blueprints and present a new one that, in my opinion, encompasses all the previous ones.

At a very high level, any recommender system has items to score and/or rank, and a machine-learned model that does the scoring. That model needs to be trained on data obtained from the service where the recommender operates, in some form of feedback loop. The architectural blueprints we will see below connect those components (and others) in a general way while incorporating some best practices and guidelines.
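To make those components concrete, here is a minimal Python sketch (my own illustration, not code from the original post) of a toy scoring model, a ranking step over candidate items, and a feedback-logging hook standing in for the feedback loop. All class and function names here are hypothetical.

```python
# Illustrative sketch of the high-level components: candidate items, a learned
# scoring model, and feedback events that become training data for the next model.
from dataclasses import dataclass
from typing import List


@dataclass
class Item:
    item_id: str
    features: List[float]


class ScoringModel:
    """Placeholder for any machine-learned scorer (matrix factorization, a ranker, etc.)."""

    def __init__(self, weights: List[float]):
        self.weights = weights

    def score(self, user_features: List[float], item: Item) -> float:
        # Toy linear score; a real system would use a trained model here.
        return sum(w * (u + f) for w, u, f in zip(self.weights, user_features, item.features))


def recommend(model: ScoringModel, user_features: List[float],
              candidates: List[Item], k: int = 10) -> List[Item]:
    """Score and rank candidate items, returning the top-k."""
    ranked = sorted(candidates, key=lambda it: model.score(user_features, it), reverse=True)
    return ranked[:k]


def log_feedback(user_id: str, item_id: str, event: str) -> None:
    """Feedback loop: events logged here feed the training data for the next model."""
    print(f"user={user_id} item={item_id} event={event}")  # stand-in for an event pipeline
```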

The Netflix three-tier architecture

In our post ten years ago, we focused on clearly distinguishing the components that can be executed offline (i.e. not when the recommendations need to be served, but rather, say, once a day), those that need to be computed online (i.e. when the user visits the site and the recommendation is being served), and those somewhere in the middle, called nearline (i.e. components that are executed when the user visits the site but do not need to be served in real time). At that time, and still today in many cases, most of the big-data training of the algorithm was performed offline using systems such as Hadoop or Spark. The nearline layer included things like filtering in response to user events, but also some retraining capabilities such as folding-in and incremental matrix factorization training (see here for a practical introduction to the topic).
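As a rough illustration of that split (my own sketch, not Netflix's actual code; the class names and methods are hypothetical), the three tiers can be viewed as cooperating components: an offline batch trainer, a nearline updater that reacts to user events off the serving path, and an online server that scores candidates at request time.

```python
# Illustrative offline / nearline / online split for a recommender system.
from typing import Dict, List


class OfflineTrainer:
    """Runs periodically (e.g. once a day) on a batch system such as Hadoop or Spark."""

    def train(self, interaction_log: List[dict]) -> Dict[str, List[float]]:
        # Heavy batch training, e.g. matrix factorization; returns item factors.
        return {row["item_id"]: [1.0, 0.0] for row in interaction_log}


class NearlineUpdater:
    """Reacts to user events soon after they happen, but is not on the serving path."""

    def __init__(self, item_factors: Dict[str, List[float]]):
        self.item_factors = item_factors

    def on_user_event(self, user_id: str, item_id: str) -> None:
        # e.g. fold the new interaction into the factors incrementally,
        # or invalidate cached recommendations for this user.
        self.item_factors.setdefault(item_id, [0.0, 0.0])


class OnlineServer:
    """Executes at request time; must respond within a tight latency budget."""

    def __init__(self, item_factors: Dict[str, List[float]]):
        self.item_factors = item_factors

    def recommend(self, user_vector: List[float], k: int = 5) -> List[str]:
        scores = {
            item_id: sum(u * f for u, f in zip(user_vector, factors))
            for item_id, factors in self.item_factors.items()
        }
        return sorted(scores, key=scores.get, reverse=True)[:k]
```

The point of the split is latency and cost: the expensive learning happens offline, the latency-critical scoring happens online, and the nearline layer absorbs work that should react to fresh events but does not have to block the response.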

To continue reading this article, click here.
