As the field of machine learning (ML) continues to evolve and its impact on society and various aspects of our lives grows, it is becoming increasingly important for practitioners and innovators to consider a broader range of perspectives when building ML models and applications. This desire is driving the need for a more flexible and scalable ML infrastructure.
At Spotify, we strongly believe in a diverse and collaborative approach to building ML applications. Gone are the days when ML was the domain of only a small group of researchers and engineers. We want to democratize our ML efforts such that contributors of all backgrounds, including engineers, data scientists, and researchers, can leverage their unique perspectives, skills, and expertise to further ML at Spotify. As a result, we expect to see an increase in well-represented ML advancements at Spotify in the coming years — and the right infrastructure will play a crucial role in supporting this growth.
Spotify founded its ML platform in 2018 to provide a gold standard for reliable and responsible production ML. As an ML platform team, we aim to empower our users to spend less time maintaining bespoke ML infrastructure and more time solving business problems through novel model development.
Our centralized infrastructure now serves over half of our internal ML practitioners and ML teams. Internal research has shown, however, that our platform tools don't yet serve every type of ML practitioner equally well. While the majority of our ML engineers use our centralized tooling, fewer data scientists and research scientists do. We believe addressing the following user needs can help all kinds of ML innovators at Spotify: