Originally published in MIT News, September 18, 2019.
Rhythmic flashes from a computer screen illuminate a dark room as sounds fill the air. The snare drum sample comes out crisp and clean by itself, but turns muddy in the mix, no matter how the levels are set. Welcome to the world of modern music-making — and its discontents.
Today’s digital music producers face a common dilemma: how to mesh samples that may sound great on their own but do not fit into a song the way they originally imagined. One solution is to find and audition dozens of different samples, a tedious and time-consuming process.
“There’s a lot of manual searching to get the right musical result, which can be distracting and time-consuming,” says Justin Swaney, a PhD student in the MIT Department of Chemical Engineering, a music producer, and co-creator of a new tool that uses machine learning to help producers find just the perfect sound.
Called Samply, Swaney’s visual sample-library explorer combines music and machine learning into a new technology for producers. The top winner at the MIT Stephen A. Schwarzman College of Computing Machine Learning Across Disciplines Challenge at the Hello World celebration last winter, the tool uses a convolutional neural network to analyze audio waveforms.
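The article does not detail Samply’s architecture, but the core idea of a convolutional sample explorer can be illustrated with a toy sketch: convolve each waveform with a small filter bank, pool the activations into an embedding, and rank a library of samples by similarity to a query. Everything below — the filter bank, the sine-tone “library,” and the similarity ranking — is a hypothetical stand-in, not Samply’s actual model.

```python
import math
import random

random.seed(0)

def convolve(wave, filt):
    """Valid-mode 1-D convolution over a raw waveform."""
    n, k = len(wave), len(filt)
    return [sum(wave[i + j] * filt[j] for j in range(k))
            for i in range(n - k + 1)]

def embed(wave, filters):
    """Tiny conv-net-style embedding: convolve, ReLU, global average pool."""
    return [sum(max(v, 0.0) for v in convolve(wave, f)) / (len(wave) - len(f) + 1)
            for f in filters]

def cosine(a, b):
    """Cosine similarity between two embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb + 1e-12)

# A random, fixed filter bank stands in for trained CNN kernels.
filters = [[random.gauss(0, 1) for _ in range(16)] for _ in range(6)]

# Toy "sample library": one-second sine tones at different pitches.
sr = 4000
t = [i / sr for i in range(sr)]
library = {f"tone_{hz}Hz": [math.sin(2 * math.pi * hz * x) for x in t]
           for hz in (110, 220, 440)}
embeddings = {name: embed(w, filters) for name, w in library.items()}

# Rank the library against a query sample by embedding similarity.
query = [math.sin(2 * math.pi * 230 * x) for x in t]
q = embed(query, filters)
scores = sorted(((cosine(q, e), name) for name, e in embeddings.items()),
                reverse=True)
for s, name in scores:
    print(f"{name}: {s:.3f}")
```

In a real system the filters would be learned, the embeddings would be projected down to two dimensions for the visual explorer, and the library would hold actual drum and instrument samples; the pipeline shape, however, is the same.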