Are You Listening? S2 Ep. 4 | Mastering for Spotify® and Other Streaming Services

Matthew Vere

How is Mastering for Streaming Services Unique?

Lossy codecs discard part of the musical information in a track to reduce file size, often reducing high- and low-frequency content.

We need to think strategically when preparing masters that are going to be turned into lossy files.

Another consideration is level: will the listener play back the audio peak normalised or loudness normalised?
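To make that distinction concrete, here is a minimal sketch of the two approaches. It assumes NumPy, assumes the loudness figure comes from an external BS.1770-style meter, and uses -14 LUFS purely as an illustrative playback target, not a figure quoted in the episode.

```python
import numpy as np

def peak_normalise(audio: np.ndarray, target_peak_db: float = -1.0) -> np.ndarray:
    """Scale audio so its sample peak sits at the target level in dBFS."""
    peak = np.max(np.abs(audio))
    gain = 10 ** (target_peak_db / 20) / peak
    return audio * gain

def loudness_normalise(audio: np.ndarray, measured_lufs: float,
                       target_lufs: float = -14.0) -> np.ndarray:
    """Scale audio so its integrated loudness sits at the target in LUFS.
    The measurement itself (BS.1770 filtering and gating) is assumed to come
    from a dedicated meter and is passed in as measured_lufs."""
    gain_db = target_lufs - measured_lufs
    return audio * 10 ** (gain_db / 20)
```

Peak normalisation only cares about the loudest sample; loudness normalisation cares about perceived level over the whole track, which is why two tracks with identical peaks can be turned up or down by very different amounts.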

When you own a piece of music, whether on CD, vinyl, or as a file, it exists at whatever resolution and level you bought it at. With streaming, audio is uploaded to a service and then distributed across different playback mediums: a laptop, a media player, a TV, a phone, or a tablet. Each stream to every one of those devices may be different, and each streaming service may handle the audio slightly differently. If you’re out in the wild listening on your phone, the streaming service may reduce the bandwidth, and you may hear a modified version of the audio. If you have a poor connection, the service may throttle the audio down to mono playback.

You don’t master for each streaming service in every playback environment. That would be insane.

Preparing Your Master for a Streaming Service

A natural side effect of converting a lossless format (WAV) to a lossy codec (MP3) is an increase in the peak level. In the example given, the peak rises by 0.5 dB when a WAV is converted to a 192 kbps MP3. It is best to prepare your masters for this before they go out to streaming services by setting the peak to -1 dB. The -1 dB ceiling helps reduce distortion and inter-sample peaks when the file is converted.
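As a rough illustration of that preparation step, the sketch below reads a lossless master and applies a static gain so the sample peak sits at -1 dB. It assumes the soundfile and NumPy libraries, and it uses plain gain rather than a true-peak limiter, which would be the more careful tool for a real delivery master.

```python
import numpy as np
import soundfile as sf

def apply_peak_ceiling(in_path: str, out_path: str, ceiling_db: float = -1.0) -> None:
    """Read a lossless master and, if its sample peak exceeds the ceiling,
    attenuate the whole file so the peak lands at the ceiling."""
    audio, rate = sf.read(in_path)
    peak_db = 20 * np.log10(np.max(np.abs(audio)))
    if peak_db > ceiling_db:
        audio = audio * 10 ** ((ceiling_db - peak_db) / 20)
    sf.write(out_path, audio, rate, subtype="PCM_24")

# e.g. apply_peak_ceiling("master.wav", "master_streaming.wav")
```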

Loudness Implications for Streaming

You can’t anticipate whether the listener will have loudness normalisation turned on or off in their streaming service of choice. We don’t know if a listener will go from your song to the next with the levels matched or not. Because of this uncertainty, the considerations become:

- Do I have to push my track up so that it sounds as loud as the next song?
- If I increase the level and the streaming service then turns it back down, is my track going to sound worse?
- Have I made the level so high that it has damaged the audio, so that when it gets turned down that damage becomes even more apparent?
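One way to see the turn-down in practice is to measure a master’s integrated loudness and compare it to a playback target. This sketch assumes the third-party pyloudnorm and soundfile libraries, and again uses -14 LUFS only as an illustrative target; the actual figure varies by service and is not quoted in the episode.

```python
import soundfile as sf
import pyloudnorm as pyln

def playback_gain_under_normalisation(path: str, target_lufs: float = -14.0) -> float:
    """Estimate how much a loudness-normalising service would turn a track
    up or down: the difference between the playback target and the track's
    integrated loudness (BS.1770)."""
    audio, rate = sf.read(path)
    meter = pyln.Meter(rate)
    loudness = meter.integrated_loudness(audio)
    return target_lufs - loudness

# A master pushed to -7 LUFS integrated would report roughly -7 dB here:
# the harder it was limited, the more it gets turned down on playback,
# while any limiting distortion stays baked into the file.
```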

Many use loudness normalisation as an argument for not pushing level at all. Jonathan thinks that’s a mistake. If we use any standard playback level as an arbitrary way of defining the artistry of our work, we risk making mistakes, or at least not making tracks sound as good as they otherwise could.

Make a track sound as good as possible, at the highest level at which it still sounds its best.

There’s no single number Jonathan uses; it varies by artist and genre.