Should you normalize audio?

Normalization is a word that gets thrown around frequently in the music industry. If you have heard it before, you will know it has something to do with the volume of a sound.

There is always conflict between producers and artists about how loud a song should be. Artists want their songs to sound louder than their competitors' when played through a set of speakers. This has resulted in the whole industry pushing toward ever more loudness.

When you look at various genres of music, normalization is seldom the first thing people think about when making their music loud.

Whenever loudness is mentioned, the only thing people talk about is limiting. Yet normalization can be an awesome tool for perfecting a track and helping push a song to its absolute limit.

There are times when the process of normalization is brought into question, as it can result in noise and distortion. The best way to use normalization is to find the places where it is most needed and use it there instead of the more traditional mixing techniques, as this article will explain in more detail.

Yes, you should absolutely normalize your audio when a mix calls for it. Normalization raises the track to its optimal volume and keeps that volume consistent throughout the song. Keep in mind that you should not apply limiting to your songs after normalization, as this can result in distortion.

Knowing the definition of normalization is one thing, but understanding how it works and how it can be used as an effect to improve your songs will have you using it much more in your music production efforts.

In this article I will walk you through various ways normalization is used in the audio production industry, and how you can take advantage of it. Let’s get started.

What is normalization? 

Normalization is a process that raises the level of any audio to a pre-defined maximum without clipping. When audio is normalized, your software searches the file for its highest peak and applies the gain needed to move that peak to the defined maximum level.
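
To make the mechanism concrete, here is a minimal sketch of peak normalization in Python. It assumes the audio is already loaded as a NumPy float array in the range -1.0 to 1.0; the function name and target value are illustrative choices, not a standard API.

```python
import numpy as np

def peak_normalize(samples: np.ndarray, target_peak: float = 1.0) -> np.ndarray:
    """Scale an audio buffer so its loudest sample lands on target_peak.

    samples: float audio in the range [-1.0, 1.0]
    target_peak: the pre-defined maximum (1.0 corresponds to 0 dBFS)
    """
    peak = np.max(np.abs(samples))
    if peak == 0:
        return samples  # silence: nothing to scale
    return samples * (target_peak / peak)
```

Because every sample is multiplied by the same factor, nothing can exceed the target, which is why normalization by itself never clips.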

Clipping is a music producer's and audio engineer's worst enemy, because clipping removes actual content and detail from the song.

Normalization is one of the most important techniques to learn when it comes to mastering a song. It is one of the oldest techniques used to make a song louder and clearer on any speaker system. Newer techniques, such as limiting and clipping, came along later.

Limiting and clipping are used extensively in today’s world, whereas normalization is being used less and less.

People now use digital software to mix and master their songs, and the use of hardware normalization has diminished over the last 20 years.

One of the reasons people do not use normalization is that you cannot push the song very far with it. Normalization is about making the song sound loud without clipping it, not even a little, so it can only raise the level until the loudest peak hits the ceiling. This makes the process ineffective when you want really loud sounds.

This is why limiting is preferred over normalization in today's music production scene. Even digital audio productions and film compositions make use of limiting instead of normalization in their production work.

Still, normalization has its time and place in the industry when it comes to mixing a song. There are plenty of circumstances where it is better to normalize a song than to limit or clip it.

How does normalization work? 

Normalization is a process that helps to completely avoid clipping in a song. Sometimes a song, as composed, exceeds the volume ceiling required for mastering.

When this happens, producers try to reduce the volume of individual elements, or, if that is not possible, compress them to make them fit in the mix.

Sometimes neither is possible, and the track starts to clip. This is where normalization comes into play. It eliminates clipping by scaling the whole track so that its loudest peak sits exactly at the ceiling, leaving the relationship between the quieter and louder sounds untouched.
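
In practice that scaling is usually reasoned about in decibels. The sketch below, again Python with NumPy, computes how much gain (positive or negative) would move a track's peak to a chosen ceiling; the -1.0 dBFS default is a common safety margin I am assuming here, not a rule from this article.

```python
import numpy as np

def normalization_gain_db(samples: np.ndarray, target_db: float = -1.0) -> float:
    """Gain in dB needed to move the peak of `samples` to target_db dBFS."""
    peak = np.max(np.abs(samples))
    if peak == 0:
        return 0.0                      # silence: no meaningful gain
    peak_db = 20 * np.log10(peak)       # current peak in dBFS
    return target_db - peak_db          # positive = boost, negative = cut

# Applying the gain is a single multiplication over the whole file:
# normalized = samples * 10 ** (normalization_gain_db(samples) / 20)
```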

Normalization was first developed in hardware systems such as loudspeakers, to increase the volume at the end of the delivery chain and make playback sound loud. Later it made its way into production systems.

Normalization became the go-to tool for producers in the late '80s and '90s for increasing the loudness of a song. This is part of why so many songs from that era have so little dynamic range.

This all changed the moment software instruments and audio production software came into the spotlight. They transformed the way mixing is done and largely put normalization to rest.

This has been the case for the last twenty years. Still, inside digital production systems, normalization is used to make individual tracks sound loud and crystal clear without clipping.

Disadvantages of normalization 

The first disadvantage you will face when you bring normalization into the mix is that it cannot push a song to its limits in terms of the loudness levels expected by today's streaming standards.

Normalization techniques come from hardware systems and are not on par with limiting and clipping in this respect.

Limiting and clipping offer more to the mixing and mastering engineer in terms of making a song sound loud when compared to normalization. The main place where normalization makes more sense is when there is a little room to improve the loudness of the track without clipping and causing distortion.

This is one place where you can apply normalization to boost the quieter parts of the song and bring them on par with the other areas of the song.

Another disadvantage is that, whenever you have a song with very little headroom, you cannot boost it with a limiter without losing detail. This is where you would expect normalization to come to the rescue.

That is not always the case, because normalization offers no way to increase the RMS level or reduce the dynamic range of the song, which limits how much it can help here.

What is the best dB level for music?

The music we listen to on streaming platforms like Spotify and Apple Music peaks at about 0 dBFS at the highest and doesn't cross that limit.

In terms of loudness, the value is around -14 LUFS. LUFS (Loudness Units relative to Full Scale) is the unit used to measure the loudness of a sound on digital platforms.
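
If you want to check these numbers on your own masters, one option is the open-source pyloudnorm library, which implements the ITU-R BS.1770 loudness measurement that LUFS is based on. A rough sketch, assuming pyloudnorm and soundfile are installed and using a hypothetical file name:

```python
import soundfile as sf          # third-party: pip install soundfile
import pyloudnorm as pyln       # third-party: pip install pyloudnorm

data, rate = sf.read("mix.wav")             # hypothetical file name
meter = pyln.Meter(rate)                    # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)  # integrated loudness in LUFS
print(f"Integrated loudness: {loudness:.1f} LUFS")

# Loudness-normalize toward the common -14 LUFS streaming target
normalized = pyln.normalize.loudness(data, loudness, -14.0)
```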

Should I normalize before mastering? 

It depends on how the song is mixed at the moment. If the song sounds open and has lots of low-volume regions, it is best to normalize it.

If the song has already been pushed to its limits in mixing, it's better to skip normalizing and go straight to mastering.
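
One quick way to make that call is to measure how much headroom the mix actually has. The sketch below does this in Python; the file name and the 3 dB threshold are assumptions for illustration, not an industry rule.

```python
import numpy as np
import soundfile as sf                        # pip install soundfile

data, rate = sf.read("premaster.wav")         # hypothetical file name
peak = max(np.max(np.abs(data)), 1e-12)       # guard against pure silence
headroom_db = 0.0 - 20 * np.log10(peak)       # distance from 0 dBFS

if headroom_db > 3.0:                         # assumed threshold, tune by ear
    print(f"{headroom_db:.1f} dB of headroom: normalizing first may help")
else:
    print(f"Only {headroom_db:.1f} dB of headroom: go straight to mastering")
```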

Why do we normalize?

Normalization is done to make sure every element in the song sounds balanced.

This process takes very quiet regions and raises their volume to a level that matches the louder portions of the song.

This can be very useful for beginners in music production, as it helps balance a song so that it sounds good on any speaker system.
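
For example, a beginner-friendly way to balance elements is to peak-normalize every stem to the same ceiling before mixing. A sketch, assuming a handful of hypothetical stem files on disk:

```python
import numpy as np
import soundfile as sf                            # pip install soundfile

stems = ["drums.wav", "bass.wav", "vocals.wav"]   # hypothetical file names
TARGET = 10 ** (-1.0 / 20)                        # -1 dBFS ceiling as a linear factor

for name in stems:
    data, rate = sf.read(name)
    peak = np.max(np.abs(data))
    if peak > 0:
        data = data * (TARGET / peak)             # same gain across the whole stem
    sf.write(name.replace(".wav", "_norm.wav"), data, rate)
```

Normalized stems start from a level playing field, but you will still set the final balance between them by ear with the faders.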

Why does my song sound weird on Spotify?

There are many reasons an uploaded song can sound weird when played on Spotify. One of the main reasons is the loudness normalization and limiting that Spotify applies on its platform, which can push your song to its limits.

This causes distortion and loss of quality, and you will not hear the same sound you heard in your audio production software. This is why your master's loudness should sit around -14 LUFS.
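
You can estimate how much Spotify will adjust your track with simple arithmetic: the platform's default target is the -14 LUFS mentioned above, so the adjustment is the difference between that target and your measured loudness. The measured value below is just an assumed example.

```python
measured_lufs = -9.5                     # assumed example: a loud master
spotify_target = -14.0                   # default playback loudness target

adjustment_db = spotify_target - measured_lufs   # negative = turned down
print(f"Expected playback adjustment: {adjustment_db:+.1f} dB")
```

A master at -9.5 LUFS would be turned down by about 4.5 dB at playback, which is why pushing far past -14 LUFS buys little extra loudness on the platform.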

Conclusion

Normalization is an important technique in the sphere of music production and mastering. When mixing an audio track, it is important to draw on everything you have learned about mixing and mastering.

Whether or not you use a particular technique comes down to trial and error. That is, unfortunately, the best way to learn.

Through experience, you will understand which techniques work and which don't.

If something is visual, it can be easier to explain. With audio, however, it's really hard to point at things and say, "if this happens, do that."

The best way to learn normalization is to try it on every track and listen to what it does.

If you are starting out as a mixing and mastering engineer, begin by learning effects and plugins before you move on to advanced functions like normalization and limiting.
