Kpopalypse’s music theory class for dumbass k-pop fans: part 17 – introduction to acoustics

It’s the return of the Kpopalypse music theory series!  This post is all about acoustics!  So, what is acoustics, and why should you care?  This post probably has the answers!

So it’s time for an introduction to acoustics.  If you go and Google search acoustics you’ll find that it is technically “the science of sound”, but that’s a really unhelpful description as it could honestly apply to just about anything.  If we want to be really broad about it, literally every post in this entire series could be defined as “acoustics” with such a definition.  There are also various sub-fields of acoustics, such as bioacoustics, underwater acoustics, architectural acoustics and so on, so the term “acoustics” gets messy.  When I talk about acoustics for this post I’m just going to trim the topic down to the kind of things that you might reasonably give a crap about in your day to day life, and that may impact your ability to either make music yourself or enjoy your favourite stannable individual making music.  I’ll have some videos and links and so forth so if you want to go deeper into this topic yourself, you can do that, but the rabbit hole does go very deep so for those who don’t want to take too much out of their Chuu appreciation time, this post will just cover the basics.

A practical definition of acoustics, for what we want to talk about, might be “the science of sound as it travels from the sound source, to your ear”.  We’ve talked a lot in this series about the actual making of the sound, but sound has to get from wherever it is generated over to your ear, so you can hear it, so it has to travel – usually through air, but sometimes through other materials as well.  Acoustics therefore concerns the most effective way to facilitate that travel, so you can hear the sound as clearly as possible.  The further the sound has to travel, the more relevant the science of acoustics becomes, because the further the sound goes, the more things can potentially happen to it along the way.  Sometimes however you don’t want to hear the sound, so in addition to this, acoustics also concerns the science of how to stop sound from being heard when you don’t want it to be heard, such as a noisy neighbour, or Itzy’s “Mafia In The Morning”.  Obviously not having the sound be generated in the first place by shutting off the signal at the source is the best option for dealing with unwanted noise, but sometimes this option isn’t available so then we’re looking at the science of preventing sound traveling from where it is generated, to where you or others might be.  You can practice your knowledge of best-practice acoustics by not clicking the video below.

We also may want sound to be heard in some places but not other places.  The most obvious example here is in a recording studio.  A recording studio is usually split up into three areas.

  • The control room, where the only sound heard should be what’s coming from the speakers inside the control room, as controlled by the audio engineer
  • The recording room/s, where sound from instruments and vocals is recorded, and which are ideally completely isolated from the control room, and sometimes also from other instruments being recorded at the same time
  • The other rooms such as kitchen, lounge, toilet etc, where drugs are consumed and any sound is permissible because everyone is on drugs so it doesn’t matter, even “Mafia In The Morning” is probably acceptable in this context

The knowledge of acoustics can help someone building or designing a studio achieve the desired outcome of sound isolation, when it is wanted.  So let’s talk about how sound travels and how that might impact us if we were to attempt to build a soundproof environment, as a starting point for introducing some of the ideas of acoustics. 

As previously discussed in this series, all sound is just vibration.  The way we hear sound is that sound is created, it vibrates the air, and if that air is where we are it also vibrates the hairs inside our ears; our brain then senses these vibrations and does some magic special stuff to them and there we go, sound happens.  It’s helpful to know that vibration is measured in Hertz (Hz), where 1 Hz equals one vibration per second.  1 Hz isn’t enough to produce an audible sound however – for sound to be audible we need vibrations of at least 20 Hz (twenty vibrations per second) and at most 20,000 Hz, which is the general upper limit of human hearing (assuming ideal conditions).  This is why we can hear the wings of hummingbirds but not those of other birds – most hummingbirds can flap their wings at 20 times per second or more, therefore producing a vibration within the range of human hearing.  With other types of birds the vibration is still there, it’s just not fast enough to produce an audible hum.
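For the code-minded, here’s a tiny Python sketch of that audible range idea – the 20 Hz and 20,000 Hz limits are the general figures mentioned above, and the wingbeat rates in the example are just illustrative round numbers, not bird science:

```python
# Rough sketch: is a vibration rate within the general range of human hearing?
AUDIBLE_LOW_HZ = 20        # general lower limit of human hearing
AUDIBLE_HIGH_HZ = 20_000   # general upper limit (assuming ideal conditions)

def is_audible(frequency_hz):
    """Return True if a vibration rate falls inside the nominal audible range."""
    return AUDIBLE_LOW_HZ <= frequency_hz <= AUDIBLE_HIGH_HZ

print(is_audible(50))   # hummingbird-ish wingbeat rate -> True (audible hum)
print(is_audible(5))    # slower wingbeat of a bigger bird -> False (no hum)
```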

However, sound doesn’t just travel through air.  Because sound is just vibration, and anything can be vibrated (just ask Ichika in the header), this means that sound can in fact travel through anything at all.  Usually it’s air that matters the most, because generally speaking the hairs in our ears are surrounded by air, so that’s where they’re most directly getting the sound from – but say for instance if we were to dive into a pool of water, then our ears would be immersed in water and that water could still carry the sound vibrations to our ears.  We wouldn’t normally listen to sound underwater, but I mention this because it’s a way to prove that sound vibration can move through different substances, so for example if you were on the set of a terrible k-pop summer comeback video shoot, diving into the on-set swimming pool may not block out as much of the sound as you might hope.

Where this becomes relevant is when there is a substance between us and the sound source, besides the air, such as the wall of a room.  Even if a sound is being produced in a separate room, and there’s no air gap (such as an open window) to transfer the vibration easily from the sound source to your ear, some sound may still penetrate through the wall to the other side.  How much gets through may depend on factors such as:

* the intensity of the sound (loudness)
* the frequency of the sound (pitch)
* the material of the wall
* the thickness of the wall

To work out how these factors interact, let’s firstly talk about pitch and volume.  Pitch of a sound is determined by speed of vibration (Hz).  Loudness of a sound (dB) is related to strength of vibration – louder sounds don’t vibrate faster than softer sounds, but they vibrate wider, which means the molecules move further.  We can chart this like the following:

Along the top of the graph on the horizontal axis, we have time, measured in seconds, so here we are looking at two seconds of sound.  Under this, we have graphed two different waveforms, so we have two different sounds which are happening at the same time.  The top waveform is vibrating at a frequency of 20 Hz (twenty times per second) and at a volume level of 0.8 (80% of the volume of whatever our playback device is set to).  The bottom waveform is vibrating at a frequency of 50 Hz (fifty times per second) and at a volume level of 0.5 (50% of the volume of whatever our playback device is set to).  The bottom waveform therefore has faster vibrations than the top waveform, but it is also not as loud.
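If you want to recreate a graph like this yourself, here’s a minimal Python sketch (using the numpy and matplotlib libraries, which have nothing to do with the original image – they’re just common plotting tools) that draws the same two waveforms described above: 20 Hz at 0.8 of full volume, and 50 Hz at 0.5, over two seconds.

```python
# Minimal sketch: two sine waves over two seconds, matching the graph described above.
import numpy as np
import matplotlib.pyplot as plt

duration = 2.0                      # seconds of sound shown on the horizontal axis
t = np.linspace(0, duration, 4000)  # time points for plotting

wave_a = 0.8 * np.sin(2 * np.pi * 20 * t)  # 20 Hz at 0.8 of full volume
wave_b = 0.5 * np.sin(2 * np.pi * 50 * t)  # 50 Hz at 0.5 of full volume

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(t, wave_a)
ax1.set_ylabel("20 Hz @ 0.8")
ax2.plot(t, wave_b)
ax2.set_ylabel("50 Hz @ 0.5")
ax2.set_xlabel("time (seconds)")
plt.show()
```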

Keeping things as simple as I can (because this can get very complicated) it works like this:

Sound has a speed at which it travels.  Through dry air it’s 343 meters per second (m/s), but this can change depending on temperature, humidity and other factors.  Knowing the speed of sound through air, we can calculate the wavelength of the sound using the formula:

λ = v / f (wavelength = velocity divided by frequency)

If we’re calculating the wavelength of a tone of 50 Hz, that means our formula is:

343 m/s / 50 Hz = 6.86 meters wavelength

Of course most sound isn’t just made up of one single frequency, but a multitude of frequencies.  Higher frequencies have shorter wavelengths and lower frequencies have longer wavelengths, so to absorb all sound we’re looking at the following range:

343 m/s / 20,000 Hz = 0.01715 meters (1.715 cm) wavelength

343 m/s / 20 Hz = 17.15 meters wavelength
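If you’d rather let a computer do the division, here’s a small Python sketch of the same λ = v / f formula, using the 343 m/s dry-air figure from above (remember the real speed shifts a bit with temperature and humidity):

```python
# Sketch of λ = v / f: wavelength from the speed of sound and the frequency.
SPEED_OF_SOUND = 343.0  # meters per second, dry air at roughly room temperature

def wavelength_m(frequency_hz, speed=SPEED_OF_SOUND):
    """Wavelength in meters for a given frequency."""
    return speed / frequency_hz

for f in (20, 50, 20_000):
    print(f"{f} Hz -> {wavelength_m(f)} m")
# 20 Hz    -> 17.15 m
# 50 Hz    -> 6.86 m
# 20000 Hz -> 0.01715 m (about 1.7 cm)
```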

Sound waves lose energy as they move through different materials, as the molecules of the different materials rub up against each other.  If a material is thicker than the wavelength of the sound, then the entire sound will be absorbed.  It’s helpful to use the “bouncing ball” analogy when thinking about how sound absorption works.

Since lower frequencies have longer wavelengths (think of a bigger bouncy ball), this makes them harder to absorb, which is why when your annoying sibling plays stupid music that you hate in his or her room with the door shut, you can hear the bass but not the treble – the door and walls are absorbing the high frequencies but not the low bass, which is just going straight through everything.  To absorb all sound within the human hearing range completely, not just the high stuff but the low stuff as well, you would theoretically need a wall or door that’s 17.15 meters thick – which is obviously impractical.  However there are some caveats which make the problem of isolating sound easier.

Firstly, not much music actually gets down quite as low as 20 Hz, so perfect absorption all the way down to this frequency may not be required.  While some k-pop songs do make use of very low bass (sub-bass), large amounts of subs are actually the exception rather than the rule, and a sub-bass right down at the level of 20 Hz would be unusually low, because that’s the very lower limit of human hearing and there’s not much point having sound on a recording right down where it can barely be heard at all and where a lot of speaker systems can’t reproduce it all that well anyway.

Next, perfect absorption may not be required anyway given that most people don’t have perfect hearing.  Consider the dB scale and what the rough equivalent is in human hearing.

There’s a lot of charts out there like this, but this is a particularly good one because distance from the ear to the sound sources is specified – a key point that most dB rating charts miss.

The average person would probably struggle to detect much of a difference in volume between a quiet library at 40 dB and a quiet bedroom at 30 dB – that’s because the dB scale is logarithmic rather than linear, so the changes only get severe when you get to the top end of the scale.  Also if you’re someone who has a little bit of natural ringing in your ears (which is pretty normal especially as you get older, but can still affect you if you’re young depending on how much abuse you subject your ears to) you might not be able to hear much of anything below 30 dB.  So muting that 100 dB sound from your favourite k-pop disco comeback so it sounds like a 30 dB disco comeback might be enough.
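To put some numbers on “logarithmic”: every 20 dB of reduction means dividing the sound pressure by 10, so taking a 100 dB comeback down to 30 dB means the sound pressure hitting your ear is over 3000 times smaller (and the acoustic power ten million times smaller).  A quick Python sketch of that arithmetic:

```python
# Rough sketch: what a difference in dB means as a ratio of sound pressure and power.
def pressure_ratio(db_difference):
    """How many times larger the sound pressure is for a given dB difference."""
    return 10 ** (db_difference / 20)

def intensity_ratio(db_difference):
    """How many times larger the sound intensity (power) is for the same difference."""
    return 10 ** (db_difference / 10)

# Muting a 100 dB comeback down to 30 dB:
print(pressure_ratio(100 - 30))   # ~3162x less sound pressure at your ear
print(intensity_ratio(100 - 30))  # 10,000,000x less acoustic power
```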

Another important point is that sound loses energy as it moves from one surface to another.  Sound loses energy as it travels through any substance, no matter what – that’s why getting further away from a sound source makes it sound quieter – but when sound moves from one type of material to another, a large amount of energy gets lost in the transition.  Each time this happens there’s the potential for energy to be lost, which is why most recording studios use “floating” rooms where there’s space under the floor, and windows between the control room and the recording room with two layers of glass and air in between them.  This means that the sound has to make four transitions to get from one side of the barrier to the other, and it loses a bunch of energy during each step of the process – this is much more effective sound absorption than just a single barrier of one material.
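This post doesn’t go into the physics of why those transitions cost so much energy, but if you’re curious, the usual back-of-the-envelope tool is each material’s acoustic impedance: for sound hitting a boundary head-on, the fraction of energy that gets through is 4·Z1·Z2 / (Z1 + Z2)².  Below is a very rough Python sketch using approximate textbook impedance values for air and glass – it ignores resonance and the re-reflections that happen inside real layers, so treat it as an illustration of the principle rather than a prediction of what a real double-glazed studio window does.

```python
# Very rough sketch: energy transmitted across a boundary between two materials,
# using the normal-incidence intensity transmission coefficient
#   T = 4*Z1*Z2 / (Z1 + Z2)**2
# Impedance values below are approximate textbook figures (Pa·s/m).
Z_AIR = 415.0         # air at room temperature
Z_GLASS = 13_000_000  # ordinary glass (roughly)

def transmitted_fraction(z1, z2):
    """Fraction of incident sound energy that crosses a boundary between two media."""
    return 4 * z1 * z2 / (z1 + z2) ** 2

# One air-to-glass transition already lets only a tiny fraction of the energy through:
print(transmitted_fraction(Z_AIR, Z_GLASS))  # ~0.00013

# Chaining transitions (air -> glass -> air gap -> glass -> air) multiplies the losses.
# NOTE: naively multiplying like this ignores resonance and multiple reflections
# inside the layers, so it's an illustration of the principle, not a real prediction.
path = [Z_AIR, Z_GLASS, Z_AIR, Z_GLASS, Z_AIR]
total = 1.0
for z1, z2 in zip(path, path[1:]):
    total *= transmitted_fraction(z1, z2)
print(total)  # absurdly small; real windows do far better than this for the sound, but the idea stands
```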

So with all these factors in mind, sonically isolated rooms are very possible, depending on the material you build your rooms from.  Here’s a chart of absorption coefficients of different materials.

Absorption coefficients of different materials, approximate as this is also frequency-dependent.  A higher number means the material can absorb sound more effectively, with 1 representing maximum absorption.  Source/further reading here.
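To show roughly how a chart like that gets used: multiply each surface’s area by its absorption coefficient and add the results up, and you get the room’s total effective absorption (the bigger the number, the deader the room).  The surfaces and coefficient values in the Python sketch below are made-up examples in the same ballpark as a chart like the one above, not measurements of any real room.

```python
# Sketch: total effective absorption of a room from per-material absorption coefficients.
# Surfaces and coefficient values below are hypothetical/approximate examples.
surfaces = [
    # (description,          area in m²,  absorption coefficient 0..1)
    ("concrete walls",        60.0,        0.02),
    ("carpeted floor",        20.0,        0.30),
    ("acoustic foam panels",  15.0,        0.80),
    ("glass window",           2.0,        0.05),
]

# Effective absorption = sum of (area x coefficient); a coefficient of 1.0 would be a perfect absorber.
total_absorption = sum(area * coeff for _, area, coeff in surfaces)
print(f"total effective absorption: {total_absorption:.1f} m² of 'perfect absorber'")
# Swapping bare concrete for treated surfaces raises this number, which is
# what deadens the room in the studio-build video below.
```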

Absorbing sound isn’t just done because we want to isolate different rooms sonically and/or be kind to the neighbours, however.  It’s also important because different rooms have different sonic characteristics which can have an impact on any sound created in that space, so we want to be mindful of the nature of the sounds created inside any room used for music as well.  To this end, different surfaces not only have different absorption properties, but also different reflection properties, and this needs to be taken into account when thinking about acoustic spaces and their impact on sound.  To demonstrate this, listen to the man’s voice in the below video at the very start when he’s speaking inside a concrete room, and then skip to near the end of the video after he’s built a floating studio mixing room inside that same concrete room – the characteristics of his vocal tone are very different, because the concrete reflects a lot of the sound from his voice, whereas the wood and foam panels reflect relatively little.

In the above example the studio builder hasn’t just added deadening materials, but also made an effort to have uneven surfaces which reflect sound at an angle, and has even built the walls slightly uneven on purpose.  Bouncing the sound around at odd angles increases the deadening effect, because sound waves that bounce at 180 degree angles can reinforce each other, but this effect is less pronounced at more irregular angles.  Here’s a picture of Twice’s Jihyo in the studio vocal recording room (from this video):

We can see that there are two panes of glass between the vocal room Jihyo is in and the control room.  These panes have a huge air gap in between them, and they’ve also been set at an angle, so they’re not parallel to each other.  The room itself also doesn’t have 90 degree corners but filled-in ones – this is also to provide different ways for the sound to bounce around so that frequencies aren’t reinforced.  Below I’ve highlighted the angle of the glass panes, and the corner wall, to make this clearer.

That’s not to say that we always want rooms to have no echo – but we usually do.  In the early days of studio recording, rooms for recording actually had harder surfaces to deliberately generate some echo, to make the instruments or singers in them sound ‘sweeter’.  The top studios in the 1940s and 1950s had acoustic environments which gave just the right amount of echo to make a track sound nice, and these studios charged top dollar to record labels and artists for the use of their rooms, because back then that was the only way to generate such an effect.  However as technology advanced, devices that could produce echo and reverberation electronically were created, and the “live-sounding room” for recording fell out of favour – it was considered better to have an acoustically “dead” room (one with little or no echo) and then add the echoes, if needed, to the mix later.  If you want to know what a “live room” sounds like, just go into your own bathroom or shower at home – these days when a “live room” is actually needed for a recording, studios that don’t have a specific “live room” with hard reflective surfaces will often send singers into the studio bathroom to record their parts!  However it’s much more common practice these days for someone building a studio to make the room as “dead” as they can, and just digitally add the effects later if they want them.  The below video has some demonstrations of the effect of adding various dampening materials to deaden a room and also deflect sound waves.

The field of acoustics isn’t just relevant for recording studios, but also when a group is performing on a live stage.  Part of the reason why old churches were built completely from hard reflective surfaces was so that when people spoke or sang, the echoes helped carry their voices through the hall; the same also applied to any instruments in use, and it certainly gave those church pipe organs an extra ethereal quality.  Old opera houses were also built with the same style of acoustics for the same reason – the echoey reflective hall allowed the sound to carry out to the audience more easily, which meant that the singers didn’t need to push their voices as hard.  With the advent of modern PA systems that could amplify vocals and instruments electrically, this was no longer needed, so modern theaters have carpet floors and acoustic treatment to dampen reflections from the live sound system and prevent the echoes reinforcing the PA and creating feedback.  Modern churches also have much softer acoustics than churches built hundreds of years ago, because churches these days are more likely to host amplified rock bands.

Acoustics for venue design gets pretty involved.  Here’s a video that goes into more depth on this topic:

What the above video demonstrates is that it’s not just about absorption vs reflection – when sounds reach an audience member is also important, and sounds may reach the audience at different times depending on where those sounds are coming from (i.e. directly from the PA system, or from a room reflection etc), and also depending on where the audience member is sitting or standing in the audience.  Sometimes moving to a different location in the crowd can make a big difference to your audio quality (the best place is usually by the mixing desk, as the audio engineer is usually mixing so it sounds good from their position, and hoping that it also sounds good everywhere else).  Sound travel time and reflection also matter from a performance perspective.  I played a show recently where behind the seating at the rear of the venue was a flat wall that was completely parallel to the stage, and every time I played a note I could hear that note bouncing back to me half a second later from the wall behind the audience – needless to say this phenomenon (known as “slapback echo”) was a little distracting.
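If you want to check the maths on that slapback: the note has to travel from the stage to the back wall and then all the way back again, so using the same 343 m/s speed of sound from earlier, a half-second echo puts that wall somewhere in the region of 85 meters away.  A quick Python sketch of the arithmetic:

```python
# Sketch: how far away a parallel wall is, given the delay of its slapback echo.
SPEED_OF_SOUND = 343.0  # m/s, dry air (same figure used earlier in the post)

def wall_distance_m(echo_delay_s):
    """Distance to a reflecting wall: the sound travels there and back, hence the /2."""
    return SPEED_OF_SOUND * echo_delay_s / 2

print(wall_distance_m(0.5))   # ~85.75 m for the half-second echo described above
```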

In general, the larger the venue, the harder it is to get acoustics right.  Theater acoustics are usually decent, but where venue acoustics are often much more compromised is in multi-purpose stadium venues.  A venue that needs to host a sportball game one day and then a music concert the next day usually doesn’t have the space to accommodate specific acoustic treatments that suit music without also creating problems for the venue’s other applications.  Stage elements in multi-purpose stadiums are designed for maximum flexibility and being suitable for the widest array of uses, and music often gets the short end of the stick here as the biggest events that pack out stadiums the most frequently are usually not music events.  So if you went and saw a big k-pop event or some other music event in a large stadium and the sound was echoey and muddy and you felt like you couldn’t hear a lot of the music very clearly, or some details sounded very upfront and others sounded like they got a little bit ‘lost in the mix’, this is probably why.

Hopefully this has been a useful introduction to acoustics for you.  A lot of stuff here has been only briefly touched on, because it’s such a big area and it gets pretty deep and mathematical, but this should be enough for you to at least know what people are talking about when they refer to acoustics in relation to both live sound and in the studio, and may also give you some things to start thinking about if you’re considering building a studio, or working in live sound.  Kpopalypse will return!