Kpopalypse’s music theory class for dumbass k-pop fans: part 7 – timbre, natural and synthesised

It’s time for another episode of Kpopalypse’s music theory class, for curious caonimas interested in musical things!  This time, Kpopalypse takes a look at timbre!  This post isn’t going to cover much music reading and notation like the last few posts have done.  This is a post about sound on a more scientific level, so there’s going to be lots of science in this post.  I hope you’re ready for lots of science!

A quick recap of some trufax: all sound is vibration of molecules, but not all vibration of molecules makes an audible sound.  For instance, a microwave oven vibrates molecules to produce heat, but you can’t hear these vibrations.  Yes microwave ovens do make a noise, but that’s the noise of the motor that spins your pizza around so it gets vibrated evenly and some parts aren’t too cold, not the sound of the actual microwaves, which are silent.

The speed at which vibrations happen is measured scientifically in Hertz (Hz) but is measured musically using musical pitches.  Apart from talking about 440Hz for orchestral tuning purposes (or 432Hz if you’re a dirty disgusting hippy who thinks that the 432Hz frequency has some magical properties or whatever, yes this is a real debate among retarded musicologists who smoke too much weed) most musicians don’t bother thinking about vibration frequencies too much.  The range of human hearing is generally from 20Hz to 20kHz, but most musical instruments don’t explore this full range, and most human ears can’t hear this full range either.  A sound with many vibrations per second sounds high, and a sound with fewer vibrations per second sounds low.  Here’s a sweep through the frequency range of human hearing, which you can use to test your own hearing, but keep in mind that if you’re playing this video on shitty speakers (such as those on a laptop or cheap earbuds), the speakers themselves may not be able to reproduce all the frequencies, especially those at the very bottom and top.
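For the extra-sciency caonimas, the relationship between musical pitch and vibration frequency can be sketched in a few lines of Python.  This assumes standard equal temperament tuned to A = 440Hz; `note_frequency` is just a helper name I made up for illustration, not some official music-science API:

```python
import math

def note_frequency(semitones_from_a4, a4_hz=440.0):
    """Frequency of a note a given number of equal-tempered semitones
    above (positive) or below (negative) the orchestral tuning A."""
    return a4_hz * 2 ** (semitones_from_a4 / 12)

# A4 itself, and the A one octave up (twice the vibrations per second)
print(note_frequency(0))    # 440.0
print(note_frequency(12))   # 880.0
```

Note how going up an octave simply doubles the frequency, which is why the formula is a power of 2 rather than anything linear.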

So now we know what makes a note sound higher or lower, but what makes the same note sung by two different people or played on two different instruments sound “different” even though they’re “the same”?  The specific quality of what makes a note on one instrument (like a flute) sound different to the same note on another instrument (like a violin) is called timbre (which is actually pronounced “TAM-ber”, not “TIM-ber”).  You probably already knew this, but what you might know less about is how timbre actually works, and for this we have to look at more science.

The tone in the video above is what’s called a “sine tone” or “sine wave”, which will be a familiar term to all you sciency folks.  This is a completely “pure tone”, this means that it’s just one vibration of a frequency with no extra added stuff (unlike those bullshit 432Hz relaxation videos where they whack an entire tune over the tone to make you think you’re getting in touch with your “healing power” or whatever dicksucking bullshit).  Here’s another one of these fun sine waves.

The YouTube description for this video says that you can use it “for meditation, DNA repair, (and) DNA activation”.  Do you feel your DNA getting activated as you listen?  Settle down Armys, we’re talking about a person’s actual DNA here, not some mediocre song that sounds no different to every other boy group k-pop song released in the last 5 years.  Anyway, maybe you can feel the magic of your DNA strands repairing or maybe you can’t, but what you almost certainly are feeling after a fairly short listen to the above tone is a very strong sense of boredom.  Sine tones aren’t actually all that interesting to listen to on their own (without drugs).  Part of the reason why (besides the obvious reason that not many humans besides the aforementioned BTS fans really want to listen to ten hours of anything) is that the vast majority of sounds produced by vocals, instruments and nature don’t sound anything like a sine tone, because they’re not made up of one vibration, but multiple complex vibrations working together.

The first person to discover that sound might consist of more than one vibration was probably* an ancient Greek guy called Pythagoras, also known as “that triangle guy” to any high school maths student.  Yes Pythagoras liked triangles a lot, maybe too much, but he liked music even more, and he actually had his own religious cult devoted to the study of music, and maybe triangles, and perhaps some other things too.  Pythagoras noticed that when a string was strung between two points, and was then plucked, a sound was produced, but that placing a finger gently at various places along the string while it was vibrating, or while a vibration was being triggered, did not stop the vibration completely but only partially, leaving a higher note or “overtone” to ring out.

These tones correspond to particular musical pitches: the 2nd harmonic sounds an octave above the fundamental, the 3rd harmonic an octave plus a perfect fifth, the 4th two octaves, the 5th two octaves plus a major third, and so on up what’s called the “harmonic series”.

This correlation is not coincidental, and I’ll get into how and why harmonics correlate to musical pitches in a future post, as it’s a bit off-topic where discussion of timbre is concerned.  Timbre is concerned with the sound characteristic of an instrument, and if you’d like to know what these overtones sound like on an instrument, watch the following video at 3:24:

The guitar has a very strong-sounding second harmonic, which can be heard in the video, where the note produced at the 12th fret is reasonably loud.  The notes produced later in the video at the 7th fret (3rd harmonic) and 5th fret (4th harmonic) are quieter.  However when a guitar string is played, ALL of these vibrations are in fact heard together, and it’s this specific combination of vibrations which is part of what makes the guitar sound “like a guitar” to the human ear.  If you removed every single vibration from the string except the “fundamental” pitch, then the guitar would sound just like a sine wave, which is a fundamental pitch only.
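The maths behind the fretboard positions in the video is dead simple: the nth harmonic vibrates at n times the fundamental frequency.  A minimal Python sketch (the `harmonic_series` name is mine, and I’m using the guitar’s open A string at 110Hz as the example):

```python
def harmonic_series(fundamental_hz, count=8):
    """The first `count` harmonics of a vibrating string:
    the nth harmonic vibrates at n times the fundamental frequency."""
    return [n * fundamental_hz for n in range(1, count + 1)]

# Open A string on a guitar (110 Hz): the 2nd harmonic is the octave
# (12th fret), the 3rd is an octave plus a fifth (7th fret), and the
# 4th is two octaves (5th fret).
print(harmonic_series(110, 4))  # [110, 220, 330, 440]
```

Touching the string at the 12th fret (its halfway point) silences every vibration except the ones with a node at that spot, which is why only the 2nd, 4th, 6th etc. harmonics survive there.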

Here’s a video of someone playing harmonics on the flute, starting with the fundamental, then the 2nd, 3rd, 4th harmonic and so on.  What you’ll notice with both methods shown (“overblowing” vs “mouth shaping”) is that the 3rd and 4th harmonics are particularly strong-sounding, while the 2nd harmonic is less so, but ALL the first few harmonics when isolated are at least as loud as the fundamental pitch, whereas with the guitar the fundamental pitch is much louder than the harmonics.  (This is not to do with the extra force required for “overblowing” as he achieves the same result with the second example – it’s because of the way harmonic reinforcement works inside open tubes, which is quite a rabbit hole to go down, here’s some further technical reading if you dare.)  These differences in volume of the harmonic content are part of what makes the flute sound “like a flute” and not “like a guitar”, as in both cases the harmonics are present even when the “regular” notes are being played, and colour the listener’s impression of the overall sound of the instrument.
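This “different loudness recipe per instrument” idea is exactly what additive synthesis does: add sine waves at the harmonic frequencies, each with its own volume.  Here’s a hedged Python sketch; the function name and the amplitude numbers for the two “instruments” are invented for illustration only, not measurements of any real guitar or flute:

```python
import math

def additive_sample(t, fundamental_hz, harmonic_levels):
    """One sample (at time t seconds) of a note built by adding
    harmonics together.  harmonic_levels[n-1] is how loud the
    nth harmonic is, on a 0.0-1.0 scale."""
    return sum(
        level * math.sin(2 * math.pi * n * fundamental_hz * t)
        for n, level in enumerate(harmonic_levels, start=1)
    )

# Two made-up "instruments" playing the same 110 Hz note with
# different harmonic recipes - same pitch, different timbre:
guitar_ish = [1.0, 0.7, 0.3, 0.2]   # strong fundamental and 2nd harmonic
flute_ish  = [1.0, 0.4, 0.9, 0.8]   # weaker 2nd, strong 3rd and 4th
```

Feed either recipe through `additive_sample` over a few thousand samples and you get two waveforms at the identical pitch that nonetheless sound like different instruments, which is the whole point of this post in one function.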

The human vocal cords are also a musical instrument, and it’s also possible to isolate and emphasise harmonic overtones purely with the voice, which is done in the traditional vocal music of Tuva and Mongolia.

The singer here sounds like he’s singing two notes at once, but what he’s actually doing is keeping the fundamental pitch static while changing between emphasising different harmonic overtones to create the “melody” on top of the fundamental “drone note”.  In a sense, yes it is more than one note at the same time, but then so is the singing of every singer, in the same sense that a fundamental pitch of a guitar or a flute is also a collection of the fundamental pitch plus overtone notes.  Even human speech isn’t exempt – all that speech really is, is the manipulation of overtones to create different sound timbres that convey different meanings.  The Tuvan singer is just controlling which overtones have more emphasis in an unusually steady and precise manner which makes these tones quite obvious to the ear – but similar, less obvious overtones are also present in any other singing, such as this:

No wonder T-ara are so big in Mongolia.  Makes sense now, right?

Of course, harmonics aren’t the full story.  When assessing what makes an instrument sound the way that it does, we also have to consider the aspect of timbre over time.  All notes have a start and a finish, and depending on the process of creating a note, the timbre of the note may change during this process.  For instance a note on the guitar starts with a loud attacking sound as the guitar plectrum or finger hits the string, then the note settles on a softer vibration which gradually decays over time as that vibration loses energy and becomes gradually weaker.  The sound of the guitar may also change depending on whether the player is using a plectrum or just bare fingers (which will slightly change the amount of harmonic overtone content produced – a plectrum tends to have a brighter sound, which means more high harmonic content), as well as exactly how the notes are being played: there are several techniques that manipulate the timbre of the notes and the way they are shaped (such as palm muting, scraping the strings, the adding of distortion and feedback to electric guitar, etc).  In addition, the guitar is a polyphonic instrument – it can play more than one note at a time, and the sound of different notes on the same instrument interacting is a key part of the guitar’s sound.

On the other hand a flute has arguably fewer playing techniques traditionally available, and is also a monophonic instrument, but can have a much more varied progression of any single note over time: the player can choose to start the note softly or sharply, with a variety of blowing techniques, and sustain it for as long as their breath can manage.  There’s no default “hard attack, then decay” waveform shape, such as the guitar has, that the flute player is constrained by due to the physics of note generation on the instrument.  Also, as the flute is interacting closely with the breath of the player, this “breathiness” also forms a characteristic part of the instrument’s sound.

With the advent of music synthesis, no limitations exist, at least theoretically.  Every combination of fundamental frequency and harmonics is possible, as well as every combination of amplitude and harmonic variance over time, all of these can be programmed.  However it’s often the objective of synthesis to simulate “natural” sound to some degree, so synthesisers are almost always programmed with facilities to mimic the expression of sounds of other instruments as well as sounds in the natural world.  As it happens, programming harmonic content is relatively easy compared to dealing with the “time factor”.

The ADSR waveform model shown above is an early modelling tool used by synthesisers to simulate different types of notes and sounds, and it still forms the basis of a lot of music synthesis today.  This model works well for mimicking a fairly simple sound such as a piano, where a specific note is hit and then decays quickly, but also experiences a moment of sustain before the key is eventually released.  However this model isn’t as useful for flutes, clarinets and other instruments that don’t have such a rigid relationship between time and volume.  It’s incredibly tricky in particular to synthesise a violin, due to the physically complex action of bowing and fingering the instrument – there’s an almost infinite variety of ways in which a note can be manipulated by the player, and a (good) violinist has very precise control over every moment of each note.  The timbre of the violin can’t really be separated from this quality of extremely minute variance that alters both volume over time and harmonic content over time, and as a result, even with the best modern technology no synthesiser has ever been able to quite capture the unique timbre that a professional violinist can bring to their instrument.
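For the curious, the ADSR model is simple enough to sketch in a few lines of Python.  This is a simplified illustration: in a real synthesiser the sustain stage lasts until the key is actually released, whereas here a fixed `sustain_time` parameter stands in for how long the key is held, and all the names are my own:

```python
def adsr_level(t, attack, decay, sustain_level, sustain_time, release):
    """Volume (0.0-1.0) of a note t seconds after it starts, using the
    classic four-stage ADSR envelope: Attack, Decay, Sustain, Release."""
    if t < attack:                    # ramp up from silence to full volume
        return t / attack
    t -= attack
    if t < decay:                     # fall from full volume to sustain level
        return 1.0 - (1.0 - sustain_level) * (t / decay)
    t -= decay
    if t < sustain_time:              # hold steady while the "key" is down
        return sustain_level
    t -= sustain_time
    if t < release:                   # fade out after the key is released
        return sustain_level * (1.0 - t / release)
    return 0.0                        # note over, caonima

# A piano-ish note: fast attack, quick decay, modest sustain level
print(adsr_level(0.005, 0.01, 0.2, 0.4, 1.0, 0.3))  # mid-attack: 0.5
```

Multiply each audio sample of a raw waveform by `adsr_level` at that sample’s time and a buzzing sine suddenly behaves like a plucked or struck note, which is why this crude four-parameter shape got synthesis as far as it did.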

They’re getting closer lately, but you can probably tell the real thing from the synthesised version fairly easily if you listen closely.  I’m sure that you all prefer the real deal.  I’m going to leave you with a violin player that I found while writing this article.  It’s one thing to play the violin well, it’s quite another to also dance and meet required standards, and also not accidentally poke out the eyes of your backup dancers with the bow!


That’s all for this post!  Kpopalypse will return with more of the music theory series at a later date!

* or possibly Boram.  Will cover in a future episode of this series, if I remember.
