Welcome to the second part of Kpopalypse’s music theory class!
So now that we’ve defined music theory in the first episode of this series, we have a larger problem to contend with: we can’t discuss a theory of a thing if we don’t have a clear sense of what that thing is. So let’s define that thing – how do we define music itself? I’ll try to do this without getting too technical at first, and note that this post will cover and consolidate some information that has previously been presented in various episodes of the Qrimole question-and-answer series, where I’ve been asked the “what is music” question and others similar to it.
So what is music, then? A quick Google search comes up with this:
The search “what is music” currently brings back 12.5 billion results, and this is the first one. There’s an obvious problem with this kind of definition, which is wrong on three different counts:
- At the time of writing, two thousand people enjoyed Rui’s “I Don’t Care” enough to click “like” on the YouTube video, meanwhile (at current count) 283 people clicked “dislike”, 284 if you count the song’s inclusion at #1 in Kpopalypse’s worst songs of 2018 as a bonus dislike. This disagreement clearly indicates that “beauty of form” is a subjective determination, which is a big problem if we’re using that term to define what something is or is not in objective terms. How do we agree on what possesses “beauty” or does not?
- If music must produce “harmony” to be music, this is an issue, because “harmony” is more than one musical pitch being sounded together. Does this mean that instruments without a specific pitch, such as most drums, are not in fact musical instruments? Does it also mean that monophonic instruments (i.e. those that can only produce one note at a time), such as most wind and brass instruments, are not considered musical instruments unless they are grouped together with other instruments to supply the extra notes so they can harmonise?
- The existence of elevator music, on-hold music and “It G Ma” confirms that it is possible to produce music completely devoid of the expression of any emotion whatsoever (unless you count “being on drugs” as an emotion). However, all three of these things are still broadly considered a form of “music”.
Most definitions of music that I have read run into problems similar to these, by attaching a “qualitative” or “value judgement” aspect to music which is in fact subjective, often culturally determined, and further determined by the individual. An anecdote which one of my university lecturers shared with me was that there was a British music professor who decided to take a music playback device deep into the Amazon rainforest and play the works of Mozart and Beethoven to a remote tribe who had their own tribal music, but who had never heard any western music of any kind. I’m not exactly sure what the professor was trying to achieve by this particular feat of musical outreach – presumably he thought it would “culture the savages” or something – but the result wasn’t what he expected anyway: when he fired up the playback device, the individuals in the tribe all ran away, terrified. After he turned the machine off and waited for the tribe to return from the bushes where they were hiding, they told him that they thought his device was a portal to hell and that they had been listening to the sounds of the devil (and they didn’t mean it in a complimentary “first two Deicide albums” kind of way). Clearly, to the professor, the recordings were music, but to the people in the Amazon tribe, they were not. A re-enactment of this historical event is in the below video, I think. (I didn’t watch it all the way through, I was a bit scared of listening to those gates of hell for too long.)
Then there’s the issue that not all music is supposed to be an “easy listen” anyway. As soon as you start saying things like “music has to be harmonious” or “music must be pleasing to the ear”, you’re leaving out all the music which clearly isn’t harmonious whatsoever and isn’t trying to be. Even deliberately abrasive music is still music of a sort. The issue then becomes where you draw the line with how pleasant something has to be to actually qualify as music, and then you’re back to subjective interpretations. One person might draw the line at “wolf nega wolf awoooooooo”, another might draw it at Merzbow, but the problem remains: no matter where the line is drawn, some type of authority has to determine who is correct about any of that, and which outside authority are you going to trust with that job? Not Kpopalypse, I hope. If you say what I think most people would say – “I only really trust myself to make that determination” – then everyone is their own judge with different opinions and we’re right back to subjectivity again and have solved nothing.
The only way that I can think of to come up with a coherent definition of music is therefore to ignore any kind of “quality” aspect and focus on the actual “framing” of the music. The following paraphrases Frank Zappa’s theory on the definition of music from The Real Frank Zappa Book, which is as close as I’ve seen to anything concerning a musical definition that I can agree with. The concept is similar to defining art – if you throw a tomato against the wall, the red splat you generate is not art, it’s just a mess, and when the art gallery closes at 2am the night janitor will presumably clean it up. However, if you throw a tomato against the wall and then put a frame around that red splat, the janitor knows not to clean that bit, because there is now a defined area where “art” resides – the janitor now knows that inside that frame is a designated artistic piece of some kind. He might not like the artistic piece that’s in there, that’s a subjective determination which is completely up to him, but he’s just the cleaner and not some poncy art critic being paid to write the art column for Billboard while throwing in as many references to BTS as possible to generate web traffic, so his opinion is not important anyway. However, what he definitely absolutely knows is that if he puts a mop and bucket into that framed area and wipes up the mess, he’ll be in a lot of trouble the next morning.
With music the same concept can apply, although the “frame” is not a “space frame” but a “time frame”. If you want to be a music composer, you declare your intention to create a piece of music at a certain time, then within that “time frame” you can do or not do something that makes a noise of some description. Then at a certain point you end that time frame (or you could keep it going, declaring it a “work in progress”). That’s why John Cage’s “4′33″” is actually still considered music by many – it may not contain any deliberate sounds from the performers, but it’s an excellent demonstration of the “time frame”. As the frame is designated to be empty, the music then (initially) becomes whatever ambient sounds find themselves inadvertently within it during the piece’s performance. This then of course makes both the performers and the audience of the piece very self-conscious, and this changes how they behave and the types of sounds they make, changing the audible result in the process, which now demands great attention – essentially, in 4′33″ John Cage is playing the people in the room.
This framing of a piece of music doesn’t have to be made by the composer themselves. Listeners can also create their own frames. One person may walk onto a busy factory floor and hear a bunch of annoying machines, just noise, certainly not music. A second person might walk onto that same factory floor and notice the different rhythms generated by the different mechanical processes, and perceive a “musical experience” – they are creating their own frame. Lars von Trier demonstrates exactly this in “Dancer In The Dark”: Björk’s daydreaming character chooses to frame her audible experience in the factory as music, but to her more practical-minded co-worker played by Catherine Deneuve, the machines are just noise.
With the exception of “4′33″” (and any plagiarised versions thereof), all music contains some form of deliberate sound. So what is sound?
Sound is molecular vibration within a certain frequency range that is received by the human (or animal) ear, and the rate of any vibration (sound or otherwise) is measured in hertz (Hz – vibrations per second, and pronounced “hurts”). The “being received by an ear” aspect is important, hence the old philosophical quandary “if a tree falls in the forest, and nobody is around to hear it, does it make a sound?”, or the k-pop fan equivalent “if you buy some bullshit album just so you can drool over the photos in the booklet but you never play the CD because you already illegally downloaded all the music files weeks before it arrived in the mail, could you even say that you are buying music?”. So let’s finish this post up by talking about human hearing.
The common limits of human hearing are from 20 Hz (twenty vibrations per second) to 20000 Hz (20 kHz – twenty thousand vibrations per second). These limits are not absolute, but they can be considered the “maximum” – people’s hearing degrades as they age, with hearing lost from the high end first, so a child can hear 17 kHz whereas their parents probably can’t. Here’s a fun demonstration of this using ringtones that vibrate at 8 kHz, 12 kHz, 17 kHz, 16 kHz and 22 kHz. Animals also have slightly different hearing ranges.
You can test the exact limits of your own hearing with the following. Note that if you don’t have very good speakers connected to your device, they may not be able to reproduce tones all the way down to the lowest tone of 20 Hz, so don’t be too surprised if you hear silence for the first part of this video.
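If you’d rather generate your own test tones than trust some random video, a few lines of code will do it. Here’s a minimal sketch using only Python’s standard library – the frequencies and filenames are my own illustrative choices, not from any particular hearing test:

```python
import math
import struct
import wave

SAMPLE_RATE = 44100  # samples per second; must be over twice the highest frequency you want


def write_tone(filename, freq_hz, seconds=2.0, amplitude=0.5):
    """Write a mono 16-bit WAV file containing a sine wave at freq_hz."""
    n_samples = int(SAMPLE_RATE * seconds)
    frames = bytearray()
    for i in range(n_samples):
        # one full sine cycle every 1/freq_hz seconds = freq_hz vibrations per second
        value = amplitude * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
        frames += struct.pack("<h", int(value * 32767))  # 16-bit signed sample
    with wave.open(filename, "wb") as wav:
        wav.setnchannels(1)            # mono
        wav.setsampwidth(2)            # 2 bytes = 16 bits per sample
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(bytes(frames))


# Generate test tones across the nominal human hearing range
for freq in (20, 440, 8000, 17000):
    write_tone(f"tone_{freq}hz.wav", freq)
```

Play the resulting files back through decent speakers and see where your own ears (or your speakers) give up – the same caveat about cheap speakers not reproducing 20 Hz applies here too.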
All musical instruments have one thing in common: they produce vibrations like these, which wobble the molecules around them (usually air molecules, but all types of molecules can vibrate). A speaker system also produces vibrations – it’s literally a circular bit of material in a box, designed to reproduce vibrations and wobble molecules in the most effective manner possible. As 20 Hz is the lower limit of human hearing, anything that can wobble molecules at over 20 wobbles per second can produce its own unique sound – that’s why hummingbirds hum when they fly but other types of birds with more chilled-out wing motion do not. If you could move your own arms at over 20 wobbles per second, you could make a unique sound with them, but you probably can’t do this. However, your vocal cords can in fact vibrate this quickly, so humans are able to produce sound through the mouth instead by passing air over the vocal cords, which is what both speech and singing are (and there both begins and ends this discussion of singing technique in this series).
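That “over 20 wobbles per second” rule boils down to a trivial range check. Here’s a sketch – the wing-beat and arm-flap rates below are rough illustrative figures of my own, not measurements from anywhere in particular:

```python
AUDIBLE_LOW_HZ = 20       # lower limit of typical human hearing
AUDIBLE_HIGH_HZ = 20000   # upper limit (degrades with age)


def is_audible(vibrations_per_second):
    """Return True if a vibration rate falls in the nominal human hearing range."""
    return AUDIBLE_LOW_HZ <= vibrations_per_second <= AUDIBLE_HIGH_HZ


# A hummingbird's wings beat somewhere around 50 times per second (rough figure),
# fast enough to produce an audible hum; a flapping human arm (maybe 2 Hz) is not.
is_audible(50)     # hummingbird wings -> True
is_audible(2)      # human arms -> False
is_audible(25000)  # ultrasound -> False (though some animals can hear it)
```

Note the upper bound matters too: vibrations faster than about 20 kHz are just as inaudible to humans as vibrations slower than 20 Hz, which is why dog whistles work.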
The human ear hears all these wobbles by having an eardrum, which picks up these molecular vibrations in the air (or in the water, if you’re underwater, though sound doesn’t do quite as well through water when it hits your ear, because an eardrum doesn’t vibrate as easily when it’s wet). Those vibrations then get sent to hairs in your inner ear, then they go into the brain and… well, who knows what the fuck happens after that, nobody really knows exactly, and if they tell you that they do, they’re lying and are probably some dirty hippies and not neuroscientists or anything. I’ve kind of oversimplified this explanation, and I was going to put a diagram of the human ear here to explain it better, but all those diagrams look kind of ugly and who wants to see an ear. So here’s a picture of Go Won instead, she has ears so it’s kind of relevant.
That’s all for this post! I’m trying to keep these posts short so they’re easily digestible and don’t get too drawn out and boring so you can go back to stanning Loona quickly. The next one will appear soon!