A while ago I made a post that discussed various technical aspects of vocals from an audio engineering rather than a singing point of view, because inquiring k-pop-loving minds wanted to know all about vocals and there really aren’t any other posts that I’ve seen out there in the k-pop fan’s world that tackle vocals from any point of view other than a fan’s or a singing teacher’s. In the vocal post I discussed the technical ins and outs, and I also asked if people were interested in a similar post about the backing tracks and instrumentals of k-pop. As it happens, some of you said that you would like a post like that, so here it is. Be careful what you wish for, hey.
Some of you bright sparks out there have been noticing that there’s a bit of a “real instruments” trend seeping through Korean pop music lately – obviously synthesized electronic dance music is starting to take a backseat to pop music with real drums, guitars, brass, strings and keys. Clearly this is just another cyclical change in music fashion, and it’s probably a reactionary trend to the proliferation of dubstep breaks shoehorned awkwardly into every second upbeat k-pop song released over the last two years whether the dubstep material suited the song or not (thanks “Bubble Pop” for launching that shithouse trend – not). So the question this blog poses is not “are real instruments good or bad?” (nobody cares, you dimwit) or “will the proliferation of real instruments last?” (no it won’t) but “how much of these so-called ‘real instruments’ is actually real and not synthesized or machine-generated?”
If the title of this blog post didn’t already give it away, the short answer is “not a fucking lot”. Okay, all you people who complain about my posts being condescending or having really obvious information in them or me having a snarky tone or being a cunt or whatever else can fuck off now. Yay! For the rest of you still reading – nice to have you on board, ladies and gents.
A SHORT AND HOPEFULLY NOT TOO BORING HISTORY OF POP MUSIC RECORDING
In the early days, music recording was monophonic, or mono for short, which means that everything was recorded onto one track of big-ass magnetic tape (or if we’re going back reeeeeally far, a wax cylinder), and then vinyl records were cut from this. The recording was made very simply – the engineer would stick a single microphone into a room, say “okay guys and gals, hit it”, the entire ensemble would play and sing at once, and it would be recorded through the microphone onto the tape machine. Mixing of elements was done by physical distance – how close you stood or played to the microphone determined how important you were to the song and how loud you would end up in the final mix. Obviously drums, brass and other loud instruments that could potentially dominate the mix were recorded very far away, whereas vocals would be relatively close, so the final result is that (hopefully) you get a nice “balance” and everything can be heard “just right”. In practice it didn’t always work out this way, usually due to room acoustics – too much or too little echo could easily ruin a recording by making the individual elements harder to balance, and finding a room that sounded “just right” when a whole band played in it was difficult. Recording studios that had good-sounding rooms which were able to reliably produce hit records were acutely aware of this problem and charged a premium for their services.
Then something happened – some wise-ass inventor and musician by the name of Les Paul thought up a neat trick called multitrack recording. He figured out that magnetic tape could be divided widthways into parallel sections and you could record different things onto each section, then combine these sections later onto another tape for a final mix. What this meant in real terms is that the volume of each individual instrument could be individually adjusted before the final mix was made, so you could always achieve that perfect balance between volumes even if you recorded under less-than-ideal conditions. Les Paul got crazy fine pop idol pussy over this radical invention (as well as a few other innovations, like the famous Gibson Les Paul solidbody guitar) and produced several hit recordings with singer Mary Ford using his fancy multitrack studio, which was cutting-edge technology at the time. The recording industry were slow adopters of this new technology, gradually progressing from mono to stereo (two tracks, one for each ear, but essentially the same procedure), and by about the late 1960s multitrack recording had finally gained enough popularity to become the industry standard practice. Only classical music is still recorded similarly to the old way, with a stereo microphone over the conductor’s head picking up the entire room (which is why the symphony orchestra still has a particular seating arrangement).
The interesting thing about multitrack recording that is often neglected by people who prefer “real music” over “that EDM rubbish” is that tracking each instrument separately creates a completely artificial acoustic construct. Multitrack recording means that instead of recording, for example, a whole drum kit by hanging a microphone over the top of it and hearing it the way your ears naturally would, you can record each drum in the kit individually – you can put a microphone an inch away from the snare drum, another one right up to (or even inside) the big bass drum, another one up to each individual cymbal and tom, and so on. Then you can mix and balance it all to your heart’s content (or at least until you run out of expensive studio time) until it sounds great, and it probably will sound great, but what this method will never sound like is the natural, real sound of a drumkit. There’s simply no way that you can stand in a room and hear a drumkit that way, because you don’t have twelve different ears that can all go right up to a drum skin at the same time (and if you did, listening to a drummer would send you deaf really fuckin’ quickly). This practice is called “close-miking” and most instruments in a multitrack recording have been recorded “close-miked” for the past 40 years. The point being, you can forget about “hearing the real sound” – to hear recordings that have a “real” acoustic perspective you have to go back to the 1950s, and you wouldn’t want to anyway – what qualified as an acceptable instrument sound on a 1950s recording sounds laughable and amateurish by today’s standards. In the realm of pop music, the artificial version is what today’s listeners prefer.
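For the terminally nerdy, here’s the whole multitrack idea boiled down to a toy Python sketch: each close-miked track gets its own gain, set at the desk rather than by microphone distance, and then everything is summed into one master. Every track name and number here is invented for illustration.

```python
# Toy multitrack mixdown: each close-miked track gets its own gain
# before the tracks are summed into one mono master.

def mixdown(tracks, gains):
    """Sum equal-length sample lists, scaling each track by its gain."""
    n = len(next(iter(tracks.values())))
    master = [0.0] * n
    for name, samples in tracks.items():
        g = gains.get(name, 1.0)
        for i, s in enumerate(samples):
            master[i] += g * s
    return master

tracks = {
    "kick":  [0.9, 0.5, 0.1, 0.0],
    "snare": [0.0, 0.8, 0.4, 0.1],
    "hat":   [0.3, 0.3, 0.3, 0.3],
}
# balance is decided at the mixing desk, not by where anyone stood
gains = {"kick": 1.0, "snare": 0.7, "hat": 0.2}

print(mixdown(tracks, gains))
```

A real console or DAW mix bus does exactly this, just with panning, EQ and effects bolted on top.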
Of course, this is just the tip of the iceberg. Let’s now go deeper and have a look at some individual instruments, and some of the common ways that their sounds are “created”.
DRUMS
Back to drums again, and it’s common knowledge these days that most drums in k-pop are generated by a drum machine playing stored drum samples. What you may not know is that most live drumming on recordings, when it appears, is also machine-generated.
Okay, let me explain. Say you’re an audio engineer and you’re sitting in a studio trying to get “that perfect sound” out of a drumkit, but no matter what you do, it still sounds like absolute wretched anus. Your client is a moron, his $100 drumkit is a rattling, squeaking, out-of-tune piece of shit, but you can’t afford to annoy him because he’s the one paying the bills for the studio session, and he’s emotionally attached to his drum kit like a crippled infant to a security blanket and insists that it is “the best shit ever, man”. Your client only has so much money, so you really are under pressure to not go overtime and to pull this great drum sound out of your ass. If you don’t deliver the sonic goods as fast as possible, you know for sure that he’ll blame you, because according to him his kit is so good that any shortfall in sound quality must therefore be your fault… but with his Fisher-Price hunk of junk there’s just no way it’s going to happen. The way you see it, you’ve got two choices:
1. Keep moving microphones around, adjusting the drum kit and the room environment until the drumkit sounds great, which could take any amount of time.
2. Put some drum triggers on the kit and use them to trigger drum samples.
Easy choice. You attach some little orange clips to his drumkit – these are your drum triggers. “What the fuck are these things, man?” your idiot client asks while drooling and gently scraping his knuckles along the studio carpet. You spin some bullshit story about them being “rim stabilisers”, and you also leave all the drum microphones set up as well so he doesn’t get wise. Now every time the drummer hits a drum skin, these triggers send a little “go” signal to a big box called “my kickass drum sounds 101” full of pre-recorded drum samples of every type of popular drum kit sound, all perfectly in tune and recorded with brilliant clarity in a studio much more expensive than yours somewhere in the USA or Germany or wherever the fuck. When your drummer hits that snare drum, the drum box plays a recording of a snare drum from some snare drum sample library, and that is what gets recorded. What if you don’t like the sound? Fine – ask the drum module to instead play one of the other 199 snare drum sounds in its internal library until you hear one that you like. Once your recording is done, you invite your client into the control room to listen and you play him the finished product. He smiles and says “I told you my drumkit fuckin’ kicked ass, man!” – little does he know that he isn’t even listening to his own kit, he’s hearing himself playing samples of a much better kit. In the meantime, you take his money and usher him out the door.
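The logic inside that drum box really is that dumb, and here it is as a toy Python sketch – the drum names, sample filenames and patch numbers are all made up for illustration:

```python
# Sketch of what a trigger/sample box does: every hit the triggers
# detect gets replaced with a pre-recorded sample from a library.

SAMPLE_LIBRARY = {
    "snare": {1: "snare_studio_A.wav", 2: "snare_fat_B.wav"},
    "kick":  {1: "kick_modern.wav"},
}

def render_performance(hits, patches):
    """hits: list of (time, drum) trigger events.
    patches: which library sample each drum is currently set to play."""
    out = []
    for time, drum in hits:
        sample = SAMPLE_LIBRARY[drum][patches[drum]]
        out.append((time, sample))   # this is what actually gets recorded
    return out

hits = [(0.0, "kick"), (0.5, "snare"), (1.0, "kick"), (1.5, "snare")]
patches = {"kick": 1, "snare": 2}    # the engineer picked snare sound #2
print(render_performance(hits, patches))
```

Note that the drummer’s timing survives but his actual drum sound never makes it to tape – don’t like the snare, swap the patch number and nothing gets re-recorded.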
These days drum machine technology is excellent and can sound exactly like a real drummer with no problems, but drum machines have a disadvantage – they only play in repetitive patterns, and sometimes the natural variance of a real drummer (or the desire to fool dumb k-pop fans into thinking they’re hearing “real music”, whatever that means to their pea-brains) is desired. Sure, you can make a drum machine sound like a real drummer if you really want to by programming it really fastidiously instead of using repetitive loops, but who’s got time for that? Get in a professional drummer but make them use a triggered drum kit and you’ve saved yourself the time of trying to get a good drum sound AND the time that you may have spent programming the drum machine to sound “more human”. It’s a lot quicker to trigger proven sounds that you know are going to work, than to take a chance by trying to record the natural sound of drums that may or may not sound any good. Triggers are used all over pop recordings these days because it’s a time-saver, and in a recording studio, time is money (literally).
Using samples has another advantage – volumes are more consistent, reducing the need for compression (an effect which is explained in the vocal production post). Those of you who listen to that extreme metal stuff with the double-kick drums going at light speed might be interested to know that usually the volumes between the two kick pedals are evened out electronically. I was in on a studio session once where the drummer’s kick drum work was so inconsistent and unrecordable that the engineer assigned him the ultimate drummer humiliation – he was made to redo all his fast double-kick work by repeatedly tapping on a sample keyboard with two fingers. I’ve never seen a more embarrassed drummer in my life than at that moment.
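If you’re wondering what “evened out electronically” actually amounts to, it’s not much more than clamping every hit level into a narrow band. A toy Python version, with the target level and tolerance invented for illustration:

```python
# Squash a drummer's uneven hit levels into a consistent band before
# they trigger samples. Numbers are arbitrary illustration values.

def even_out(velocities, target=0.9, tolerance=0.05):
    """Force every hit to within +/- tolerance of the target level."""
    out = []
    for v in velocities:
        v = max(target - tolerance, min(target + tolerance, v))
        out.append(round(v, 3))
    return out

sloppy = [0.95, 0.4, 0.88, 0.62, 1.0]   # inconsistent double-kick work
print(even_out(sloppy))                  # every hit now sits near 0.9
```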
Oh, and another thing – drums, like any other instrument, can be Auto-tuned. Remember Brad from Busker Busker’s controversial interview where he said that the producers of their cover of SHINee’s “Juliette” had to “Autotune everything, even the drums”? He’s not making that shit up.
BASS
Bass guitar is one of those instruments that is synthesized an awful lot these days, and the reason why is fairly straightforward – electronic synths can go a lot lower than a bass guitar. Sure, you could use an upright bass (also known as a double bass) instead, but there’s a trade-off – an upright bass doesn’t have all that much sustain, unless you play it with a bow classical-style, and then the sound has all the sustain you want but not much punch. To get the deep, thudding, punchy, sustaining, subwoofer-friendly bass frequencies that modern pop music listeners like to dance and use drugs to, you need machines. There are three common ways to synthesize bass on a recording:
- Make a bassy noise with a synthesizer and program it into your track or play it in real time
- Use a tone generator and a gate to add a bass tone triggered by the peaks of another instrument
- Synthesize sub-bass from a live instrument by sampling it and then pitch-altering the sample
The first point is self explanatory but the other two may not be, so here comes the technical fun.
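Self-explanatory, sure, but just to nail point 1 down: “make a bassy noise with a synthesizer” is, at its simplest, an oscillator running at a low frequency. A toy Python render of one second of low E – a plain sine instead of a real synth’s fancier waveforms and filters, and the amplitude is an invented illustration value:

```python
import math

SAMPLE_RATE = 44100

def sine_bass(freq_hz, seconds, amp=0.8):
    """Render a sine wave as a list of samples - the crudest synth bass."""
    n = int(SAMPLE_RATE * seconds)
    return [amp * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]

note = sine_bass(41.2, 1.0)   # low E on a bass guitar is ~41.2 Hz
print(len(note), max(note))
```

An octave below that note is ~20.6 Hz, right at the bottom of human hearing – territory a machine reaches with one knob turn.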
A tone generator is basically just a box that makes a sound. You can buy them or download them for free; they’re electronically ultra-simple, and you can even buy them in kit form from electronic hobby shops and assemble one yourself. A gate is an audio signal device that will either let a sound through it or not, depending on input. If you set a fat bassy tone to go through the gate, but keep the gate closed until it gets a signal from a bass drum, every time the drummer hits their bass drum they’ll also open the gate and let the tone through. Here’s the concept explained in a diagram.
The result: combining the two signals gives an instant thick-sounding bass drum with a nice sub-bass underneath. This technique is called “gate side-chaining” which sounds a little kinky, because audio engineers like to tell themselves they’re doing something sexy when they’re really just being ultra-nerds fucking around with machines at ungodly hours of the morning when everyone else is listening to the fruits of their sonic labour in nightclubs, partying and getting laid.
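For the machine-fondling ultra-nerds, here’s the kick-triggered gate as a bare-bones Python sketch – signals are just lists of sample values, and every number is invented for illustration, not taken from a real recording:

```python
# Sidechain gate in miniature: the sub-bass tone only passes through
# while the key signal (the kick drum) is above the threshold.

def sidechain_gate(tone, key, threshold=0.5):
    """Let `tone` through only where `key` is hot; silence elsewhere."""
    return [t if abs(k) >= threshold else 0.0
            for t, k in zip(tone, key)]

sub_tone = [0.7, 0.7, 0.7, 0.7, 0.7, 0.7]    # constant fat bass tone
kick     = [0.9, 0.8, 0.1, 0.0, 0.95, 0.2]   # two kick hits
print(sidechain_gate(sub_tone, kick))         # tone appears only under the hits
```

Mix that gated tone back under the kick track and you’ve got the instant thick bass drum described above.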
The other trick, sub-bass synthesis, isn’t so complex. Just feed your bass signal into something like an octave divider pedal that makes everything lower, then carefully recombine that signal with the original signal. There are other techniques too but I won’t go into them here because I’m too lazy, this post is fucking long enough and I want to get to the next picture of a cute AOA member just as much as you do. The bottom line – the bass that you are hearing on a k-pop record is usually just a synth but in the rare cases where it’s not, it’s usually been juiced up in some way by added synth elements such as these.
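Since we’re here anyway, the octave-down idea is easy to sketch too. One crude way to drop a sampled note an octave is to read it back at half speed; real octave pedals and sub-bass processors are much smarter than this, and the toy waveform below is invented for illustration:

```python
# Crude sub-bass synthesis: read the sample at half speed (pitch drops
# one octave), then blend the result back under the original signal.

def octave_down(samples):
    """Advance the read position by 0.5 per output sample, with linear
    interpolation, so every frequency in the signal is halved."""
    out = []
    for i in range(len(samples)):
        pos = i / 2
        lo = int(pos)
        frac = pos - lo
        nxt = samples[min(lo + 1, len(samples) - 1)]
        out.append((1 - frac) * samples[lo] + frac * nxt)
    return out

def add_sub(samples, sub_level=0.5):
    """Recombine the octave-down signal with the original."""
    sub = octave_down(samples)
    return [s + sub_level * b for s, b in zip(samples, sub)]

bass_di = [0.0, 1.0, 0.0, -1.0] * 4    # toy "recorded" bass waveform
fattened = add_sub(bass_di)
print(fattened[:4])
```

The toy input repeats every 4 samples; the octave-down version repeats every 8, i.e. one octave lower, which is the whole point.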
GUITARS
It’s easy to sample and play a guitar sound, but completely synthesized guitar is still relatively rare, because guitar is quite a difficult instrument to synthesize convincingly. A notable example of completely synthesized guitar is T-ara’s “Cry Cry”, where all the flamenco-esque guitar parts are (fairly obviously) produced by a keyboard. This is unusual – most guitar playing in k-pop is “real” – but only up to a point. Let’s look at electric guitar first.
Professional electric guitarists are an unusual breed of players in that they almost all hate the natural sound of their own instrument and don’t consider it good enough for a recording or a live stage! Most electric guitarists are absolutely in love with signal processing, to the point where guitarists working in the pop and rock fields sport extensive pedal boards full of signal processing effects that they lay at their feet on live stages and step on during songs to trigger and alter sounds. Here’s a pedal board of a well-known professional guitarist (guitar nerd points for you if you know which one):
To explain exactly what this board does, and the different varieties of guitar processing in general and what they do, would require another blog all of its own, but this level of effects processing on a guitar signal is by no means unusual (the number of times I’ve had to patiently wait to get onto a stage while a previous band’s guitarist dismantled their crazily overblown effects setup, I couldn’t tell you). Of course, not every effect is on at all times; guitarists will mix and match according to the song, essentially “playing” their effects like another instrument in itself. Digital sound alteration for guitarists is so common these days that they even have a special word for when they go without it: “clean”. It’s a very unusual guitarist who voluntarily steps on a live stage with “a clean sound”. And all this processing happens before the signal even gets to the mixing board in the studio!
I know what you’re thinking – “but my faves all play acoustic guitar!” They’re not exempt – most modern acoustic guitars sport hidden battery-powered electronic pickups and can be plugged into effects units just like an electric guitar can, so they can use all the same toys.
The little black control unit adjusts the volume and frequency response of the inbuilt pickup. Just saying this here because if I don’t, someone will ask me “what’s that fucking black shit dude”.
Acoustic guitars are actually very commonly post-processed on k-pop recordings as well. The current trend in k-pop is to gate acoustic guitars extremely heavily. Remember the gate that we talked about before? Well, it has another function. You can set a gate “threshold” so that the gate opens once the signal passing through it reaches a certain volume level, and closes once it dips below the threshold. This is actually the more common use of an audio gate, and on a diagram it would look a bit like this:
The red sections of the signal get completely removed, leaving a sound which has no natural decay but just starts and stops very sharply, leaving an uncanny dead silence in between strums. Juniel and BTOB both have songs with heavily gated acoustic guitar (they use muting as well, but gates are used to “tidy up” any loose ends – k-pop’s perfection obsession at work) but they’re not the only ones, just two examples that spring readily to mind. The common 1980s “big drum” sound (popularised on Phil Collins’ hit “In The Air Tonight”) also uses this technique as do many other drum mixes from the period.
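In code, the threshold gate is almost insultingly simple – here it is as a toy Python sketch (real gates add attack, hold and release times so they don’t chatter; the threshold and “strum” values below are invented for illustration):

```python
# Threshold gate: anything under the threshold (the "red sections")
# is replaced with dead silence, giving those unnaturally clean stops
# between strums on heavily gated acoustic guitar.

def threshold_gate(samples, threshold=0.2):
    return [s if abs(s) >= threshold else 0.0 for s in samples]

# toy envelope: strum, natural decay, then another strum
strum = [0.9, 0.6, 0.3, 0.15, 0.08, 0.03, 0.7, 0.4, 0.1]
print(threshold_gate(strum))
```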
The other thing to keep in mind with all guitar parts in k-pop is that the guitar player usually didn’t play the whole thing as you hear it. If the song has two verses which are identical, the engineer will usually sample the first verse of guitar and then copy and paste the part over to the second verse so it sounds exactly the same… or vice versa if the guitarist happened to fuck up a bit less in the second verse than the first one. Guitar solos, when they appear in k-pop, are often also cut-and-paste collages of the best bits of multiple attempts at guitar solos, which is as easy for an audio engineer to create as for a writer to cut and paste pieces of a massive overlong boring essay together (like this one, for instance).
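Since the copy-paste trick really is just cut and paste, it reduces to list slicing in a toy Python sketch – copy the take the engineer liked over the other verse’s region of the timeline. The region positions and labels are invented for illustration:

```python
# Verse copy-paste: overwrite one region of the timeline with a copy
# of another region, so both verses sound identical.

def paste_region(track, src_start, dst_start, length):
    out = list(track)
    out[dst_start:dst_start + length] = track[src_start:src_start + length]
    return out

# toy "guitar track": verse 1 is clean, verse 2 has a flubbed note
guitar = ["v1a", "v1b", "v1c", "chorus", "v2a", "FLUB", "v2c", "chorus"]
fixed = paste_region(guitar, src_start=0, dst_start=4, length=3)
print(fixed)
```

In a real DAW the regions are millions of samples instead of labelled chunks, but the operation is exactly this dumb.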
PIANOS, ORGANS, KEYBOARDS
Keyboard technology in 2014 is very kick-ass, and newfangled keyboards are capable of pretty much any fucking thing. The modern keyboard is basically a computer, and if you check the specifications of professional-grade keyboards you’ll notice that they are not just instruments but sample generators, programmable machines and digital signal processing units all in one. This means that a keyboard can make any sound that any other instrument can make, plus a few more. Most importantly, many keyboards can be programmed, meaning that a keyboardist doesn’t actually need to play a keyboard in real time – they can press a few buttons and out comes “here’s one I prepared earlier”, like in the cooking TV shows where they don’t want to make you wait 40 minutes while a pasta bake roasts in the oven. Combine keyboards with drum machines, MIDI (Musical Instrument Digital Interface, a computer language that allows instruments to talk to each other) and other sequencing tools, and they become even more powerful.
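The “here’s one I prepared earlier” workflow boils down to storing the part as note events and letting the machine play it back. Here’s a toy Python step sequencer in that spirit – the note names, pattern and tempo are all invented for illustration, and real MIDI carries note numbers and velocities rather than strings:

```python
# Toy step sequencer: a stored pattern becomes timed note events,
# roughly what a MIDI sequencer hands to the sound engine. Nobody has
# to play anything in real time.

BEAT_SECONDS = 0.5   # 120 BPM

def sequence_to_events(pattern):
    """Turn a step pattern into (seconds, note) events; None = rest."""
    events = []
    for step, note in enumerate(pattern):
        if note is not None:
            events.append((step * BEAT_SECONDS, note))
    return events

stored_part = ["C3", None, "E3", "G3", None, "C4", None, None]
print(sequence_to_events(stored_part))
```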
The modern keyboard is basically the musical portable magic-trick-box. Who remembers that scene in the film White: Melody Of Death (which I hope by now you’ve all seen) where the lead vocalist is worried about getting her money note right, so the producer has someone standing by in a booth off to the side of the stage who sings just that one note for her when the time comes? In reality, this wouldn’t happen: setting up a booth like that would be too logistically difficult and it would be far too easy to get caught. It would make much more sense to store a sample of the correctly-sung note on another instrument such as a keyboard and get someone to trigger it at the right time. Keyboards can be used to sample and play back all sorts of shit including…
BRASS
Brass parts are nearly always keyboard samples. Sometimes a solo line may be recorded individually, and deliberately retro concept albums like IU’s “Modern Times” have legit instruments, but those big brass stabs in the more modern k-pop songs… almost always samples, played on keys. “Real instruments” my ass.
WOODWINDS
Same here. Nobody plays fucking woodwind instruments, get real. What do you think this is – it’s not 1850, you bitch.
THE VIOLIN FAMILY
Here’s an interesting one. A symphonic string sound is a very easy thing to synthesize, and most electronic keyboards of any worth have a really good and convincing “orchestra strings” sound built in from the factory. However, the sound of just one violin playing on its own is something that modern synthesizers can’t get right yet; the technology isn’t quite there. The reason for this is that violins are played with a bow, and there are so many factors involved in bowing an instrument that computer programs still have a hard time figuring it all out. With twenty violins playing at once, they all blend into a smooth mush which is easy to copy, but the somewhat harsh and highly variable sound of bowing means that if you hear a solo violin, it’s probably not synthesized (although it may still be a sample). Give the tech another 10 years at least to get it together, and then we may start to hear a decent synthesized violin… but probably not. It may always be cheaper to give a real violin player a bag of heroin in exchange for cutting a cool violin solo on a recording than to learn how to use some kick-ass violin simulator that only does the job about 80% right.
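Why is the ensemble so easy to fake? Stack a bunch of slightly detuned oscillators and the individual quirks blur into exactly that “smooth mush”. A real string patch uses sawtooth waves, filtering and chorusing; plain sines keep this Python toy short, and the voice count and detune amounts are invented for illustration:

```python
# Fake "string section": sum many slightly detuned oscillators so no
# single voice is distinguishable - the easy part of string synthesis.
import math
import random

SAMPLE_RATE = 44100

def ensemble(freq_hz, seconds, voices=20, detune_cents=10, seed=1):
    rng = random.Random(seed)
    n = int(SAMPLE_RATE * seconds)
    # each "player" drifts a few cents sharp or flat of the target pitch
    freqs = [freq_hz * 2 ** (rng.uniform(-detune_cents, detune_cents) / 1200)
             for _ in range(voices)]
    out = []
    for i in range(n):
        t = i / SAMPLE_RATE
        out.append(sum(math.sin(2 * math.pi * f * t) for f in freqs) / voices)
    return out

pad = ensemble(440.0, 0.1)   # a tenth of a second of "string section" on A4
print(len(pad))
```

The hard part – one exposed voice, with all the bowing noise and variation of a solo violin – is precisely what a trick this cheap can’t do.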
THE THRILLING CONCLUSION
If you have thoughts anything like this person:
…then you’re wrong. Never mind the whole debate about “what does ‘substance’ actually mean” or “why do we give a fuck about comparing pop music from different countries anyway” – the fact is that most of what has a “live band kind of sound” in k-pop is pretty much machine generated on every level that it possibly can be, so this argument falls on its ass right at the first hurdle. The modern k-pop “live” sound is actually a product of various technologies coming together to make that sound happen. It’s the same in western pop too, but it’s probably even more the case in k-pop where there is a “perfection” aesthetic and producers are generally a lot more conscious of smoothing over their product and leaving no rough edges behind. Welcome to the future of pop music! It’s the futuristic, forward-thinking musical elements of k-pop that attracted you all to the genre and this post in the first place, right? Right? Hey, where are you going…?