There’s a lot on the Internet written about Autotune* and its effects on pop music. Pity almost none of it is factual or even makes any sense. I thought it would be interesting as both a kpop fan and a qualified audio engineer to weigh in on the topic of Autotune for the benefit of you folks reading, because if there’s any group of people out there who don’t understand jack shit about Autotune, it’s k-pop fans. I’m sorry for all of you who come here for the pictures of boobs but this is going to be one of those boring educational posts where you probably don’t need your screen cleaner and wet wipes for a change but you actually might learn some shit.
For both of you still reading, if nothing else, you probably know these two things about Autotune:
1. Autotune is that robot voice thing that works by moving a sung note to the nearest correct note
2. Cher’s “Believe” was the first popular song with Autotune in it.
Of course, you’d be only partially correct with the first point, and completely wrong with the second (yes, the Wiki is wrong).
Let’s tackle the first point first. Autotune does indeed move your sung note to the nearest correct note. However, what the audio engineer can vary is the speed and precision of this movement. The “robot sound” that we all associate with Autotune is what we hear when this process happens immediately and the sung note is instantly moved to exactly the correct pitch. This setting is called “zero retune speed” (retune speed being the delay between the singer hitting the shit note and the program dragging it out of the toilet and into the vicinity of where it should be). However, you don’t have to set it that way, and if you were trying to cover up a shit vocal performance, why would you? If you had an absolutely fucking crap singer on your hands like [insert your bias here], it would make more sense to be subtle. If you want to fool someone into thinking [insert your bias here] is a great singer, well, if they hear that “robot voice”, they’ll know the jig is up, right? Better for them to think that they’re hearing something “natural”, and if you’re trying to get something to sound “fixed but natural”, too much perfection is a bad thing because it betrays the machine at work. So with Autotune, you can direct the program along the lines of “with an attack time of 150ms move the incorrect note 90% closer to the correct pitch, with a simulated vibrato variance of 3% at the attack of the note and 5% once the input level drops below a -5dB threshold”. All of a sudden, [insert your bias here] can “actually sing, like, for realz, yo, omg, like, no Autotune or anything, they must be like, SOOOO TALENTED”. Don’t believe that Autotune can work like this? Check out the official page for Antares’ Autotune, which breaks down the key features of the latest version of the software. (There are also plenty of YouTube Autotune tutorials that demonstrate various facets of this, but I won’t link any because that would be fucking boring.)
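If you want to see that dial-a-correction idea in something more concrete than marketing copy, here’s a toy sketch of the maths behind it. To be clear, this is NOT Antares’ actual code or API – the function names and numbers are all made up for illustration – but the basic idea is the same: find the nearest semitone to what was sung, then decide how much of the way to drag the note there.

```python
# Toy sketch of a pitch corrector's "retune" behaviour.
# Not Antares' real algorithm or API - names and numbers are illustrative.
import math

A4 = 440.0  # reference tuning, Hz

def nearest_semitone_hz(freq_hz):
    """Snap a frequency to the nearest equal-tempered semitone."""
    midi = round(69 + 12 * math.log2(freq_hz / A4))
    return A4 * 2 ** ((midi - 69) / 12)

def correct_pitch(freq_hz, correction_amount=1.0):
    """correction_amount=1.0 is the full 'zero retune speed' robot snap;
    something like 0.9 drags the note 90% of the way to the correct
    pitch and leaves a human-sounding wobble behind."""
    target = nearest_semitone_hz(freq_hz)
    # work in cents (log-frequency), which is how pitch is actually perceived
    cents_off = 1200 * math.log2(freq_hz / target)
    return target * 2 ** (cents_off * (1 - correction_amount) / 1200)

flat_note = 430.0                       # a singer aiming for A4 (440 Hz) but landing flat
print(correct_pitch(flat_note, 1.0))    # hard snap: exactly 440.0
print(correct_pitch(flat_note, 0.9))    # subtle fix: about 439 Hz, close but not robotic
```

The second call is the “fixed but natural” trick from above: the note is now only a couple of cents flat instead of forty, which reads to the ear as a good singer, not a machine.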
Bottom line – you don’t spend precious software development time improving on features like a “humanize function” and “realtime natural vibrato adjustment” if nobody is using them.
So, why do we still hear the robot voice? It’s an aesthetic choice. Someone thought it sounded “cool” to do that. It’s no different to a guitarist stepping on a phaser pedal because they think it “sounds cool”. You may or may not like the sound of a phaser pedal on a guitar, just like you may or may not like the sound of Autotune’s zero retune speed digital snap, but if you’re hearing that noise, it’s because the producer wanted you to hear it. It’s when you DON’T hear Autotune that you should be more worried about Autotune being used specifically to cover up shitty out-of-tune vocals. This segues nicely into our second point, which is that Cher’s recording engineers weren’t the first people to use Autotune to fix up some bum notes, but they were the first to set the retune dial to zero (probably by accident while trying to fix some of Cher’s notoriously limp singing) and go “hey, we actually LIKE the sound of that, let’s put it in the final mix that way”. Then they lied about what effect they were using in the hope that their use of the pitch-corrector would remain a music industry trade secret. Why would they lie? Because they had probably been using Autotune’s more subtle settings to fix shitty vocalists’ bum notes for a long time before they worked with Cher, and didn’t want music fans to know that they could do that. So what WAS the first recording to use Autotune? We’ll never know, and that’s exactly my point.
Autotune is like Photoshop’s image-editing facilities, but for the voice. It’s similar in three key ways:
1. It “fixes” shit
2. It’s in everything and I mean EVERYTHING
3. Sometimes it’s deliberately obvious, sometimes it’s accidentally obvious, but when a really skilled practitioner is using Autotune or something like it to hide something, even an expert can’t tell
Watch the following video, and then listen to some of your favourite k-pop songs again.
Someone who only associates Autotune with the robotic-sounding “zero retune speed” setting could be forgiven for thinking that Autotune has somewhat fallen out of vogue in k-pop in recent years, because fewer new releases feature its signature mechanical tone-snapping oscillations. This would be incorrect: only “zero retune speed Autotune robot sass” has fallen out of vogue – Autotune as a subtle pitch-corrector that fixes fuckups and makes your bias sound like they know what they’re doing when they really don’t is now more prevalent than ever before. Professional photographers working with models routinely run ALL their images through Photoshop and make adjustments; it’s become a standard tool of the trade, and the same now applies to Autotune and the music business. Every vocal track by every artist with any kind of budget behind them is run through the magic fix-it box. Only independent artists, artists with a bee up their ass about Autotune (plus the power to make the engineers listen), or artists working in styles where precise vocal pitching isn’t required (rap, punk, death metal) wouldn’t use it (although even in these fields some of them do anyway). Combine this with k-pop’s obsession with making as “perfect” a product as possible and it’s pretty safe to say that there isn’t a single k-pop album in your collection that doesn’t have Autotune smothered all over it like k-netizens’ cum over a computer monitor showing Dal Shabet’s “Be Ambitious” MV. Artists in the pop field generally won’t say no to a bit of subtle, non-detectable Autotune on their voice, for the same reason that models won’t object to a Photoshopper making them look just that little bit skinnier and more toned.
Oh, and because the effects can be made to work in real time, audio engineers can trigger them in live performances too. Without you even fucking knowing. So you can bash all those “idol vocals” threads on Allkpop and Onehallyu forums straight up your ass, because none of that shit really matters a goddamn.
Another thing to remember is that before Autotune there was a thing called the Vocoder, which has been a fixture in music since the 1970s, can also force vocals onto the correct pitch, and sounds exactly the same as Autotune’s “robot voice” if used in the same way. A Vocoder works slightly differently, however: rather than adjusting your pitch in real time to a pitch the software itself picks, it adjusts your pitch in real time to a pitch assigned by another musical instrument (usually a keyboard). This allows a singer to sing ANY note on a keyboard, even notes outside their vocal range, and even chords. And it sounds just as robotic-as-fuck as Autotune does, so it’s easy for the untrained ear to confuse the two. Vocoder is what Kraftwerk, Daft Punk, and J-poppers Perfume use in all their shit, but if you want a k-pop example, here you go:
Programming all those vocal slides and chords would be a pain in the ass with Autotune (but not impossible); it’s very easy with a Vocoder – you just get a keyboardist to plug in and play that stuff and sync the vocals to it. It would take about as long to do as the song takes to listen to.
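To make that difference concrete, here’s a toy sketch (made-up function names, not any real plugin’s API) of how the two effects pick their target pitch: Autotune derives the target from what you actually sang, while a Vocoder takes it straight from the keyboard and doesn’t care what you sang at all.

```python
# Toy contrast between how Autotune and a Vocoder choose the target pitch.
# Function names are invented for illustration, not a real plugin API.
import math

A4 = 440.0  # reference tuning, Hz

def midi_to_hz(note):
    """Convert an equal-tempered MIDI note number to frequency."""
    return A4 * 2 ** ((note - 69) / 12)

def autotune_target(sung_hz):
    """Autotune: the target comes from the input - the nearest semitone
    to whatever the singer actually produced."""
    return midi_to_hz(round(69 + 12 * math.log2(sung_hz / A4)))

def vocoder_target(sung_hz, keyboard_note):
    """Vocoder: the keyboard decides the pitch outright; the voice only
    supplies the vowel/consonant 'shape' of the sound."""
    return midi_to_hz(keyboard_note)

sung = 430.0                       # a flat attempt at A4
print(autotune_target(sung))       # 440.0 - the nearest correct note
print(vocoder_target(sung, 76))    # ~659.3 Hz - E5, a note the singer never sang
```

That last line is why a keyboardist can “play” chords and slides through a singer’s voice: the sung input never constrains the output pitch, which is exactly what makes those parts trivial on a Vocoder and tedious to program in Autotune.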
Points to take away from this post:
* 4Minute’s “What’s Your Name?” isn’t a shit song because it has Autotune. It’s a shit song because Brave Brothers thought that getting Hyuna’s “Ice Cream” and stripping away all the melody and everything else that made that song decent and replacing it all with computer fart noises was a good idea. That’s a separate issue to Autotune because you can actually get all those exact same noises with a Vocoder if you wanted.
* If your bias is on the commercial end of k-pop in 2013, your bias uses Autotune, or something like it. Period. No ifs, ands, or buts.
* People in the industry laugh at what fans and singers alike think they know about vocal production/staging.
But Autotune isn’t quite perfect yet. It still can’t fix up Bom.
Maybe in 20 years or so technology will have advanced and we’ll get computer software that can make Bom’s voice good enough to the point where she doesn’t have to blow out an entire GD&TOP studio session. We’ll probably have a fix for global warming, overpopulation and the bees-mysteriously-dying-out thing by then, too.
* When I say “Autotune” you can assume that I mean “Autotune plus other pitch-correcting software that acts like Autotune”. I know that if I don’t put this disclaimer here, some smartass cunt will go “but what about [pitch corrector x nobody has heard of]” or some shit. Despite what the guy says in the WavesTune video above, it’s really all the same shit and it all does the same job.