Does your bias use Autotune: or – does a bear shit in the woods?

There’s a lot written on the Internet about Autotune* and its effects on pop music.  Pity almost none of it is factual or even makes any sense.  As both a k-pop fan and a qualified audio engineer, I thought it would be interesting to weigh in on the topic of Autotune for the benefit of you folks reading, because if there’s any group of people out there who don’t understand jack shit about Autotune, it’s k-pop fans.  I’m sorry for all of you who come here for the pictures of boobs, but this is going to be one of those boring educational posts where you probably don’t need your screen cleaner and wet wipes for a change, but you actually might learn some shit.


For both of you still reading, if nothing else, you probably know these two things about Autotune:

1. Autotune is that robot voice thing that works by moving a sung note to the nearest correct note

2. Cher’s “Believe” was the first popular song with Autotune in it.

Of course, you’d be only partially correct with the first point, and completely wrong with the second (yes, the Wiki is wrong).

Let’s tackle the first point first.  Autotune does indeed move your sung note to the nearest correct note.  However, what can be varied by the engineer is the speed and precision of this movement.  The “robot sound” that we all associate with Autotune is what we hear when this process happens immediately and the sung note is instantly moved to exactly the correct pitch.  This setting is called “zero retune speed” (retune speed being the delay between the singer hitting the shit note and the program dragging it out of the toilet and into the vicinity of where it should be).  However, you don’t have to set it that way, and if you were trying to cover up a shit vocal performance, why would you?  If you had an absolutely fucking crap singer on your hands like [insert your bias here], it would make more sense to be more subtle.  If you want to fool someone into thinking [insert your bias here] is a great singer, well, if they hear that “robot voice”, they’ll know the jig is up, right?  Better for them to think that they’re hearing something “natural”, and if you’re trying to get it to sound “fixed but natural”, too much perfection is a bad thing because it betrays the machine at work.  So with Autotune, you can direct the program along the lines of “with an attack time of 150ms, move the incorrect note 90% closer to the correct pitch, with a simulated vibrato variance of 3% at the attack of the note and 5% once the input level drops below a -5dB threshold”.  All of a sudden, [insert your bias here] can “actually sing, like, for realz, yo, omg, like, no Autotune or anything, they must be like, SOOOO TALENTED”.  Don’t believe that Autotune can work like this?  Check out the official page for Antares’ Autotune, which breaks down the key features in the latest version of the software.  (There are also plenty of YouTube Autotune tutorials that demonstrate various facets of this, but I won’t link any because that would be fucking boring.)
Bottom line – you don’t spend precious software development time improving on features like a “humanize function” and “realtime natural vibrato adjustment” if nobody is using them.
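If you’re the kind of nerd who thinks in code, the core idea above is easy to sketch.  This is a toy simulation only (Antares’ actual DSP is proprietary, and every name and parameter here is made up for illustration): the detected pitch glides toward the nearest correct note at a configurable retune speed, and a correction strength under 100% leaves the note deliberately imperfect, exactly the “fixed but natural” trick described above.

```python
import math

A4 = 440.0  # reference tuning

def nearest_semitone_hz(f):
    """Snap a frequency to the nearest equal-tempered semitone."""
    n = round(12 * math.log2(f / A4))
    return A4 * 2 ** (n / 12)

def correct_pitch(frames_hz, retune_ms, frame_ms=10.0, strength=1.0):
    """Glide each frame's detected pitch toward the nearest correct note.

    retune_ms = 0 is the instant snap everyone calls the 'robot voice';
    bigger values make the correction lag behind the singer, which is
    what makes it sound natural. strength=0.9 means 'move the note 90%
    of the way to the correct pitch' instead of nailing it exactly.
    """
    alpha = 1.0 if retune_ms <= 0 else 1 - math.exp(-frame_ms / retune_ms)
    out, current = [], frames_hz[0]
    for sung in frames_hz:
        # target is partway between the sung pitch and the correct one
        target = sung + strength * (nearest_semitone_hz(sung) - sung)
        current += alpha * (target - current)  # one-pole glide
        out.append(current)
    return out
```

With `retune_ms=0`, a sung 450 Hz (noticeably sharp of A4) snaps straight to 440 Hz on the first frame; with `retune_ms` up around 100–200 the corrected pitch drifts there over several frames, and once the real plugin layers simulated vibrato on top, the machine is basically inaudible.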

So, why do we still hear the robot voice?  It’s an aesthetic choice.  Someone thought it sounded “cool” to do that.  It’s no different to a guitarist stepping on a phaser pedal because they think it “sounds cool”.  You may or may not like the sound of a phaser pedal on a guitar, just like you may or may not like the sound of Autotune’s zero retune speed digital snap, but if you’re hearing that noise, it’s because the producer wanted you to hear it.  It’s when you DON’T hear Autotune that you should be more worried about it being used specifically to cover up shitty out-of-tune vocals.  This segues nicely into our second point, which is that Cher’s recording engineers weren’t the first people to use Autotune to fix up some bum notes, but they were the first to set the retune dial to zero (probably by accident while trying to fix some of Cher’s notoriously limp singing) and go “hey, we actually LIKE the sound of that, let’s put it in the final mix that way”.  Then they lied about what effect they were using in the hope that their use of the pitch-corrector would remain a music industry trade secret.  Why would they lie?  Because they had likely been using Autotune’s more subtle settings to fix shitty vocalists’ bum notes for a long time before they worked with Cher, and didn’t want music fans to know that they could do that.  So what WAS the first recording to use Autotune?  We’ll never know, and that’s exactly my point.

Autotune is like Photoshop’s image-editing facilities, but for the voice.  It’s similar in three key ways:

1.  It “fixes” shit

2.  It’s in everything and I mean EVERYTHING

3.  Sometimes it’s deliberately obvious, sometimes it’s accidentally obvious, but when a really skilled practitioner is using Autotune or something like it to hide something, even an expert can’t tell

Watch the following video, and then listen to some of your favourite k-pop songs again.

Someone who only associates Autotune with the robotic-sounding “zero retune speed” setting could be forgiven for thinking that Autotune has somewhat fallen out of vogue in k-pop in recent years, because there are fewer new releases that feature its signature mechanical tone-snapping oscillations.  This would be incorrect: only “zero retune speed Autotune robot sass” has fallen out of vogue – Autotune as a subtle pitch-corrector that fixes fuckups and makes your bias sound like they know what they’re doing when they really don’t is now more prevalent than ever before.  Professional photographers working with models will routinely run ALL their images through Photoshop and make adjustments; it’s become a standard tool of the trade, and the same applies to Autotune and the music business now.  Every vocal track by every artist with any kind of budget behind them is run through the magic fix-it box.  Only independent artists, artists with a bee up their ass about Autotune (plus the power to make the engineers listen), or artists working in styles where precise vocal pitching isn’t required (rap, punk, death metal) wouldn’t use it (although even in these fields some of them do anyway).  Combine this with k-pop’s obsession with making as “perfect” a product as possible and it’s pretty safe to say that there isn’t a single k-pop album in your collection that doesn’t have Autotune smothered all over it like a k-netizen’s cum over a computer monitor showing Dal Shabet’s “Be Ambitious” MV.  Artists in the pop field generally won’t say no to a bit of subtle non-detectable Autotune on their voice for the same reason that models won’t object to a Photoshopper making them look just that little bit skinnier and more toned.

Oh, and because the effects can be made to work in real time, audio engineers can trigger them in live performances too.  Without you even fucking knowing.  So you can bash all those “idol vocals” threads on Allkpop and Onehallyu forums straight up your ass, because none of that shit really matters a goddamn.


Another thing to remember is that before Autotune there was a thing called the Vocoder, which has been in musical use since the 1970s, also re-pitches vocals, and sounds exactly the same as Autotune’s “robot voice” if used in the same way.  A Vocoder works slightly differently, however: rather than adjusting your pitch in real time to a target the software picks itself, it sets your pitch in real time to a pitch assigned by another musical instrument (usually a keyboard), by stamping the character of your voice onto whatever notes the keyboard plays.  This allows a singer to “sing” ANY note on a keyboard, even notes outside of their vocal range, and even chords.  And it sounds just as robotic-as-fuck as Autotune does, so it’s easy for the untrained ear to confuse the two.  Vocoder is what Kraftwerk, Daft Punk, and J-poppers Perfume use in all their shit, but if you want a k-pop example, here you go:

Programming all those vocal slides and chords would be a pain in the ass with Autotune (but not impossible), yet very easy with a Vocoder – you just get a keyboardist to plug in, play that stuff, and sync the vocals to it – it would take about as long to do as the song takes to listen to.
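The whole difference between the two boils down to where the target pitch comes from.  Here’s a minimal sketch of just that control-flow difference (all the names are invented; a real vocoder actually works by imposing the voice’s spectral envelope onto the keyboard’s signal, which this toy code doesn’t attempt):

```python
import math

def midi_to_hz(note):
    """MIDI note number to frequency (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

def autotune_target(sung_hz):
    """Autotune derives its target from the singer: whatever
    equal-tempered semitone the sung note landed nearest to."""
    n = round(12 * math.log2(sung_hz / 440.0))
    return 440.0 * 2 ** (n / 12)

def vocoder_targets(sung_hz, held_notes):
    """A vocoder-style setup ignores where the sung note landed: the
    targets are whatever the keyboardist is holding, so the singer can
    'hit' any note, or a whole chord at once, regardless of range."""
    return [midi_to_hz(n) for n in held_notes]
```

Hand `autotune_target` a flat 430 Hz and it pulls the note back up to 440; hand `vocoder_targets` a C major chord (MIDI 60, 64, 67) and you get three pitched copies of the voice at once, which is why chords and slides are trivial for a keyboardist but a programming chore in Autotune.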

Points to take away from this post:

*  4Minute’s “What’s Your Name?” isn’t a shit song because it has Autotune.  It’s a shit song because Brave Brothers thought that getting Hyuna’s “Ice Cream” and stripping away all the melody and everything else that made that song decent and replacing it all with computer fart noises was a good idea.  That’s a separate issue to Autotune because you can actually get all those exact same noises with a Vocoder if you wanted.

*  If your bias is on the commercial end of k-pop in 2013, your bias uses Autotune, or something like it.  Period.  No ifs, ands, or buts.

*  People in the industry laugh at what fans and singers alike think they know about vocal production/staging.

But Autotune isn’t quite perfect yet.  It still can’t fix up Bom.

Maybe in 20 years or so technology will have advanced and we’ll get computer software that can make Bom’s voice good enough to the point where she doesn’t have to blow out an entire GD&TOP studio session.  We’ll probably have a fix for global warming, overpopulation and the bees-mysteriously-dying-out thing by then, too.

*  When I say “Autotune” you can assume that by this I mean “Autotune plus other pitch-correcting software that also acts like Autotune”.  I know that if I don’t put this disclaimer here some smartass cunt will go “but what about [pitch corrector x nobody has heard of]” or some shit.  Despite what the guy says in the WavesTune video above, it’s really all the same shit and it all does the same job.

12 thoughts on “Does your bias use Autotune: or – does a bear shit in the woods?”

  1. I don’t find any of your articles boring at all. As a dumbfuck with absolutely no musical knowledge whatsoever I found this, the Shure Super 55 article and pretty much everything else music-related absolutely fascinating. Don’t tell anyone at AKF this, but I think you’re WAY too good for them. You should be writing for Time magazine or some other fancy-pants, famous magazine where your opinion and information will reach millions (although you might not get enough freedom to write what you want or legitimately valuable information, so maybe that isn’t such a good idea). I’m often able to use minuscule snippets of information I find on this site to out-wit fangirls and trolls despite the fact that even I myself mainly have no fucking clue what I’m talking about. As fun as it is to trick people into thinking you know what you’re talking about, I find it much easier to use information from this site to back up my arguments and make me sound all intelligent like.
    Keep writing! You’re doing a great job! You’re actually kinda inspiring! Haha!

  2. Thanks a lot for this great, educational article on autotune (and the rest it entails). I had no idea to which heights a singer’s voice can be, positively, modified. There isn’t a lot to say here, but thanks again. (Also for the little giggle at the end, sorry Bom, you looked like a nice person there but your recording session was, if anything, thought-provoking…)

    Another interesting point would be autotune being used in live performances – looking back, it would explain why there seems to be a large gap between idol singers singing a few words, in measly quality, on variety shows or radio broadcasts (without music and thus backup) and showing a greatly increased, though not overly great, performance when singing their entire songs on a music show or such (with music). On the other hand, I’ve come to a point where I have difficulties telling singing idols and lip-syncing idols even remotely apart.
    Fans mostly seem to assure you of their biases singing live but I often notice a disparity between the length of a note and the mouth/microphone movement of the person singing. I almost never hear breathing sounds, or a wrong note or the “live sound” of a voice. A very good example would be f(x) but also SNSD. The few times I was sure Taeyeon (and a few others) was actually singing live I felt her voice sounded quite pressed and I kept holding my breath for her to somehow struggle through her lines. Maybe she was ill, maybe it was one of her bad days, but it does get one thinking how all these perfect-sounding EXOs and whatnot could possibly be singing live without even showing a single flaw. Or are they THAT great?
    But then on the other hand we’ve groups such as MBLAQ and INFINITE, whose members seem to sing live (partly?), but where you evidently hear breathing noises (they are dancing after all) and many notes going completely off. Are those guys simply incapable? And are they already singing with autotune?

    I can’t tell, and I am confused. What’s your take on this?

    BTW, you are a registered member over at OH, aren’t you? I think, just a couple of days ago, I came across a post from you about how the medium used to transmit a performance greatly influences the audience’s perception of the stage-presence of a singer and your point being that watching an artist on TV doesn’t measure up. You cited your experience working in the music industry and at a gazillion concerts as your source for your information. That’s you, isn’t it?

    • Yeah that’s me.

      A proper answer to your question would be lengthy and require another blog of its own, but the main point is not to worry about it so much. Singing quality simply isn’t that important in this style. People should forget about who can sing and who can’t and just enjoy the music (or not).

  3. Pingback: How To Stay Sane In the Kpop Fandom part 4 | saphirya

  4. Hi how are you?
    As somebody who is both a musician and a ginormous fan of Perfume, I can honestly tell you that Yasutaka, Perfume’s producer, is using Antares on most of Perfume’s tracks where there’s vocal processing going on (which is pretty much all of them except for their more recent stuff). Especially during their Game era, he definitely abused Antares all to hell. Vocoder…nah, not so much.
    Highly informative article nonetheless and I commend you for actually showing some insight into how things are done with music production. I do have a question for you though… when performers are onstage live, such as Perfume, I notice that it seems they blend their actual singing vocals with vocals being played out as part of the song. Then every now and then they will shout or sing something and you can hear it really loud… are my ears betraying me? Is it ALL lip synced? I saw Perfume recently and I swore I heard them singing live but also heard the backing vocals at the same time. Just curious if you know what’s going on in this respect? Thanks again for a great article.

    • The way I hear it, Antares is being used but in a vocoder-like way, i.e. the “correct note” isn’t sourced from the singer’s approximate note, but from an external source. It seems too on-the-button to be vocal-approximation-sourced Autotune. I could be wrong.

      I think Perfume do what k-pop groups do – they sing over a backing track which contains their own vocal tracks – rather than an instrumental mix.

      • ah, I think you’re exactly right about how Yasutaka is using Antares in a Vocoder-style fashion. I know he does some stuff with Vocoding in Capsule. There’s a lot of footage online, I think, to support this.
        As for answering my question, that makes total sense to me… it’s just sometimes really weird to hear in a live setting you know? I love how in Asian countries this is completely acceptable but in the US you are criticized mostly for not singing live. I prefer to hear things a bit in the middle… kind of perfect, but kind of not, you know?

  5. Great and informative article.

    Just discovered your blog and am enjoying the hell out of it.

    Another parallel with photoshop is that if the original you start with is really good, or at least in the ballpark, it’s really easy to get a natural, good-looking result with some touching up that leaves people guessing–or not even suspecting–that photoshop was used.

    My understanding is that autotune is pretty much the same. If you start with a tone deaf singer who would have a hard time matching pitch with a vacuum cleaner engine in ten tries in a recording studio, you’re probably not going to be able to get a clean, natural sounding result even with an expert audio engineer working overtime with autotune. Bottom line: The singer’s talent still matters, which is why a lot of the groups populated with lesser talents in the singing department tend to do a lot of singing in unison over a very limited range of notes.

  6. In talking about the parallel with photoshop, I forgot to mention the key thought, which is that if you start out with a really bad image, no amount of tweaking and fiddling with photoshop functions is going to get a result that looks natural…and this is what leads to the similar observation with autotune. Autotune isn’t going to fix a really bad singer, unless you get that bad singer to agree to stick with a very limited range of notes that he/she can handle without majorly f*cking up. Otherwise, you may as well go with the zero-dialing approach and call it “neo-funk android pop” or something like that, so you can start moving product.

    • Autotune-type products give a lot more control than Photoshop does though, that’s where the analogy falls apart somewhat. It gets pretty technical from here and I don’t feel like going into it in a comment, but it’s possible that this post may see a sequel someday.
