Kpopalypse’s music theory class for dumbass k-pop fans: part 12 – signal flow

It’s the return of Kpopalypse’s music theory class!  This time around, we’re taking a look at signal flow!

A lot of readers probably wouldn’t consider signal flow to actually be part of music theory, and in the traditional sense of what music theory means, they’d be correct.  However, I tend to interpret “music theory” as “things that allow you to create music in practice”, and while once that was considered by many people in the western world to just be “lines and stems and circles and how to interpret all that stuff”, in the modern age the game has changed drastically.  The average person who wants to become a songwriter these days is probably not going to start their songwriting journey by writing ideas down with a pen and lined paper – they’re more likely to start by recording sounds into a computer using a digital audio workstation (DAW) and then trying to manipulate those sounds into their own creative vision.  Therefore it makes sense to cover the aspects of theory that apply to this process.

The elements of DAWs vary between one DAW and the next, and it’s not really worth going into individual DAWs and how they work in detail – there’s plenty of instructional content for that, plus every DAW ever comes with a hefty instruction manual that you could read if you wanted.  It’s far more important to understand the principles of DAW operation, and to do this it really helps to understand signal flow, for a few reasons.  Firstly, the principles of signal flow are universal no matter what context you’re looking at sound in.  To take the simplest possible example:

Input = sound going into a device

Output = sound coming out of a device

Observe the video with Loona’s Chuu singing “Heart Attack” for a radio performance, chosen because it’s one of the rare radio performances that exist where we can definitely confirm that signal flow is behaving as commonly expected (i.e. nobody is cheating by “lipsyncing” or “miming” – two words which really mean “deceptive signal flow”).  One can perceive the situation simply, as:

This much is easy for anyone to understand.  However, the business of making and performing music comes with the task of managing multiple inputs and outputs, and that is certainly the case in the above video, which actually has a complicated array of inputs and outputs being managed in various ways, many of which may not be completely obvious. 

Let’s break down the scene of Chuu doing a performance at the radio station, and demonstrate the types of signal flow involved.  Doing this will give you a good overview of signal flow in general.  Obviously we’re starting with the basics here for the purposes of instruction and I’m sure the usual pieces of shit who hate-read my writing are already drafting their “Kpopalypse he’s such a condescending cunt” essays, but bear with me as this post will start simply and gradually increase in complexity, like a good IU ballad, or a fanfiction about Wonho’s abs.

So once the sound from Chuu enters the microphone and into the big yellow cord, that cord then goes into a plug in the brown wall to her right.  On the other side of that wall, behind the grey door, is the recording control room.  This control room has a panoramic corner window so whoever is in there can see both the stage and the DJ desk.

The microphone that Chuu is using isn’t the only one that’s feeding into the control room.  There are many microphones in this scene, and each one of them would also be connected to the control room.  In the below picture I’ve circled all the microphones in view.

In this shot we have:

  • Two microphones for the DJs (close by, on the left)
  • Three microphones for the guests
  • A fourth guest microphone sitting on the table, which is unplugged
  • A vocal microphone for Chuu
  • Three more vocal microphones and one overhead microphone, sitting on stands (at the back of the room, our left)

Although only three microphones are set up for the guests, the DJ desk actually has room for eight guests (presumably to seat all of OT8 Girls’ Generation) – you can see each input, they’re the wedge structures.  Each of these inputs is wired under the desk and also out to the control room.

In the control room, we need a device that can handle all of these inputs.  Of course not every studio session at the radio station is going to involve two DJs, eight guests, four onstage performers plus something else you might want to mic at a distance.   In this case, only three of these inputs are actually in use. I’ve circled the inputs below and also traced the cables that are in use on the desk.

In the control room, there will be another cord for each one of these microphones (or potential microphones) that comes out of the wall, and goes into a control desk, which might look something a little like this analog mixing desk:

Or it may be a digital mixing desk, like this one:

Both of these mixing desks would be equally capable of doing the job required.  However for the purposes of this post we’re going to look at analog mixing desks.  The reason why is that analog mixing desks are very much “what you see is what you get” – everything is just a knob, a switch or a cable, you don’t need to dig through menus to find settings, so it’s much easier to explain how signal flow works when looking at an analog desk as the desk itself basically lays everything out cleanly for you, unlike a digital desk where signal flow elements can be buried in menus that are not obvious at first glance.  Plus, most analog desks conform to pretty much the same design principles, whereas there’s no real fixed standard for digital desk menus, so analog desks are easy to learn and once you know one, you know most of them, whereas with digital desks you usually need to learn each one separately.  Also the principles of digital signal mixing are hard to explain if you don’t understand analog mixing anyway, as digital desks are generally designed with analog layouts as the starting point, so here we go with analog desks.  What’s happening to Chuu’s voice and all those other voices in the radio station?  If you had to get that vocal performance in (input) and get a radio version of “Heart Attack” out to the listener (output) – how would you do it?

The first thing to do would be to plug each microphone signal, once it arrives into the control room, into a separate channel on the desk.  If the entire desk is this:

Then just one channel is one vertical strip on the desk, which is this:

Controls vary on mixing desks but an important principle to remember is that generally speaking, inputs are at the top, and signal flows from the top of the channel strip to the bottom, hitting each different knob or button on the way down.  So always think of the signal going from top to bottom when looking at these controls.
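For the code-inclined, the top-to-bottom idea can be sketched in a few lines of Python.  Everything here is made up for illustration – real desks have more stages – but the point is that order matters: the signal hits each stage in sequence on the way down.

```python
# A toy model of a channel strip: the signal enters at the top and passes
# through each stage in order on its way down to the fader.
# All stage names and numbers are invented for illustration.

def gain(sample, amount):
    # amplify the incoming signal
    return sample * amount

def high_pass(sample):
    # stand-in for a real filter; here it just passes the signal through
    return sample

def fader(sample, level):
    # the volume fader at the bottom of the strip
    return sample * level

def channel_strip(sample):
    # top-to-bottom order matters: gain first, then filters/EQ, then fader
    stages = (lambda s: gain(s, 2.0), high_pass, lambda s: fader(s, 0.5))
    for stage in stages:
        sample = stage(sample)
    return sample

print(channel_strip(1.0))  # 1.0 * 2.0 * 0.5 = 1.0
```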

Starting at the top, what do all these things do?  Let’s RTFM and find out.  In this case we’re going to assume that we’re looking at Channel 1 on the far left, highlighted above, and that this is the channel where Chuu’s microphone is plugged in.  Here’s the channel, from top to bottom, with what everything does (it’s actually explained here, and thanks to those guys for scanning an instruction manual so I didn’t have to, but I’m going to explain it in a way which hopefully makes more actual sense for someone who likes Chuu but doesn’t typically spend large chunks of their life reading mixing desk manuals).

 

1. Direct output.  This is for if you want to take a separate feed of this channel ONLY, and send it somewhere special (for instance, if you wanted to make a Chuu MR removed video).  We’re not going to be weird and do this, because we have lives, so you can ignore this bit.

2. The “mic” input.  This is where Chuu’s microphone cord ends up plugging into the desk after it comes out of the wall in the recording control room.  The input type is standard three-pin XLR which is industry standard for microphones.

3. Phantom power on/off switch.  Some types of microphones (usually expensive studio condenser microphones) require 48 volts of “phantom power” in order to work.  Chuu’s microphone is a dynamic style vocal microphone and does not require this.  However the DJ’s mics and guest microphones are condenser style, so it’s possible that they might.  (More on the differences between microphones here.)  Phantom power is not harmful to mics that don’t require it (except some very old pre-WWII models nobody uses anymore, not even Wonder Girls) but you wouldn’t turn it on if you didn’t need it as it’s just a waste of power.  

4. The “line” input.  Used for plugging in instruments and electronics rather than microphones.  If Chuu was actually a piano keyboard, or a drum machine, or the Chuu+ twitter robot, she would be plugged in here.

5. Channel insert.  Sometimes you want to put something directly in the signal path to affect whatever the signal is.  Perhaps we want to put a compressor on Chuu’s voice.  We can plug the compressor into here using a TRS cable (the same type of plug used for stereo signals, except here the two conductors carry a send and a return instead of a left and a right), and then all of Chuu’s voice will have to go through the compressor.  The signal comes out of this hole via one side of the cable, into whatever the device is, and then back into the same hole through the other side of the cable and keeps going.  There are also some other creative uses of a channel insert.  More info here if you’re interested.  For the purpose of this example (Chuu’s voice), we won’t be using this channel insert.

6. Channel gain. This amplifies the signal.  How much to amplify?  You want to give the signal enough gain so that when you use the faders at the bottom of the channel (we’ll get to those, further below) putting those faders at around the 0dB range and the little fuzzy dotted part just above and below the 0 section of the fader gives you a nicely loud, clear Chuu.  The amount of gain needed will vary a bit based on the loudness of the original signal, and also depending on whether the signal is a microphone input or a line input signal.  Microphone signals are quieter than most signals from electronic devices, so microphone signals almost always require more gain than line signals.
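If you want to know what those dB markings actually mean mathematically, gain in decibels converts to a plain multiplier with a standard formula – this is general audio maths, not specific to any desk:

```python
import math

# Decibels to linear amplitude (voltage) gain and back:
#   linear = 10 ** (dB / 20)
#   dB = 20 * log10(linear)

def db_to_linear(db):
    return 10 ** (db / 20)

def linear_to_db(linear):
    return 20 * math.log10(linear)

# A quiet microphone signal might need around +40dB of gain (a 100x
# amplitude boost), whereas a line-level signal needs far less.
print(round(db_to_linear(40)))    # 100
print(round(db_to_linear(6), 1))  # 2.0 -- roughly a doubling of amplitude
```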

7. 100Hz high pass filter on/off switch.  Sharply cuts all bass frequencies below 100Hz (in other words, lets all the high sound pass through).  Why would someone use this?  There are two reasons.  Firstly, when recording vocals, a lot of people have a lot of bass in their voice, especially when they pronounce hard syllables like the letter “p”, which can produce a bass-heavy burst of air which is very unpleasant.  They may also breathe into the microphone heavily.  Sometimes you can use a pop shield to stop the air burst, but in the case of recording a rampaging Chuu who is unlikely to stand still in front of a pop shield on a radio broadcast, this may not be an option.  Turning the high pass filter on will stop those deep Chuu air gasps from destroying the entire mix with bass-heavy wind noise.  The second reason is that some electronic instruments may be poorly electrically grounded or just have cheap wiring in general, and may emit a 50Hz or 60Hz hum when in use – turning the filter on will eliminate this hum.  Of course the filter will also eliminate any other more desirable bass frequencies, so you wouldn’t use this filter for an instrument that you wanted to hear really low bass sounds from.  We will definitely use it for Chuu, however.  Chuu can’t sing below 100Hz anyway (trust me, she can’t) so there’s no reason to roll the dice and not cut the ultra-low frequencies off.
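For the curious, here’s a rough sketch in Python of what a high pass filter does to a signal.  This is a simple first-order digital filter, not the actual analog circuit in any desk, but the behaviour is the same in spirit: a 50Hz hum below the cutoff comes out much quieter, while a 1kHz tone sails through.

```python
import math

# Minimal first-order high-pass filter at 100 Hz -- an illustration of the
# principle only, not the (steeper) circuit in a real mixing desk.

def high_pass(samples, cutoff_hz=100.0, sample_rate=48000.0):
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        # classic one-pole high-pass difference equation
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

def tone(freq, n=4800, sr=48000.0):
    # a pure sine wave at the given frequency (0.1 seconds of audio)
    return [math.sin(2 * math.pi * freq * i / sr) for i in range(n)]

# Measure the peak level after the filter has settled (second half only).
hum = max(abs(s) for s in high_pass(tone(50))[2400:])
voice = max(abs(s) for s in high_pass(tone(1000))[2400:])
print(hum < 0.6 < voice)  # True: the 50Hz hum is attenuated, 1kHz passes
```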

The next lot of controls are all for frequency equalization (EQ).

HF – high frequency EQ.  Provides a cut or boost of up to 15dB at 12kHz.

HM – high mid frequency EQ.  You can adjust both the amount of cut or boost, AND the actual frequency to cut/boost here.  Provides a cut or boost of up to 15dB anywhere between 500Hz and 15kHz.  Note that when cutting or boosting a certain frequency, you’re not boosting just that frequency on its own – there’s a fairly large “effective area” on either side of that frequency that is also affected by the cut or boost.  For instance if you boost at 800Hz, 750Hz will be quite strongly affected as well, but 700Hz not so much, 650Hz even less, and so on.
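That “effective area” can be sketched as a bell curve in log-frequency space.  The exact curve shape depends on the desk’s filter design, so this Gaussian is purely an illustration of the principle, not the real circuit:

```python
import math

# Sketch of a peaking EQ's "effective area": a bell curve centred on the
# boosted frequency, measured in octaves. Width and shape are invented
# for illustration; real EQ circuits differ.

def boost_at(freq_hz, centre_hz=800.0, boost_db=15.0, width_octaves=1.0):
    distance = math.log2(freq_hz / centre_hz)  # octaves away from the centre
    return boost_db * math.exp(-(distance / width_octaves) ** 2)

# The boost tapers off as we move away from the 800Hz centre:
for f in (800, 750, 700, 650, 400):
    print(f, round(boost_at(f), 1))
```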

LM – low mid frequency EQ.  Works exactly like HM, but for lower frequencies.  Provides a cut or boost of up to 15dB anywhere between 35Hz and 1KHz.

LF – low frequency EQ.  Provides a cut or boost of up to 15dB at 80Hz.

EQ IN – Equalizer on/off switch.  This turns on or off all the equalisation controls, regardless of what positions any of the other dials are set to.  It’s a good idea to leave the equalizer off if you don’t intend to use it, as fewer things acting on the signal as it passes down the channel means less noise in your signal.  The amount of noise from one EQ circuit is incredibly small, but over many channels on a desk, the noise can add up.  We will assume that we are not equalizing Chuu, as a Chuu is never equal.

This next section of the channel is the auxiliary (AUX) section.  Stay with me here as we are now getting to a very important concept of signal flow, one which makes mere mortals fall to their knees and weep.  Only the strong who stan Loona will survive this section of the post and gain understanding of auxiliary trufax.

All mixing desks that are above the very basic small models have extra channels called auxiliary channels.  An auxiliary channel takes a copy of the signal, in this case Chuu’s voice, and sends it somewhere else.  It doesn’t interfere with the original signal, which still exists and still travels down the original channel uninterrupted.  For visual learners, you will want to think of the downward-traveling signal being split off to the side, like this:

So why would we split off a copy of the signal?  Well, one big reason is for the performers to hear their own voices, also called “foldback”.  You know those funny wedges you sometimes see on rock concert stages, and those sexy in-ear monitors that your favourite k-pop idols wear?  An audio engineer running a mixing desk can send the auxiliary output through to these monitors so the performers can hear themselves.  This off-to-the-side signal actually then gets fed through to the “auxiliary out” on the top right of the desk, and this is then a cord that would go to foldback wedges, or to a wireless kit connected to an in-ear monitor etc.

You may be wondering however – why wouldn’t we just play them the main mix instead, why do they need a special signal?  The reason is that different performers may need to hear different things in order to perform well.  For instance, Chuu might want to hear her own voice in her monitors clearly so she can nail the pitch accuracy, but she probably doesn’t want to hear the idiot banter of the DJs which might be really offputting and stop her from hitting the notes correctly, especially if those losers start trying to sing along.  So we would send Chuu a copy of her own vocal to listen to as she sang, but we wouldn’t send her any of the DJs’ vocals while she was singing.  This would suit Chuu, but it might not suit another performer.  Let’s use a different example and pretend for a moment that we were mixing A-yeon’s drumming for a live concert, where she wore in-ear monitors and drummed along to a song on a recording that was being played to her in those monitors:

In her in-ear monitors, A-yeon is definitely going to want to hear the recording that she’s drumming along to, nice and loudly, so she can stay perfectly in time with it – however she is probably not going to want to hear all that much of her own drumming, because her own drums are already close to her and already loud, and if they are too loud they would just get in the way of her hearing the drums on the backing track.  Maybe she’d like none of her own drums in the monitors, or maybe she’d like just a small amount, but she’ll definitely want the backing track very high so she can hear it over the sound of her own drumming.  However the audience doesn’t want this – the audience wants to hear A-yeon’s drumming louder than the drumming on the backing track, because they’re there to hear A-yeon’s drumming specifically rather than a backing track they can just listen to at home.  If we gave A-yeon the same mix that an audience would like, it would probably throw her off her drumming a bit, and if we gave the audience the kind of mix that A-yeon needs to drum correctly, the audience would just get annoyed.  That’s why auxillary channels are useful – each performer can be given their own customised mix of the song that suits their special needs the most and allows them to perform the most effectively, independent of the mix that is being delivered to the listener.
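The A-yeon situation boils down to: same input channels, two different sets of send levels.  A sketch, with all the numbers invented:

```python
# Two different mixes built from the same channels: one set of aux send
# levels for A-yeon's in-ears, one set of fader levels for the audience.
# All level values here are illustrative, not real settings.

channels = {"drums": 1.0, "backing_track": 1.0}

# A-yeon: backing track loud, her own drums barely there
aux_sends_ayeon = {"drums": 0.1, "backing_track": 1.0}
# Audience: A-yeon's drums up front, backing track tucked underneath
main_mix_levels = {"drums": 1.0, "backing_track": 0.4}

ayeon_monitor = sum(channels[c] * aux_sends_ayeon[c] for c in channels)
audience_mix = sum(channels[c] * main_mix_levels[c] for c in channels)
print(round(ayeon_monitor, 2), round(audience_mix, 2))  # 1.1 1.4
```

The two sums never touch each other – which is the whole point of the auxiliary channel.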

Going back to the channel strip above, you’ll see that there are six auxiliary channels labelled 1 through to 6.  The first two are also labelled PRE, the last two are labelled POST, and the middle two have a button where either PRE or POST can be selected.  This stands for “pre-fader” and “post-fader”.  A “pre-fade” mix sends the signal to the auxiliary channel straight away, i.e. right where the auxiliary volume control is, before it hits the channel’s volume fader.  This is the setting that you would use for foldback and in-ear monitoring, because when you turn the main volume of Chuu or A-yeon up and down, you don’t want the volume to change in her in-ear monitors – as the entire point of the auxiliary, as we’ve just discussed, is that the two volumes (the volume of the auxiliary vs the volume of the main mix) act completely independently.  A “post-fade” mix instead waits until after the main volume fader acts before taking a signal off to the auxiliary, which can be useful for other things, such as sending to an effects circuit, where you wouldn’t actually want more of the signal to go to the effect than what was in the original fader mix, because you want the volume of the effect and the volume of the main signal to stay relative to each other in a certain ratio.
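If that still hurts your brain, here’s the pre-fade vs post-fade difference as a few lines of Python.  All names and numbers are illustrative:

```python
# Sketch of pre-fade vs post-fade aux sends on one channel.

def channel(sample, fader_level, aux_level, pre_fade):
    """Return (main_out, aux_out) for one channel."""
    if pre_fade:
        aux_out = sample * aux_level       # tapped BEFORE the fader
    main_out = sample * fader_level
    if not pre_fade:
        aux_out = main_out * aux_level     # tapped AFTER the fader
    return main_out, aux_out

# Pre-fade: pulling the fader to zero doesn't touch the in-ear mix.
print(channel(1.0, 0.0, 0.8, pre_fade=True))   # (0.0, 0.8)
# Post-fade: the send follows the fader, keeping the ratio constant.
print(channel(1.0, 0.0, 0.8, pre_fade=False))  # (0.0, 0.0)
```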

So I’m glad that’s clear, then.  Moving on.

PAN – stereo panning.  This decides if the signal is going left, right, in the middle, or anywhere in between.  Allows the signal to be assigned to the L or R bus… we’ll get to busses in a moment.
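Panning can also be sketched in code.  Many desks use an “equal power” pan law so the apparent loudness stays roughly constant as you sweep from left to right – whether this particular desk does is an assumption, but the principle looks like this:

```python
import math

# Equal-power pan law: one mono signal split between the L and R busses.
# position: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.

def pan(sample, position):
    angle = (position + 1) * math.pi / 4
    return sample * math.cos(angle), sample * math.sin(angle)

left, right = pan(1.0, 0.0)
print(round(left, 3), round(right, 3))  # 0.707 0.707 -- centre, equal in both
```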

MUTE – channel mute.  Silences all signal in the channel.  This button has a light next to it so it’s very obvious if it’s on.  That’s because if you’ve got a mute button on by mistake it’s the kind of thing you really want to be aware of as quickly as possible.  We’ve all been to “that gig” where you couldn’t hear the vocalist for the first 30 seconds of the first song because someone didn’t realise that they forgot to unmute their microphone.  The light stops you from being “that person”.

PFL – pre fade listen switch.  Allows you to listen to the signal before it gets to the volume fader – it’ll pump the signal through to your desk’s headphone output, and it’ll also show up on the visual meters, whether the channel is muted or not, and whether you’ve got the main volume up or not.  This is a way to check whether signal is actually coming through the channel without actually sending a noise through the desk that anybody can hear.  This is generally used in conjunction with the mute button to do a silent “line test”, which is why the two buttons are close together.  Keep in mind though that the PFL button doesn’t actually mute the sound itself – that’s what the mute button (or having the volume fader down) does!  So don’t make that mistake.

Finally, we get to the volume fader at the bottom of the channel: the big sliding thing which decides how loud Chuu is going to be in the main mix.  There are a couple of lights at the top to assist in acquiring optimal Chuu:

PK! – peak light.  This is a red warning light that only flashes when the desk is a few dB shy of having so much level pumped into it that it will distort any output.  Although there are a few edge cases where this can sound cool, generally speaking you do not want to distort the output of a mixing desk.  If you want distortion there are generally better ways.

SIG – signal light.  Lights up when the signal to the fader is within a generally desirable threshold.  The louder you go, the brighter this light gets.  Ideally when mixing, you want nice and bright green signal light but no red peak light.

There’s also some “assigning” switches:

L-R – left-right assign button.  Sends any signal after the fader over to the left and right group busses.

M – mono bus assign button.  Sends any signal after the fader over to the mono bus.

1-2 – group 1-2 assign button.  Sends any signal after the fader over to the group 1-2 busses.

3-4 – group 3-4 assign button.  Sends any signal after the fader over to the group 3-4 busses.

Okay, so you thought the shit in this post you didn’t understand stopped with auxiliaries – well guess what, it’s now time to explain busses.  If you’re mixing in mono then you would assign all your channels that you were using to the M bus, and then the M fader on the bottom right of the desk (third from the end, outlined in red below) becomes your master volume.  However if you wanted to mix in stereo (which you definitely would for Chuu on the radio, because while Chuu herself is fine in the middle of the mix, we want the saucy “Heart Attack” backing track in its full stereo glory) then instead you would press L-R and assign to the stereo channels, and the two stereo faders together in the very bottom right corner (outlined in green below) are now your master mix.  Simple, right?

“But wait a second…” I hear you thinking with my super Boram ESP powers “…what are the group buttons for?  Why would we want to just send the faders to some random-ass groups (outlined in blue, above) that aren’t even the main output, aren’t we then just going to have to get that signal over to the main output anyway?  What’s the point of having a special group?”  To answer this, we need to once again consider A-yeon’s needs carefully, as one might.

Let’s say we’re putting microphones on A-yeon’s drum kit for a special performance, where she’s playing drums for Blackpink.  The final setup involves several microphones, one for each of these locations:

  • Snare drum
  • Bass drum
  • Small rack tom
  • Medium rack tom
  • Large floor tom
  • Hi-hat cymbal
  • Ride cymbal
  • Overhead crash cymbals left
  • Overhead crash cymbals right

That’s nine microphones, covering the full drum kit.  However there’s also a bunch of other microphones and instruments too:

  • Jennie’s vocal
  • Lisa’s vocal
  • Rose’s vocal
  • Jisoo’s vocal
  • A bass player
  • A guitar player
  • A keyboardist
  • A machine that plays samples and backing tracks
  • The audio feed for the videos that play during between-song costume changes
  • A backing vocalist who definitely isn’t there to help anyone with the high notes

Then let’s say, once you’ve set up these microphones and got all your levels perfectly balanced, so everything is at exactly the right volume compared to everything else, and everything is perfectly ready to go, someone says “can we have the drums up a bit louder in the mix please”.  You ask “which drum?” and the reply comes back “all of them!”  Now you have to adjust nine individual faders, all by exactly the same equal distance, without ruining your perfectly balanced mix.  In a best-case scenario, you can do this but it’s a time-consuming pain in the ass; in a worst-case scenario you might fuck up remembering what you did or didn’t do halfway through and ruin your entire balance in the process.  It would be far easier to just control the entire drum mix with only one or two faders, right?  Well, you could use the master faders – but then that would affect the volume of everything else too, and you only want to adjust the drums, not the entire mix including Jennie, who is probably going to get the shits with you.  The solution – assign only the drums to groups 1-2, and assign group 1-2 to the main mix.  Now, when all the drums need a total volume adjustment you can use the group faders for group 1-2, and when only one drum requires adjustment, you can just use the fader for that one individual drum.  Happy A-yeon, happy crowd, happy you.
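To make the group idea concrete, here’s a sketch where nine drum channels sum into one group fader, so “drums up a bit” becomes a single move instead of nine.  All the level numbers are invented:

```python
# Nine drum channels, each with its own carefully balanced fader level,
# all summed through one group fader. Levels are illustrative only.

drum_levels = {
    "snare": 0.8, "kick": 0.9, "rack_tom_1": 0.6, "rack_tom_2": 0.6,
    "floor_tom": 0.7, "hihat": 0.5, "ride": 0.5,
    "oh_left": 0.6, "oh_right": 0.6,
}

def drum_group_out(samples, group_fader):
    # each channel's own fader is applied first, then the single group fader
    return sum(samples[name] * level
               for name, level in drum_levels.items()) * group_fader

samples = {name: 1.0 for name in drum_levels}
quiet = drum_group_out(samples, 0.5)
loud = drum_group_out(samples, 1.0)
print(loud / quiet)  # 2.0 -- the whole kit rebalanced with one fader move
```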

We’ve just talked about mono channels, how to pan and mix them, but in the case of the Chuu radio spot, what are we actually mixing?  Let’s assign channels to microphones!  There’s one DJ and three guests in the video, so it makes sense to do it this way:

Channel 1 – Chuu
Channel 2 – DJ mic 1
Channel 3 – Guest 1
Channel 4 – Guest 2
Channel 5 – Guest 3
Channel 6 – backing track
Channel 7 – backing track

Nice and simple, right?  But why are we assigning two channels to the backing track?  Because it’s in stereo, so we plug it into two different channels, pan one channel hard left and the other hard right, and there’s our nice stereo mix just like the original recording.  Our other option would be to use the stereo channels on the desk, which actually do the panning job for us, so we only have to move one fader for the backing track.  It really doesn’t matter a lot, as moving two faders in sync isn’t really any harder than moving one – it’s not like trying to move nine a certain amount.  But it’s good to know many desks have this option, and it does save room on the desk when you’re running lots of channels at once.  (The stereo channels are outlined in blue below and have the blue fader handles.)  Also, these stereo channels are actually specifically designed with backing tracks in mind – that’s why there’s no mic input on them, as a backing track from a CD, MP3 player etc would be a line input device, so instead you can see normal jack inputs as well as RCA inputs at the top of the channel, because that’s what most playback devices use.  So it can be a user-friendly option to go this way.

So, given that this is a radio station, and there’s all sorts of other shit floating around in the studio which isn’t even being used here, the real distribution of channels (assuming they were using the specific desk depicted in this environment, which is statistically unlikely, but will do for the purposes of this post) would probably be something like this:

Channel 1 – DJ mic 1
Channel 2 – DJ mic 2 (off)
Channel 3 – Guest 1
Channel 4 – Guest 2
Channel 5 – Guest 3
Channel 6 – Guest 4 (off)
Channel 7 – Guest 5 (off)
Channel 8 – Guest 6 (off)
Channel 9 – Guest 7 (off)
Channel 10 – Guest 8 (off)
Channel 11 – Vocal mic 1 (Chuu)
Channel 12 – Vocal mic 2 (off)
Channel 13 – Vocal mic 3 (off)
Channel 14 – Overhead mic (off)
Channel 15 (stereo) – Chuu’s backing track
Channel 16 (stereo) – The DJ’s CD player/Computer/MP3 player etc

If we wanted to be fancier we could even mix the use of the stereo channel with a group assignment, and the full signal flow picture could look like this:

Channel 1 – DJ mic 1 – assign group 3-4
Channel 2 – DJ mic 2 (off)
Channel 3 – Guest 1 – assign group 3-4
Channel 4 – Guest 2 – assign group 3-4
Channel 5 – Guest 3 – assign group 3-4
Channel 6 – Guest 4 (off)
Channel 7 – Guest 5 (off)
Channel 8 – Guest 6 (off)
Channel 9 – Guest 7 (off)
Channel 10 – Guest 8 (off)
Channel 11 – Vocal mic 1 (Chuu) – assign group 1-2
Channel 12 – Vocal mic 2 (off)
Channel 13 – Vocal mic 3 (off)
Channel 14 – Overhead mic (off)
Channel 15 (stereo) – Chuu’s backing track – assign group 1-2, also assign to aux1 (Chuu’s in-ear)
Channel 16 (stereo) – The DJ’s CD player/Computer/MP3 player etc – assign group 3-4
Group 1-2 – assigned to L-R
Group 3-4 – assigned to L-R

Now we’ve got Chuu and her backing track on one group, and all the other DJs and guests on the other group!  So if for instance we wanted those DJ clowns to all shut the fuck up while Chuu sang, we could just manage that with the group 3-4 faders, and crank Chuu up to the sky with the group 1-2 faders until everyone begged for mercy.
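The whole routing above can be sketched as nested sums – channels feed groups, groups feed the L-R master.  All the level values here are illustrative only:

```python
# The final routing, sketched as nested sums: Chuu and her backing track
# on group 1-2, the DJs and guests on group 3-4, both groups feeding L-R.

def mix(chuu, backing, dj, guests, group12_fader, group34_fader):
    group_1_2 = chuu + backing       # Chuu's vocal + her backing track
    group_3_4 = dj + sum(guests)     # DJ and guest microphones
    return group_1_2 * group12_fader + group_3_4 * group34_fader

# "DJ clowns shut up while Chuu sings": pull group 3-4 down to zero.
print(mix(1.0, 1.0, 1.0, [1.0, 1.0, 1.0],
          group12_fader=1.0, group34_fader=0.0))  # 2.0
```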


This post hasn’t covered all the desk controls, as it’s not a tutorial on how to use a specific mixing desk – rather, it just breaks apart some sections of one random desk to show you the basic concepts of signal flow, what sort of controls to expect on a desk, and how signals move through a channel and out the other side.  This is valuable to know as it’s the foundation that all more complex mixing equipment is based on.  I’ve tried to keep things simple so the basic concepts are easy to understand without getting too bogged down in the many different routing options (a lot of which I’ve glossed over or completely ignored).  In summary:

  • Channel signal goes in, then from top to bottom
  • Auxiliary channels for foldback, effects etc split off from the input channel and go from left to right
  • Faders are assigned either straight to main mix or groups depending on if you’re mixing groups of things or individuals
  • Everything then goes to outputs – master output for the main mix, aux output for foldback/monitoring, etc
  • Stan Loona

Kpopalypse will return with more posts in due course!

3 thoughts on “Kpopalypse’s music theory class for dumbass k-pop fans: part 12 – signal flow”

  1. Off-topic: In the A-yeon vid, the microphone over her left-most cymbal has a dented mesh grill. Intentionally reshaped for the purpose of picking up the cymbal? ..or just using a damaged mic because.. ..why not?

    • It’s just a mic with a dented grill. The dent has no effect on the sound. It’s an SM58 and they’re notoriously tough – the grill will dent a ton if you drop it or throw it at things or bash people in the face with it but the mic itself will keep working, that’s why they’re the industry standard dynamic mic for a lot of live applications.
