
Dance Music Manual
Dance Music Manual
Tools, Toys and Techniques
Second Edition
Rick Snoman
AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK • OXFORD
PARIS • SAN DIEGO • SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO
Focal Press is an imprint of Elsevier
Linacre House, Jordan Hill, Oxford OX2 8DP, UK
30 Corporate Drive, Suite 400, Burlington, MA 01803, USA
First published 2009
Copyright © 2009, Rick Snoman. Published by Elsevier Ltd. All rights reserved
The right of Rick Snoman to be identified as the author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means electronic, mechanical, photocopying, recording or otherwise without the prior written permission of the publisher.

Permissions may be sought directly from Elsevier’s Science & Technology Rights Department in Oxford, UK: phone (+44) (0) 1865 843830; fax (+44) (0) 1865 853333; email: permissions@elsevier.com. Alternatively you can submit your request online by visiting the Elsevier website at http://elsevier.com/locate/permissions, and selecting Obtaining permission to use Elsevier material.
Notice
No responsibility is assumed by the publisher for any injury and/or damage to persons
or property as a matter of products liability, negligence or otherwise, or from any use or
operation of any methods, products, instructions or ideas contained in the material herein
British Library Cataloguing in Publication Data
Snoman, Rick
The dance music manual : tools, toys and techniques. – 2nd ed.
1. Underground dance music 2. Sound recordings – Remixing
3. Electronic composition
I. Title
781.6'41554134
Library of Congress Control Number: 2008935934
ISBN: 978-0-2405-2107-7
For information on all Focal Press publications
visit our website at www.focalpress.com
Printed and bound in the USA
09 10 11 12 12 11 10 9 8 7 6 5 4 3 2 1
This book is dedicated to my children: Neve and Logan.
Contents
ACKNOWLEDGEMENTS
PREFACE
MUSICAL WORKS

PART 1 Technology and Theory
CHAPTER 1 The Science of Synthesis
CHAPTER 2 Compression, Processing and Effects
CHAPTER 3 Cables, Mixing Desks and Effects Busses
CHAPTER 4 Programming Theory
CHAPTER 5 Digital Audio
CHAPTER 6 Sampling and Sample Manipulation
CHAPTER 7 Recording Vocals
CHAPTER 8 Recording Real Instruments
CHAPTER 9 Sequencers
CHAPTER 10 Music Theory

PART 2 Dance Genres
CHAPTER 11 House
CHAPTER 12 Trance
CHAPTER 13 UK Garage
CHAPTER 14 Techno
CHAPTER 15 Hip-Hop (Rap)
CHAPTER 16 Trip-Hop
CHAPTER 17 Ambient/Chill Out
CHAPTER 18 Drum ‘n’ Bass

PART 3 Mixing & Promotion
CHAPTER 19 Mixing
CHAPTER 20 Mastering
CHAPTER 21 Publishing and Promotion
CHAPTER 22 Remixing and Sample Clearance
CHAPTER 23 A DJ’s Perspective

Appendix A Binary and Hex
Appendix B Decimal to Hexadecimal Conversion Table
Appendix C General MIDI Instrument Patch Maps
Appendix D General MIDI CC List
Appendix E Sequencer Note Divisions
Appendix F Tempo Delay Time Chart
Appendix G Musical Note to MIDI and Frequencies

INDEX
Acknowledgements

I would like to personally thank the following for their invaluable help, contributions and/or encouragement in writing this book:
Catharine Steers at Elsevier (for being so patient)
Colin and Janice Lewington
Darren Gash at the SAE Institute of London
Dave ‘Cannockwolf’ Byrne
DJ ‘Superstar’ Cristo
Mark Penicud
John Mitchell
Mick ‘Blackstormtrooper’ Byrne
Helen and Gabby Byrne
Mike at the Whippin’ Post
Richard James
Steve Marcus
Everyone on the Dance Music Production Forum
All music featured on the CD – ©Phiadra
(R. Snoman & J. Froggatt)
Vocals on Chill Out supplied by Tahlia Lewington
Vocals on Hip Hop supplied by MC Darkstar
Vocals on Trance and Garage supplied by Kate Lesing
Cover Design: Daryl Tebbut
Preface
If a book is worth reading then it’s worth buying
Welcome to the Dance Music Manual – Second Edition. After the release of the first edition way back in May 2004, I received numerous emails with suggestions for a second edition of the book and I’ve employed as many of them as possible, as well as updating some of the information to reflect the continually updated technology that is relevant to dance musicians. I’d like to personally thank everyone who took the time to contact me with their suggestions.
As with the first edition, the purpose of the Dance Music Manual is to guide you through the technology and techniques behind creating professional dance and club-based music. While there have been numerous publications written on this subject, the majority have been written by authors who have little or no experience of the scene or the music, but simply rely on ‘educated guesswork’. With this book, I hope to change the many misconceptions that abound and offer a real-world insight into how professional dance music is written, produced and marketed.
I’ve been actively involved in the dance music scene since the late 1980s and,
to date, I’ve produced and released numerous white labels and remixes. I’ve
held seminars across the country on remixing and producing club-based dance
music, and authored numerous articles and reviews.
This book is a culmination of the knowledge I’ve attained over the years and I believe it is the first publication of its kind to actively discuss the real-world applications behind producing and remixing dance music for the twenty-first century.
The Dance Music Manual has been organized so as to appeal to professionals
and novices alike, and to make it easier to digest it has been subdivided into
three parts.
The first part discusses the latest technology used in dance music production, from the basics of synthesis and sampling to music theory, effects, compression, microphone techniques and the principles behind the all-important sound design. If you’re new to the technology and theory behind much of today’s dance music, then this is the place to start.
The second part covers the techniques for producing musical styles including,
among others, trance, drum ‘n’ bass, trip-hop, rap and house. This not only
discusses the general programming principles behind drum loops, basses and
leads for the genres, but also the programming and effects used to create the
sounds. If you already have a good understanding of sampling rates, bits, synthesis programming and music theory, then you can dip into these sections and start producing dance music straight away.
The third part is concerned with the ideology behind mixing, mastering, remix-
ing, pressing and publication of your latest masterpiece. This includes the
theory and practical applications behind mixing and mastering, along with
a realistic look at how record companies work and behave; how to copyright
your material, press your own records and the costs involved.
At the end of the book you’ll also find a chapter in which an international DJ
has submitted his view on dance music and DJing in general.
Of course, I cannot stress enough that this book will not turn you into a superstar overnight, and it would be presumptuous to suggest that it would even guarantee you a successful dance record. Dance music has always evolved from musicians pushing the technology further. Rather, it is my hope that this book will give you an insight into how the music is produced; from there, it’s up to you to push in a new direction.
Creativity can never be encapsulated in words, pictures or software, and it’s our individual creative instincts and twists on a theme that produce the dance-floor hits of tomorrow.

Experimentation always pays high dividends.
Finally, I’d also like to take this opportunity to thank you for buying The Dance
Music Manual. By purchasing this book, you are rewarding me for all the time
and effort I’ve put into producing it and that deserves some gratitude. I hope
that, by the end, you feel it was worth your investment.
Musical Works
“I would like to remind record companies that they have a cultural responsibility to give the buying public great music. Milking a trend to death is not contributing to culture and is ultimately not profitable.”
Dance music has always relied on sampling, from its first incarnation of mixing two records together on a pair of record decks to make a third mashup, to the evolution and consequent increased power of the sampler and audio workstation. It would be fair to say that without sampling, dance music would be a very different beast and may not have existed at all.
For legal reasons I cannot suggest that any artist choosing to read this book
dig through their record collections for musical ideas to sample, but now more
than ever, it has become a cornerstone of the production of dance-based music.
The importance of sampling should not be underestimated and, when used creatively, it can open new boundaries and spawn entirely new genres of music.
Perhaps the best example of this is the Amen Break.
Back in 1969, a song was released by The Winstons called Color Him Father. The B-side to the record contained a track named Amen Brother. The middle eight of this particular song contained a drum break just under six seconds long, which was later named the ‘Amen Break’. It contained nothing more than a drummer freelancing on his kit, but it became one of the largest factors in the evolution of dance music.
When the first music samplers were released in the 1980s, the Amen Break was used considerably. Early uses included Shy FX’s ‘Original Nuttah’, 3rd Bass with a track entitled ‘Words of Wisdom’ and N.W.A with ‘Straight Outta Compton’. Over the years, the Amen Break became more and more widely used and appeared in tracks such as:
Mantronix: King of the Beats
2 Live Crew: Feel Alright Yall
4 Hero: Escape That
Amon Tobin: Nightlife
Aphex Twin: Boy/Girl Song
Atari Teenage Riot: Burn Berlin Burn
Brand Nubian: The Godz Must Be Crazy
Deee-Lite: Come on In, the Dreams are Fine
Dillinja: The Angels Fell
Eric B & Rakim: Casualties of War
Freestylers: Breaker beats
Funky Technicians: Airtight
Heavy D: MC Heavy D!
Heavy D: Let it Flow
Heavy D: Flexin’
Heavyweight: Oh Gosh
J. Majik: Arabian Nights
J. Majik: Your Sound
Lemon D: This is Los Angeles
Level Vibes: Beauty & the Beast
Lifer’s Group: Jack U. Back ( So You Wanna Be a Gangsta )
LTJ Bukem: Music
Maestro Fresh Wes: Bring it On ( Remix )
Movement Ex: KK Punani
Nice & Smooth: Dope Not Hype
Salt-N-Pepa: Desire
Scarface: Born Killer
Schoolly D: How a Black Man Feels
Goldie: Chico: Death of a Rock Star
Roni Size: Brown Paper Bag ( Nobukazu Takemura Remix )
Oasis: D’You Know What I Mean?
Frankie Bones: Janets Revenge
Perhaps more importantly, though, the evolution of both breakbeat and jungle – alongside their many offshoots – is credited to these six seconds of a drum break. Indeed, an entire culture arose from a 1969 break as sampling became more experimental and artists began cutting and rearranging the loop to produce new rhythms.
If this break had been subject to the same copyright laws that music faces today, it is entirely possible that breakbeat, jungle, techstep, artcore, 2-step and drum ‘n’ bass – among many others – would never have come to light. It takes a large audience to appreciate new material before it becomes publicized.
As record companies and artists continue to tighten the laws on copyright, our
musical palette is becoming more and more limited, and experimental music
born from utilizing past samples is being replaced with ‘popcorn’ music in
order to ensure that the monies returned are enough to cover the original fee
for using a sample.
Whereas scientists are free to build upon past work without having to pay their peers, and film directors are free to copy the past, disgracefully music no longer exhibits that same flexibility. With the current copyright laws, musicians can no longer appropriate from the past without a vast amount of paperwork, a good solicitor, a large wallet and an understanding record company.
Record companies anxiously await the next ‘big thing’ and voice concerns over the lack of new musical ideas and genres. Yet, in the same breath, they
are – perhaps unintentionally – industriously locking down culture and placing countless limitations on our very creativity. The musical freedom that our predecessors experienced and built upon has all but vanished.
Without the freedom to borrow from and develop on the past, creativity is stifled, and with that our culture can only slow to a grinding pace. To quote Lawrence Lessig: ‘A society free to borrow and build upon the past is culturally richer than a controlled one’.
PART 1
Technology and Theory
CHAPTER 1
The Science of Synthesis
Today’s dance- and club-based music relies just as heavily on the technology as it does on the musicality; therefore, to be proficient at creating this genre of music it is first necessary to fully comprehend the technology behind its creation. Indeed, before we can even begin to look at how to produce the music, a thorough understanding of both the science and the technology behind the music is paramount. You wouldn’t attempt to repair a car without some knowledge of what you were tweaking, and the same applies for dance- and club-based music.
Therefore, we should start at the very beginning, and where better to start than the instrument that encapsulated the genre – the analogue synthesizer. Without a doubt, analogue synthesizers were responsible for the evolution of the music, and whilst the early synthesizers are becoming increasingly difficult to source today, nearly all synthesizers in production, whether hardware or software, follow the same path first laid down by their predecessors. However, to make sense of the various knobs and buttons that adorn a typical synthesizer and observe the effects that each has on a sound, we need to start by examining some basic acoustic science.
ACOUSTIC SCIENCE
When any object vibrates, the air molecules surrounding it begin to vibrate sympathetically in all directions, creating a series of sound waves. These sound waves then create vibrations in the eardrum that the brain perceives as sound.
The movement of sound waves is analogous to the way that waves spread when
a stone is thrown into a pool of water. The moment the stone hits the water,
the reaction is immediately visible as a series of small waves spread outwards
in every direction. This is almost identical to the way in which sound behaves,
with each wave of water being similar to the vibrations of air particles.
‘Today’s recording techniques would have been regarded as science fiction forty years ago.’
PART 1
Technology and Theory
4
For instance, when a tuning fork is struck, the forks first move towards one another, compressing the air molecules, before moving in the opposite direction. In this movement from ‘compression’ to ‘rarefaction’ there is a moment where there are fewer air molecules filling the space between the forks. When this occurs, the surrounding air molecules crowd into this space and are then compressed when the forks return on their next cycle. As the fork continues to vibrate, the previously compressed air molecules are pushed further outwards by the next cycle of the fork, and a series of alternating compressions and rarefactions pass through the air.
The number of rarefactions and compressions, or ‘cycles’, completed every second is referred to as the frequency and is measured in Hertz (Hz). Any vibrating object that completes, say, 300 cycles per second has a frequency of 300 Hz, while an object that completes 3000 cycles per second has a frequency of 3 kHz.
The frequency of a vibrating object determines its perceived pitch, with faster
frequencies producing sounds at a higher pitch than slower frequencies. From
this we can determine that the faster an object vibrates, or ‘oscillates’, the
shorter the cycle between compression and rarefaction. An example of this is
shown in Figure 1.1 .
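Since frequency is simply cycles per second, the relationship between a vibration's speed and the length of each cycle can be sketched in a couple of lines of Python (used here purely as illustration, not part of the original text):

```python
# One full compression-rarefaction cycle lasts 1/f seconds.
def period_seconds(frequency_hz: float) -> float:
    return 1.0 / frequency_hz

# 300 cycles/s (300 Hz) versus 3000 cycles/s (3 kHz):
# the faster vibration has a cycle one tenth as long.
print(period_seconds(300))
print(period_seconds(3000))
```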
Any object that vibrates must repeatedly pass through the same position as it moves back and forth through its cycle. Any particular point during this movement is referred to as the ‘phase’ of the cycle and is measured in degrees, similar to the measurement of a geometric circle. As shown in Figure 1.2, each cycle starts at position zero, passes back through this position, known as the ‘zero crossing’, and returns to zero.
FIGURE 1.1 Difference between low and high frequencies
Consequently, if two objects vibrate at different speeds and the resulting waveforms are mixed together, both waveforms will start at the same zero point, but the higher frequency waveform will overtake the phase of the lower frequency. Provided that these waveforms continue to oscillate, they will eventually catch up with one another and then repeat the process all over again. This produces an effect known as ‘beating’.
The speed at which waveforms ‘beat’ together depends on the difference in frequency between them. It’s important to note that if two waves have the same frequency and are 180° out of phase with one another, one waveform reaches its peak while the second is at its trough, and no sound is produced. This effect, where the two waves cancel one another out, is known as ‘phase cancellation’ and is shown in Figure 1.3.
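Both effects can be verified numerically. In the Python sketch below, the sample rate and the 440/442 Hz test frequencies are arbitrary illustrative choices, not values from the text: summing two equal-frequency sines that are 180° apart yields silence, while two nearby frequencies beat at their difference frequency.

```python
import math

def sine(freq_hz, phase_deg, t):
    return math.sin(2 * math.pi * freq_hz * t + math.radians(phase_deg))

# Two equal-frequency waves, 180 degrees out of phase: every sample cancels.
times = [n / 44100 for n in range(1000)]
residue = max(abs(sine(440, 0, t) + sine(440, 180, t)) for t in times)
print(residue)  # effectively zero

# Two slightly detuned waves 'beat' at the difference in frequency:
beat_hz = abs(442 - 440)
print(beat_hz)  # 2 beats per second
```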
As long as waveforms are not 180° out of phase with one another, the interference between the two can be used to create waveforms that are more complex than a simple sine wave. In fact, every waveform is made up of a series of sine waves, each slightly out of phase with one another. The more waves that are combined, the greater the number of harmonics introduced and the more complex the resulting sound. This can be better understood by examining how an everyday piano produces its sound.
The strings in a piano are adjusted so that each oscillates at an exact frequency. When a key is struck, a hammer strikes the corresponding string, forcing it to oscillate. This produces the fundamental pitch of the note and also, if the vibrations from this string match any of the other strings’ natural vibration rates, sets these into motion too. These are called ‘sympathetic vibrations’ and are important to understand because most musical instruments are based around this principle. The piano is tuned so that the strings that vibrate sympathetically with the originally struck string create a series of waves that are slightly out of phase with one another, producing a complex sound.
FIGURE 1.2 The zero crossing in a waveform
Any frequencies that are an integer multiple of the lowest frequency (i.e. the fundamental) will be in harmony with one another, a phenomenon that was first realized by Pythagoras, from which he derived the following three rules:

If a note’s frequency is multiplied or divided by two, the same note is created but in a different octave.

If a note’s frequency is multiplied or divided by three, the strongest harmonic relation is created. This is the basis of the western musical scale. Brought back into the same octave using the first rule, the resulting 3:2 ratio is known as a perfect fifth and is used as the basis of the scale.

If a note’s frequency is multiplied or divided by five, this also creates a strong harmonic relation. Again, applying the first rule, the resulting 5:4 ratio gives the interval known as the major third.
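Applied to a concrete pitch, the three rules reduce to simple frequency arithmetic. The sketch below uses 440 Hz (concert A) as its reference, which is a choice made for illustration rather than a figure from the text:

```python
reference_hz = 440.0  # concert A, chosen only as a convenient example

octave_up = reference_hz * 2          # rule 1: same note, an octave higher
perfect_fifth = reference_hz * 3 / 2  # rule 2, octave-reduced: the 3:2 fifth
major_third = reference_hz * 5 / 4    # rule 3, octave-reduced: the 5:4 third

print(octave_up, perfect_fifth, major_third)  # 880.0 660.0 550.0
```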
A single sine wave produces a single tone known as the fundamental frequency, which in effect determines the pitch of the note. When further sine waves that are out of phase with the original are introduced, those that are integer multiples of the fundamental frequency are known as ‘harmonics’ and make the sound appear more complex; those that are not integer multiples of the fundamental are called ‘partials’, which also contribute to the complexity of the sound. Through the introduction and relationship of these harmonics and partials, an infinite number of sounds can be created.
As Figure 1.4 shows, the harmonic content or ‘timbre’ of a sound determines the shape of the resulting waveform. It should be noted that the diagrams are simple representations, since the waveforms generated by an instrument are incredibly complex, making them impossible to reproduce accurately on paper.
In an attempt to overcome this, Joseph Fourier, a French scientist, discovered that no matter how complex a sound is, it can be broken down into its frequency components and, using a given set of harmonics, reproduced in a simple form.
To use his words: ‘Every periodic wave can be seen as the sum of sine waves with certain lengths and amplitudes, the wave lengths of which have harmonic relations’. This is based around the principle that the content of any sound is determined by the relationship between the level of the fundamental frequency and its harmonics, and their evolution over a period of time. From this theory, known as the Fourier theorem, the waveforms that are common to most synthesizers are derived.
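The Fourier theorem can be demonstrated directly: summing odd sine harmonics with amplitudes of 1/n converges on a square wave, one of the waveforms discussed later in this chapter. A minimal Python sketch (the amplitude recipe is the standard Fourier series for a square wave, not a formula given in the text):

```python
import math

def square_partial_sum(t, num_harmonics):
    """Sum odd sine harmonics (1, 3, 5, ...) at amplitude 1/n, the
    Fourier recipe for a square wave; 4/pi scales the peak towards 1."""
    total = 0.0
    for k in range(num_harmonics):
        n = 2 * k + 1
        total += math.sin(2 * math.pi * n * t) / n
    return 4 / math.pi * total

# Sampled a quarter of the way through the cycle, the sum approaches
# the square wave's flat top at 1.0 as more harmonics are added:
for count in (1, 5, 50):
    print(count, square_partial_sum(0.25, count))
```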
FIGURE 1.3 Two waves 180° out of phase
FIGURE 1.4 How multiple sound waves create harmonics
So far we’ve looked at how both the pitch and the timbre are determined. The final characteristic to consider is volume. Changes in volume are caused by the amount of air an oscillating object displaces: the more air an object displaces, the louder the perceived sound. This volume, also called ‘amplitude’, is measured by the degree of motion of the air molecules within the sound waves, corresponding to the extent of rarefaction and compression that accompanies a wave. The problem, however, is that many simple vibrating objects produce a sound that is inaudible to the human ear because so little air is displaced; therefore, for the sound wave to be heard, most musical instruments must amplify the sound that’s created. To do this, acoustic instruments use the principle of forced vibration, utilizing either a sounding board, as in a piano or similar stringed instruments, or a hollow tube, as in the case of wind instruments.
When a piano string is struck, its vibrations not only set other strings in motion but also vibrate a board located underneath the strings. Because this sounding board does not share the same natural frequency as the vibrating wires, the reaction is not sympathetic and the board is forced to resonate. This resonance moves a larger number of air particles than the original sound alone, in effect amplifying the sound. Similarly, when a tuning fork is struck and placed on a tabletop, the table is forced to vibrate at the frequency of the tuning fork and the sound is amplified.
Of course, neither of these methods of amplification offers any physical control over the amplitude. If the level of amplification can be adjusted, then the ratio between the original and the changed amplitude is called the ‘gain’.
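Gain, being a ratio of amplitudes, is conventionally expressed in decibels. The decibel has not been introduced in the text above, so treat the formula in this sketch (20 times the base-10 logarithm of the amplitude ratio, the standard engineering convention) as an aside:

```python
import math

def gain_db(original_amplitude, changed_amplitude):
    # Standard decibel convention for amplitude ratios: 20 * log10(ratio).
    return 20 * math.log10(changed_amplitude / original_amplitude)

print(gain_db(1.0, 2.0))  # doubling the amplitude is roughly +6 dB
print(gain_db(1.0, 0.5))  # halving it is roughly -6 dB
```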
It should be noted, however, that loudness itself is difficult to quantify because it’s entirely subjective to the listener. Generally speaking, the human ear can detect frequencies from as low as 20 Hz up to 20 kHz; however, this depends on a number of factors. Indeed, whilst most of us are capable of hearing (or, more accurately, feeling) frequencies as low as 20 Hz, the perception of higher frequencies changes with age. Most teenagers are capable of hearing frequencies as high as 18 kHz, while the middle-aged tend not to hear frequencies above 14 kHz. A person’s hearing may also have been damaged, for example by overexposure to loud noise or music. Whether it is possible for us to perceive sounds higher than 18 kHz in the presence of other sounds is a subject of debate that has yet to be settled. However, it is important to remember that sounds between 3 and 5 kHz appear perceivably louder than frequencies outside this range.
SUBTRACTIVE SYNTHESIS
Having looked into the theory of sound, we can look at how this relates to a synthesizer. Subtractive synthesis is the basis of many forms of synthesis and is most commonly associated with analogue synthesizers. It works by combining a number of sound sources or ‘oscillators’ together to create a timbre that is very rich in harmonics.

This rich sound can then be sculpted using a series of ‘modifiers’. The number of modifiers available on a synthesizer is entirely dependent on the model, but all synthesizers offer a way of filtering out certain harmonics and of shaping the overall volume of the timbre.
The next part of this chapter looks at how a real analogue synthesizer operates, although any synthesizer that emulates analogue synthesis (i.e. digital signal processing (DSP) analogue) will operate in essentially the same way, the only difference being that the original analogue control voltages do not apply to their DSP equivalents.

An analogue synthesizer can be said to consist of three components (Figure 1.5):

An oscillator to make the initial sound.
A filter to remove frequencies within the sound.
An amplifier to define the overall level of the sound.
Each of these components and their role in synthesis are discussed in the sections below.

FIGURE 1.5 Layout of a basic synthesizer

VOLTAGE-CONTROLLED OSCILLATOR (VCO)
When a key on a keyboard is pressed, a signal is sent to the oscillator to activate it, followed by a specific control voltage (CV) to determine the pitch. The CV that is sent is unique to the key that is pressed, allowing the oscillator to determine the pitch it should reproduce. For this approach to work correctly, the circuitry in the keyboard and the oscillator must be incredibly precise in
order to prevent the tuning from drifting, so the synthesizer must be serviced
regularly. In addition, changes in external temperature and fl uctuations in the
power supply may also cause the oscillator’s tuning to drift.
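The key-to-CV-to-pitch mapping can be sketched numerically. The snippet below assumes the common (though not universal) 1 V/octave convention and a middle C reference of roughly 261.63 Hz; both are illustrative assumptions rather than specifics from the text:

```python
# Assumed convention: 1 V per octave, i.e. 1/12 V per semitone.
def cv_for_semitones(semitones_above_reference: float) -> float:
    return semitones_above_reference / 12.0

def frequency_for_cv(cv_volts: float, reference_hz: float = 261.63) -> float:
    # Each extra volt doubles the pitch, mirroring the octave rule.
    return reference_hz * 2 ** cv_volts

print(cv_for_semitones(12))   # one octave up -> 1.0 V
print(frequency_for_cv(1.0))  # 523.26 Hz, an octave above the reference
```

The exponential voltage-to-frequency curve in `frequency_for_cv` is also why the tuning drift described above is so audible: a few millivolts of error shifts every note by the same fraction of a semitone.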
This instability gives analogue synthesizers their charm and is the reason why many purists will invest small fortunes in second-hand models rather than use the latest DSP-based analogue emulations. That said, if too much detuning is present, it will be immediately evident and could become a major problem! There is still an ongoing argument over whether it’s possible for DSP oscillators to faithfully reproduce analogue-based synthesizers, but the argument in favour of DSP synthesizers is that they offer more waveforms and do not drift as widely, and therefore prove more reliable in the long run.
In most early subtractive synthesizers the oscillator generated only three types of waveform: square, sawtooth and triangle. Today this number has increased, and many synthesizers now offer additional sine, noise, tri-saw, pulse and numerous variable wave shapes as well.
Although these additional waveforms produce different sounds, they are all based around the three basic wave shapes and are often introduced into synthesizers to avoid having to mix numerous basic waveforms together, a task that would otherwise use up the available oscillators.
For example, a tri-saw wave is commonly a combination of three sawtooth waves blended together to produce a sound that is rich in harmonics, with the advantage that the whole sound is contained in one oscillator. Without this waveform it would take three oscillators to recreate the sound, which could be beyond the capabilities of the synthesizer. Even if the synthesizer could utilize three oscillators to produce this one sound, the number of available oscillators would be reduced. Subsequently, while there are numerous oscillator waves available, knowledge of only the following six types is required.
The Sine Wave
A sine wave is the simplest wave shape and is based on the mathematical sine function (Figure 1.6). A sine wave consists of the fundamental frequency alone and does not contain harmonics. This means that it is not suitable for sole use in a subtractive sense, because if the fundamental is removed no sound is produced (and there are no harmonics upon which the modifiers could act). Consequently, the sine wave is used independently to create sub-basses or whistling timbres, or is mixed with other waveforms to add extra body or bottom end to a sound.

FIGURE 1.6 A sine wave
The Square Wave
A square wave is the simplest waveform for an electrical circuit to generate because it exists in only two states: high and low (Figure 1.7). This wave produces only odd harmonics, resulting in a mellow, hollow sound. This makes it particularly suitable for emulating wind instruments, adding width to strings and pads, or for the creation of deep, wide bass sounds.
The Pulse Wave
Although pulse waves are often confused with square waves, there is a significant difference between the two (Figure 1.8). Unlike a square wave, a pulse wave allows the width of the high and low states to be adjusted, thereby varying the harmonic content of the sound. Today it is unusual to see both square and pulse waves featured in a synthesizer; rather, the square wave offers an additional control allowing you to vary the width of the pulses. The benefit of this is that reductions in the width allow you to produce thin, reed-like timbres alongside the wide, hollow sounds created by a square wave.
The Sawtooth Wave
A sawtooth wave produces even and odd harmonics
in series and therefore produces a
bright sound that is an excellent starting point
for brassy, raspy sounds (Figure 1.9). It's also
suitable for creating the gritty, bright sounds
needed for leads and raspy basses. Because of
its harmonic richness, it is often employed in
sounds that will be filter swept.
The Triangle Wave
The triangle wave shape features two linear
slopes and is not as harmonically rich as a
sawtooth wave since it only contains odd
FIGURE 1.7 A square wave
FIGURE 1.8 A pulse wave
FIGURE 1.9 A sawtooth wave
PART 1
Technology and Theory
12
harmonics (partials) (Figure 1.10). Ideally,
this type of waveform is mixed with a sine,
square or pulse wave to add a sparkling
or bright effect to a sound and is often
employed on pads to give them a glittery
feel.
The Noise Wave
Noise waveforms are unlike the other five
waveforms because they create a random
mixture of all frequencies rather than actual
tones (Figure 1.11). Noise waveforms can be
'pink' or 'white' depending on the energy of
the mixed frequencies they contain. White
noise contains equal amounts of energy at
every frequency and is comparable to radio
static, while pink noise contains equal
amounts of energy in every musical octave,
so we perceive it as producing a heavier,
deeper hiss.
Noise is useful for generating percussive
sounds and was commonly used in early
drum machines to create snares and handclaps.
Although this remains its main use,
it can also be used for simulating wind or
sea effects, for producing breath effects in wind instrument timbres or for
producing typical trance leads.
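The six basic shapes described above are easy to sketch numerically. A minimal illustration in Python (the function names and the 0.0–1.0 phase convention are my own, not taken from any particular synthesizer):

```python
import math
import random

def sine(phase):
    """One cycle per unit of phase (0.0-1.0); fundamental only, no harmonics."""
    return math.sin(2 * math.pi * phase)

def square(phase):
    """High for the first half of the cycle, low for the second."""
    return 1.0 if (phase % 1.0) < 0.5 else -1.0

def pulse(phase, width=0.25):
    """Like a square wave, but the high/low widths are adjustable."""
    return 1.0 if (phase % 1.0) < width else -1.0

def sawtooth(phase):
    """Ramps from -1 up to +1 once per cycle."""
    return 2.0 * (phase % 1.0) - 1.0

def triangle(phase):
    """Two linear slopes: up for half the cycle, down for the rest."""
    p = phase % 1.0
    return 4.0 * p - 1.0 if p < 0.5 else 3.0 - 4.0 * p

def noise(rng=random.Random(0)):
    """A random mixture of all frequencies rather than an actual tone."""
    return rng.uniform(-1.0, 1.0)
```

Note that a square wave is simply a pulse wave with the width fixed at 0.5, which is why many synthesizers expose a single waveform with a width control rather than both.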
CREATING MORE COMPLEX WAVEFORMS
Whether oscillators are created by analogue or DSP circuitry, listening to
individual oscillators in isolation can be a mind-numbing experience. To create
interesting sounds, a number of oscillators should be mixed together and used
with the available modulation options.
This is achieved by first mixing different oscillator waveforms together and then
detuning them all, or just those that share the same waveform, so that they
are out of phase with one another, resulting in a beating effect. Detuning is
accomplished using the detune parameter on the synthesizer, usually by odd
rather than even numbers. This is because detuning by an even number
introduces further harmonic content that may mirror the harmonics already
provided by the oscillators, causing the already present harmonics to be summed
together.
It should be noted here that there is a limit to the level that oscillators can
be detuned from one another. As previously discussed, oscillators should be
FIGURE 1.10 A triangle wave
FIGURE 1.11 A noise wave
detuned so that they beat, but if the speed of these beats increases beyond
about 20 Hz the oscillators separate, resulting in two noticeably different
sounds. This can sometimes be used to good effect if the two oscillators are to
be mixed with a timbre from another synthesizer, because the additional timbre
can help to fuse the two separate oscillators. As a general rule of thumb, it
is unusual to detune an oscillator by more than an octave.
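The beating described above can be predicted with a little arithmetic. A sketch, assuming the usual definition of a cent as 1/1200 of an octave (the function names are my own):

```python
def detuned_freq(base_hz, cents):
    # one cent is 1/1200 of an octave, i.e. a frequency ratio of 2**(1/1200)
    return base_hz * 2 ** (cents / 1200)

def beat_rate_hz(base_hz, cents):
    # two detuned oscillators beat at the difference of their frequencies
    return abs(detuned_freq(base_hz, cents) - base_hz)

# A few cents of detune on a 110 Hz (A2) oscillator beats at well under
# 1 Hz - a slow, thickening shimmer. Push the frequency difference past
# roughly 20 Hz and the ear hears two separate tones instead.
```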
Additional frequencies can also be added into a signal using ring modulation
and sync controls. Oscillator sync, usually found within the oscillator section
of a synthesizer, allows a number of oscillators' cycles to be synced to one
another. Usually all oscillators are synced to the first oscillator's cycle; hence,
no matter where in its cycle any other oscillator is, when the first starts its cycle
again the others are forced to begin again too.
For example, if two oscillators are used, with both set to a sawtooth wave and
detuned by 5 cents (a cent being one-hundredth of a semitone), every time the
first oscillator restarts its cycle so too will the second, regardless of the position
in its own cycle. This tends to produce a harmonically rich timbre and can be
ideal for creating big, bold leads. Furthermore, if the first oscillator is unchanged
and pitch bend is applied to the second to speed up or slow its cycle, screaming
lead sounds typical of the Chemical Brothers are created as a consequence of
the second oscillator fighting against the syncing with the first.
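Hard sync can be modelled as a simple phase reset. A sketch under the convention that phase runs from 0.0 to 1.0 per cycle (the function name is mine):

```python
def hard_synced_phase(t, master_hz, slave_hz):
    # The slave's phase is reset to zero every time the master completes
    # a cycle, regardless of where the slave has reached in its own cycle.
    master_period = 1.0 / master_hz
    t_since_reset = t % master_period
    return (t_since_reset * slave_hz) % 1.0
```

Because the slave's waveform is truncated at the master's rate, the output repeats at the master's pitch, while changing the slave's tuning (for instance with pitch bend) only alters the shape, and hence the harmonic content, of each cycle.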
After the signals have left the oscillators, they enter the mixer section, where the
volume of each oscillator can be adjusted and features such as ring modulation
can be applied to introduce further harmonics. (The ring modulation feature
can sometimes be found within the oscillator section but is more commonly
located in the mixer section, directly after the oscillators.) Ring modulation
works by providing a signal that is the sum and difference of two
signals (while also removing the original tones). Essentially, this means that
both signals from a two-oscillator synthesizer enter the ring modulator and
come out of the other end as one combined signal with no evidence of the
original timbres remaining.
As an example, if one oscillator produces a signal frequency of 440 Hz (A4 on a
keyboard) and the second produces a frequency of 660 Hz (E5 on a keyboard),
the frequency of the first oscillator is subtracted from the second:
660 Hz − 440 Hz = 220 Hz (A3)
Then the first oscillator's frequency is added to that of the second:
660 Hz + 440 Hz = 1100 Hz (C#6)
Based on this example, the difference of 220 Hz provides the fundamental
frequency while the sum of the two signals, 1100 Hz, results in a fifth-harmonic
overtone. When working with a synthesizer, though, this calculation is rarely
performed; the result is commonly achieved by ring modulating the oscillators
together at any frequency and then tuning the oscillator. Ring modulation is
typically used in the production of metallic-type effects (ring modulators were
used to create the Dalek voice from Dr Who) and bell-like sounds. If ring
modulation is used to create actual pitched sounds, a large number of inharmonic
overtones are introduced into the signal, creating dissonant, unpitched results.
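The sum-and-difference behaviour is easy to verify numerically. A sketch of the idealized maths for pure sine inputs (real ring modulators applied to complex waveforms produce a sum and difference for every pair of partials, which is where the inharmonic overtones come from):

```python
def ring_mod_products(f1_hz, f2_hz):
    # An ideal ring modulator outputs the sum and the difference of its
    # two input frequencies and suppresses the original tones.
    return (f1_hz + f2_hz, abs(f1_hz - f2_hz))

# The worked example from the text: 440 Hz (A4) against 660 Hz (E5)
# yields a 220 Hz difference (the new fundamental, A3) and an 1100 Hz
# sum (its fifth harmonic).
```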
The option to add noise may also be included in the oscillator's mix section to
introduce additional harmonics, making the signal leaving the oscillator/mix
section full of frequencies that can then be shaped further using the options available.
VOLTAGE-CONTROLLED FILTERS
Following the oscillator's mixer section are the filters for sculpting the
previously created signal. In the synthesizer world, if the oscillator's signal is
thought of as a piece of wood that is yet to be carved, the filters are the hammer
and chisels that are used to shape it. Filters are used to chip away pieces of
the original signal until a rough image of the required sound remains.
This makes filters the most vital element of any subtractive synthesizer, because if
the available filters are of poor quality, few sound-sculpting options will be
available and it will be impossible to create the sound you require. Indeed, the choice
of filters combined with the oscillator's waveforms is often the reason why specific
synthesizers must be used to recreate certain 'classic' dance timbres.
The most common filter used in basic subtractive synthesizers is a low-pass
filter. This is used to remove frequencies above a defined cut-off point. The effect
is progressive, meaning that the further the control is reduced, the more
frequencies are removed from the sound, starting with the higher harmonics and
gradually moving to the lowest. If the filter cut-off point is reduced far enough, all
harmonics above the fundamental can be removed, leaving just the fundamental
frequency. While it may appear senseless to create a bright sound with
oscillators only to remove frequencies later with a filter, there are several reasons
why you may wish to do this.
Using a variable filter on a bright sound allows you to determine the
colour of the sound much more precisely than if you tried to create the
same effect using oscillators alone.
This method also enables you to employ real-time movement of a sound.
This latter movement is an essential aspect of sound design, because we
naturally expect dynamic movement of sound throughout the length of the note.
Using our previous example of a piano string being struck, the initial sound
is very bright, becoming duller as it dies away. This effect can be simulated by
opening the filter as the note starts and then gradually sweeping the cut-off
frequency down to create the effect of the note dying away.
Notably, when using this effect, frequencies that lie above the cut-off point are
not attenuated at right angles to the cut-off frequency; the rate at
which they die away will depend on the transition period. This is why different
filters that essentially perform the same function can make beautiful sweeps,
whilst others can produce quite uneventful results (Figure 1.12).
When a cut-off point is designated, small quantities of the harmonics that lie
above this point are not removed completely and are instead attenuated by a
certain degree. The degree of attenuation is dependent on the transition band
of the filter being used. The gradient of this transition is important because it
defines the sound of any one particular filter. If the slope is steep, the filter is
said to be 'sharp', and if the slope is more gradual the filter is said to be 'soft'.
To fully understand the action of this transition, some prior knowledge of the
electronics involved in analogue synthesizers is required.
When the first analogue synthesizers appeared in the 1960s, different voltages
were used to control both the oscillators and the filters. Any harmonics
produced by the oscillators could be removed gradually by physically
manipulating the electrical current. This was achieved using a resistor (to reduce the
voltage) and a capacitor (to store a voltage), a system that is often referred to
as a resistor–capacitor (RC) circuit. Because a single RC circuit produces a 6 dB
transition, the attenuation increases by 6 dB every time the frequency is doubled.
One RC element creates a 6 dB per octave 1-pole filter that is very similar to the
gentle slope created by a mixing desk's EQ. Consequently, manufacturers soon
implemented additional RC elements into their designs to create 2-pole filters,
which attenuate 12 dB per octave, and 4-pole filters, which provide 24 dB per
octave attenuation. Because 4-pole filters attenuate 24 dB per octave, making
substantial changes to the sound, they tend to sound more synthesized than
sounds created by a 2-pole filter, so it's important to decide which transition
period is best suited to the sound. For example, if a 24 dB filter is used to sweep
a pad, it will result in strong attenuation throughout the sweep, while a 12 dB
filter will create a more natural, flowing movement (Figure 1.13).
If more than one filter is available, some synthesizers allow them to be
connected in series or parallel, which gives more control over the timbre from
the oscillators. This means that two 12 dB filters could be summed together to
produce a 24 dB transition, or one 24 dB filter could be used in isolation for
aggressive tonal adjustments with a following 12 dB filter used to perform a
real-time filter sweep.
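The slopes described above can be checked against the magnitude response of an ideal RC stage. A sketch assuming the textbook first-order low-pass response (the function name is mine):

```python
import math

def rc_lowpass_attenuation_db(freq_hz, cutoff_hz, poles=1):
    # Magnitude response of an ideal one-pole RC low-pass filter,
    # cascaded `poles` times for 6, 12 or 24 dB-per-octave slopes.
    magnitude = (1.0 / math.sqrt(1.0 + (freq_hz / cutoff_hz) ** 2)) ** poles
    return 20.0 * math.log10(magnitude)

# Well above the cut-off, every doubling of frequency costs about 6 dB
# per pole: compare 4 kHz against 8 kHz with a 100 Hz cut-off.
one_pole = rc_lowpass_attenuation_db(8000, 100) - rc_lowpass_attenuation_db(4000, 100)
four_pole = (rc_lowpass_attenuation_db(8000, 100, poles=4)
             - rc_lowpass_attenuation_db(4000, 100, poles=4))
```

Here `one_pole` comes out close to −6 dB for the octave step and `four_pole` close to −24 dB, matching the 1-pole and 4-pole behaviour described above.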
FIGURE 1.12 Action of the low-pass filter
FIGURE 1.13 The difference between 12 dB and 24 dB slopes
FIGURE 1.14 Action of a high-pass filter
Although low-pass filters are the most commonly used type, there are numerous
variations, including high-pass, band-pass, notch and comb. These
utilize the same transition periods as the low-pass filter but each has a widely
different effect on the sound (Figure 1.14).
A high-pass filter has the opposite effect to a low-pass filter, first removing the
low frequencies from the sound and gradually moving towards the highest.
This is less useful than the low-pass filter because it effectively removes the
fundamental frequency of the sound, leaving only the fizzy harmonic overtones.
Because of this, high-pass filters are rarely used in the creation of instruments
and are predominantly used to create effervescent sound effects or bright
timbres that can be laid over the top of another low-pass sound to increase the
harmonic content.
The typical euphoric trance leads are a good example of this, as they are often
created from a tone containing the fundamental overlaid with numerous other
tones that have been created using a high-pass filter. This prevents the timbre from
becoming too muddy as a consequence of stacking fundamental frequencies
together. In both remixing and dance music, it's commonplace to run a high-pass
filter over an entire mix to eliminate the lower frequencies, creating an effect
similar to a transistor radio or a telephone. By reducing the cut-off control,
gradually or immediately, the track morphs from a thin sound to a fatter one,
which can produce a dramatic effect in the right context.
If high- and low-pass filters are connected in series, then it's possible to
create a band-pass, or band-select, filter. This permits a set of frequencies to pass
unaltered through the filter while the frequencies either side of the two filter
settings are attenuated. The frequencies that pass through unaltered are known
as the 'bandwidth' or the 'band pass' of the filter and, clearly, if the low pass is
set to attenuate a range of frequencies that are above the current high-pass
setting, no frequencies will pass through and no sound is produced.
Band-pass filters, like high-pass filters, are often used to create timbres
consisting of fizzy harmonics (Figure 1.15). They can also be used to determine
the frequency content of a waveform, as by sweeping through the frequencies
each individual harmonic can be heard. Because this type of filter frequently
removes the fundamental, it is often used to create very thin sounds that form
the basis of sound effects or of lo-fi and trip-hop timbres.
Although band-pass filters can be used to thin a sound, they should not be
confused with band-reject filters, which can be used for a similar purpose.
Band-reject filters, often referred to as notch filters, attenuate a selected range
of frequencies, effectively creating a notch in the sound – hence the name –
and usually leave the fundamental unaffected. This type of filter is handy for
FIGURE 1.15 Action of the band-pass filter
scooping out frequencies, thinning out a sound while leaving the fundamental
intact, making it useful for creating timbres that contain a discernible pitch
but do not have a high level of harmonic content (Figure 1.16).
One final form of filter is the comb filter. With these, some of the samples
entering the filter are delayed in time and the output is then fed back into
the filter to be reprocessed, effectively creating a comb appearance when the
response is plotted – hence the name. Using this method, sounds can be tuned to
amplify or reduce specific harmonics based on the length of the delay and the
sample rate, making it useful for creating complex-sounding timbres that
cannot be accomplished any other way. Because of the way they operate, however,
it is rare to find comb filters featured on a synthesizer; they are usually available
only as a third-party effect.
As an example, if a 1 kHz signal is put through the filter with a 1 ms delay, the
delayed signal arrives back in phase, because 1 ms is exactly one period of the
input signal, producing a one. However, if a 500 Hz signal with a 1 ms delay were
used instead, the delay would be half of the period length and the signal would
be shifted out of phase by 180°, resulting in a zero. It's this constructive and
destructive interference that creates the continual bump then dip in harmonics,
resulting in a comb-like appearance when represented graphically, as in Figure 1.17.
This principle applies to all frequencies, with integer multiples of 1 kHz producing
ones and odd multiples of 500 Hz (1.5, 2.5, 3.5 kHz etc.) producing zeros. The effect of
FIGURE 1.16 Action of the notch filter
FIGURE 1.17 Action of the comb filter
using this filter can at best be described as highly resonant, and it forms the basis
of flanger effects; therefore, its use is commonly limited to sound design rather
than more basic sound sculpting.
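The ones and zeros in the example above fall straight out of the phase arithmetic. A sketch using a feed-forward comb (delayed signal simply added to the input) for simplicity – the text describes the feedback variant, but the spacing of the peaks and notches follows the same rule:

```python
import math

def comb_gain(freq_hz, delay_s=0.001):
    # Gain of (input + input delayed by delay_s) at a single frequency:
    # 2 where the delay is a whole number of periods (back in phase),
    # 0 where it lands 180 degrees out of phase.
    phase = 2.0 * math.pi * freq_hz * delay_s
    return abs(1.0 + complex(math.cos(phase), -math.sin(phase)))
```

With a 1 ms delay, 1 kHz and every integer multiple of it comes back exactly in phase for a gain of 2, while 500 Hz, 1.5 kHz, 2.5 kHz and so on are cancelled – the alternating bumps and dips that give the response its comb-like appearance.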
One final element of sound manipulation in a synthesizer's filter section is the
resonance control. Also referred to as peak, this determines the amount of the
filter's output that is fed back directly into the input, emphasizing any
frequencies that are situated around the cut-off frequency. This has a similar effect
to employing a band-pass filter at the cut-off point, effectively creating a peak.
Although this also affects the filter's transition period, it is more noticeable at
the actual cut-off frequency than anywhere else. Indeed, as you sweep through
the cut-off range the resonance follows the curve, continually peaking at the
cut-off point. In terms of the final sound, increasing the resonance makes the
filter sound more dramatic and is particularly effective when used in conjunction
with low-pass filter sweeps (Figure 1.18).
On many analogue and DSP analogue-modelled synthesizers, if the resonance
is turned up high enough it will feed back on itself. As more and more of the
signal is fed back, the signal is exaggerated until the filter breaks into
self-oscillation. This produces a sine wave with a frequency equal to that of the set
cut-off point, often a purer sine wave than that produced by the oscillators.
Because of this, self-oscillating filters are commonly used to create deep,
powerful sub-basses that are particularly suited to the drum 'n' bass and rap
genres.
Notably, some filters may also feature a saturation parameter, which essentially
overdrives the filters. If applied heavily, this can be used to create distortion
effects, but more often it's used to thicken out timbres and add even more
harmonics and partials to the signal to create rich-sounding leads or basses.
The keyboard's pitch can also be closely related to the action of the filters,
using a method known as pitch tracking, keyboard scaling or, more frequently,
'key follow'. On many synthesizers the depth of this parameter is adjustable,
FIGURE 1.18 The effect of resonance
allowing you to determine how much or how little the filter should follow
the pitch.
When this parameter is set to its neutral state (neither negative nor positive),
the cut-off frequency does not track the pitch as notes are played on the
keyboard, so every note is subjected to the same fixed filter setting. With a low-pass
filter, for example, as progressively higher notes are played fewer and fewer
harmonics will be present in the sound, making the timbre of the higher notes
mellower than that of the lower notes. If the key follow parameter is set to
positive, the higher notes will have a higher cut-off frequency and the high notes
will remain bright (Figure 1.19). If, on the other hand, the key follow parameter
is set to negative, the higher notes will lower the cut-off frequency, making the
high notes even mellower than when key follow is set to its neutral state. Key
follow is useful for recreating real instruments such as brass, where the higher
notes are often mellower than the lower notes, and is also useful on complex
bass lines that jump over an octave, adding further variation to a rhythm.
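One simple way to model key follow is as an exponent on the ratio between the played note and a reference note. A hypothetical sketch – the parameter names and the choice of middle C as the reference are my own, not a standard:

```python
def tracked_cutoff_hz(base_cutoff_hz, note_hz, ref_hz=261.63, amount=0.0):
    # amount  0: neutral - the cut-off stays fixed, so higher notes
    #            lose more harmonics and sound mellower
    # amount +1: the cut-off follows pitch and high notes stay bright
    # amount -1: the cut-off falls as pitch rises - mellower still
    return base_cutoff_hz * (note_hz / ref_hz) ** amount
```

Playing C5 (an octave above the reference) with positive key follow doubles the cut-off; with negative key follow it halves it.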
VOLTAGE-CONTROLLED AMPLIFIER (VCA)
Once the filters have sculpted a sound, the signal moves into the final stage
of the synthesizer: the amplifier. When a key is pressed, rather than the volume
FIGURE 1.19 The effect of filter key follow
rising immediately to its maximum and falling to zero when released, an
'envelope generator' is employed to emulate the nuances of real instruments.
Few, if any, acoustic instruments start and stop immediately. It takes a finite
amount of time for the sound to reach its full amplitude and then decay away to
silence again; thus, the envelope generator – a feature of all synthesizers – can
be used to shape the volume with respect to time. This allows you to control
whether a sound starts instantly the moment a key is pressed or builds up
gradually, and how the sound dies away (quickly or slowly) when the key is released.
These controls usually comprise four sections called attack, decay, sustain and
release (ADSR), each of which determines the shaping that occurs at certain
points during the length of a note. An example of this is shown in Figure 1.20.
Attack: The attack control determines how the note starts from the point
when the key is pressed: the period of time it takes for the sound
to go from silence to full volume. If the period set is quite long, the
sound will 'fade in', as if you are slowly turning up a volume knob. If the
period set is short, the sound will start the instant a key is pressed. Most
instruments utilize a very short attack time.
Decay: Immediately after a note has begun it may initially decay in volume.
For instance, a piano note starts with a very loud, percussive part
but then drops quickly to a lower volume while the note sustains as the
key is held down. The time the note takes to fade from the initial peak at
the attack stage to the sustain level is known as the 'decay time'.
Sustain: The sustain period occurs after the initial attack and decay
periods and determines the volume of the note while the key is held down.
This means that if the sustain level is set to maximum, any decay period
FIGURE 1.20 The ADSR envelope
will be ineffective, because at the attack stage the volume is already at maximum
and so there is no level to decay down to. Conversely, if the sustain
level were set to zero, the sound would peak following the attack period and
fade to nothing even if you continue to hold down the key. In this
instance, the decay time determines how quickly the sound decays down
to silence.
Release: The release period is the time it takes for the sound to fade from
the sustain level to silence after the key has been released. If this is set to
zero, the sound will stop the instant the key is released, while if a high
value is set the note will continue to sound, fading away after the key is
released.
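The four stages described above can be condensed into a single function mapping time to amplitude. A linear sketch, assuming the key is held longer than the attack and decay stages (real envelopes are often exponential, as noted later):

```python
def adsr_level(t, attack, decay, sustain, release, gate_time):
    # Amplitude (0..1) of a linear ADSR envelope at time t (seconds);
    # gate_time is how long the key is held down.
    if t < 0.0:
        return 0.0
    if t < gate_time:
        if t < attack:                         # rising from silence
            return t / attack
        if t < attack + decay:                 # falling to the sustain level
            return 1.0 - (1.0 - sustain) * (t - attack) / decay
        return sustain                         # held while the key is down
    if release <= 0.0:                         # key released, no release time
        return 0.0
    # key released: fade from the sustain level down to silence
    return max(0.0, sustain * (1.0 - (t - gate_time) / release))
```

With sustain at maximum the decay stage has no audible effect, and with sustain at zero the note fades out even while the key is held – exactly the behaviour described above.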
Although ADSR envelopes are the most common, there are some subtle
variations such as attack–release (AR), time–attack–decay–sustain–release (TADSR)
and attack–decay–sustain–time–release (ADSTR). Because there are no decay or
sustain elements contained in most drum timbres, AR envelopes are often used
on drum synthesizers. They can also appear on more economical synthesizers
simply because the AR parameters are regarded as having the most significant
effect on a sound, making them a basic requirement. Both TADSR and ADSTR
envelopes are usually found on more expensive synthesizers. With the
additional period, T (time), in TADSR, for instance, it is possible to set the amount
of time that passes before the attack stage is reached (Figure 1.21).
It’s also important to note that not all envelopes offer linear transitions,
meaning that the attack, decay and release stages will not necessarily consist
FIGURE 1.21 The TADSR envelope
entirely of straight lines as shown in Figure 1.22. On some synthesizers
these stages may be concave or convex, while other synthesizers may allow
you to state whether the envelope stages should be linear, concave or convex.
The differences between linear and exponential envelopes are shown in
Figure 1.22.
MODIFIERS
Most synthesizers also offer additional tools for manipulating sound in the
form of modulation sources and destinations. Using these tools, the response
or movement of one parameter can be used to modify another, totally
independent parameter – hence the name 'modifiers'.
The number of modifiers available, along with the destinations they can affect,
is entirely dependent on the synthesizer. Many synthesizers feature a number
of envelope generators that allow the action of other parameters alongside the
amplifier to be controlled.
For example, in many synthesizers an envelope may be used to modify the
filter's action, and by doing so you can make tonal changes to the note while it
plays. A typical example of this is the squelchy bass sound used in most dance
music. By having a zero attack, short decay and zero sustain level on the
envelope generator, a sound is produced that starts with the filter wide open before
quickly sweeping down to fully closed. This movement is archetypal of
most forms of dance music but does not necessarily have to be produced by
envelopes. Instead, some synthesizers offer one-shot low-frequency oscillators
(LFOs) which can be used in the envelope's place. For instance, by using
a triangle waveform LFO to modulate the amp, there is a slow rise in volume
before a slow drop again.
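The zero-attack, short-decay, zero-sustain shape can be sketched as a cut-off sweep over time. The figures of 8 kHz open, 200 Hz closed and a 150 ms decay are illustrative values of my own, not from the text:

```python
def squelch_cutoff_hz(t, open_hz=8000.0, closed_hz=200.0, decay_s=0.15):
    # Zero attack: the filter starts wide open the instant the note fires;
    # it then sweeps linearly down to fully closed over the decay time,
    # with zero sustain - the classic squelchy dance-bass movement.
    if t < 0.0 or t >= decay_s:
        return closed_hz
    return open_hz - (open_hz - closed_hz) * (t / decay_s)
```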
FIGURE 1.22 Linear and exponential envelopes
LOW-FREQUENCY OSCILLATOR
LFOs produce output frequencies in much the same way as VCOs. The
difference is that a VCO produces an audible frequency (within the 20 Hz–20 kHz
range) while an LFO produces a signal with a relatively low frequency that is
inaudible to the human ear (in the range 1–10 Hz).
The waveforms an LFO can utilize depend entirely upon the synthesizer in
question, but they commonly employ sine, saw, triangle, square, and
sample-and-hold waveforms. The sample-and-hold waveform is usually constructed
from a randomly generated noise waveform that momentarily freezes every few
samples before beginning again.
LFOs should not be underestimated, because they can be used to modulate
other parameters, known as 'destinations', to introduce additional movement
into a sound. For instance, if an LFO is set to a relatively high frequency, say
5 Hz, to modulate the pitch of a VCO, the pitch of the oscillator will rise and
fall according to the speed and shape of the LFO waveform, and an effect
similar to vibrato is generated. If a sine wave is used for the LFO, it will
essentially create an effect similar to that of a wailing police siren. Alternatively,
if this same LFO is used to modulate the filter cut-off, the filter will open
and close at a speed determined by the LFO, while if it were used to modulate
an oscillator's volume, the volume would rise and fall, recreating a tremolo
effect.
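The vibrato case is simple to express in code. A sketch in which the 5 Hz sine LFO and the 50-cent depth are illustrative values:

```python
import math

def vibrato_freq_hz(t, base_hz=440.0, lfo_hz=5.0, depth_cents=50.0):
    # A sine-wave LFO modulating oscillator pitch: the pitch rises and
    # falls around base_hz at the LFO's rate - classic vibrato.
    cents = depth_cents * math.sin(2.0 * math.pi * lfo_hz * t)
    return base_hz * 2.0 ** (cents / 1200.0)
```

Routing the same LFO to the filter cut-off would sweep the filter instead, and routing it to amplitude would give tremolo – the destination, not the LFO itself, determines the effect.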
This means that an LFO must have an amount control (sometimes known as
depth) for varying how much the LFO's waveform augments the destination,
a rate control for the speed of the LFO's waveform cycles and, on some
synthesizers, a fade-in control. The fade-in control adjusts how quickly the LFO
begins to affect the waveform after a key has been pressed. An example of this
is shown in Figure 1.23.
The LFO on more capable synthesizers may also have access to its own
envelope. This gives control of the LFO's behaviour over a specified time period,
allowing it not only to fade in after a key has been pressed but also to decay,
sustain and fade away gradually. It is worth noting, however, that the
destinations an LFO can modulate are entirely dependent on the synthesizer being
used. Some synthesizers may only allow LFOs to modulate the oscillator's pitch
and the filter, while others may offer multiple destinations and more LFOs.
Obviously, the more LFOs and destinations that are available, the more creative
options you will have at your disposal.
If required, further modulation can be applied with an attached controller
keyboard or the synthesizer itself in the form of two modulation wheels. The
first, pitch bend, is hard-wired and provides a convenient method of applying
a modulating CV to the oscillator(s). By pushing the wheel away from
you, you can bend the pitch (i.e. frequency) of the oscillator up. Similarly, you
can bend the pitch down by pulling the wheel towards you. This wheel is
normally spring-loaded to return to the centre position, where no bend is applied,
if you let go of it, and is commonly used in synthesizer solos to give additional
expression. The second wheel, modulation, is freely assignable and offers a
convenient method of controlling any on-board parameter, such as the level
of the LFO signal sent to the oscillator, filter or VCA, or controlling the filter
cut-off directly. Again, whether this wheel is assignable will depend on the
manufacturer of the synthesizer.
On some synthesizers the wheels are hard-coded to only allow oscillator
modulation (for a vibrato effect), while others do not have a separate modulation
wheel and instead the pitch bend lever can be pushed forward to produce
LFO modulation.
PRACTICAL APPLICATIONS
While there are other forms of synthesis – which will be discussed later in this
chapter – most synthesizers used in the production of dance music are of an
analogue/subtractive nature; therefore, it is vital that the user grasps the
concepts behind all the elements of subtractive synthesis and how they can work
together to produce a final timbre. With this in mind, it is sensible to
experiment with a short example to aid in the understanding of the components.
Using the synthesizer of your choice, clear all the current settings so that you
start from nothing. On many synthesizers this is known as 'initializing a
patch', so there may be a button labelled 'init', 'init patch' or similar.
Begin by pressing and holding C3 on your synthesizer or, if you are controlling
the synthesizer via MIDI, programme in a continual note. If neither is possible,
place something heavy on C3. The whole purpose of this exercise is to hear how the
FIGURE 1.23 LFO fade-in
sound develops as you begin to modify the controls of the synthesizer, so the
note needs to play continually.
Select sawtooth waves for two oscillators; if there is a third oscillator that you
cannot turn off, choose a triangle for this third oscillator. Next, detune one
sawtooth from the other until the timbre begins to thicken. This is a tutorial
to grasp the concept of synthesis, so keep detuning until you hear the oscil-
lators separate from one another and then move back until they become one
again and the timbre is thickened out. Generally speaking, detuning of 3 cents
should be ample but do not be afraid to experiment – this is a learning process.
If you are using a triangle wave, detune this against the two saws and listen to
the results. Once you have a timbre you feel you can work with, move onto the
next step.
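To put numbers on this detuning, a cent is 1/100 of a semitone, so shifting an oscillator by 3 cents multiplies its frequency by 2^(3/1200). The sketch below is purely illustrative Python (naive, non-band-limited oscillators; the function names are this book's invention, not any synthesizer's API):

```python
import math

def saw(phase):
    """Naive sawtooth: ramps from -1 to +1 over one cycle (no band-limiting)."""
    return 2.0 * (phase % 1.0) - 1.0

def detune(freq_hz, cents):
    """Shift a frequency by a number of cents (1 cent = 1/1200 of an octave)."""
    return freq_hz * 2.0 ** (cents / 1200.0)

def two_osc_mix(freq_hz, cents, t):
    """Sum of two saws, one detuned; their slow beating thickens the timbre."""
    f2 = detune(freq_hz, cents)
    return 0.5 * (saw(freq_hz * t) + saw(f2 * t))

f1 = 130.81                 # C3
f2 = detune(f1, 3)          # 3 cents sharp
beat_rate = f2 - f1         # the slow beating heard as 'thickening'
```

Note how small the frequency difference is: at C3, 3 cents amounts to roughly a quarter of a hertz, which is why the two oscillators drift in and out of phase slowly rather than sounding like two separate notes.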
Find the VCA envelope and start experimenting. You will need to release C3
and then press it again so you can hear the effect that the envelope is having
on the timbre. Experiment with these envelopes until you have a good grasp
on how they can adjust the shape of a timbre; once you’re happy you have an
understanding, apply a fast attack with a short decay, medium sustain and a
long release. As before, for this next step you will need to keep C3 depressed.
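The envelope settings just described can be pictured as a piecewise gain curve. This is a simplified Python model of a generic ADSR, not the behaviour of any specific synthesizer:

```python
def adsr_gain(t, attack, decay, sustain, release, note_off):
    """Amplitude (0..1) of an ADSR envelope at time t (seconds).
    sustain is a level; attack, decay and release are durations.
    note_off is the time the key is released (assumed after attack+decay)."""
    if t < attack:                          # rise to full level
        return t / attack
    if t < attack + decay:                  # fall to the sustain level
        frac = (t - attack) / decay
        return 1.0 - frac * (1.0 - sustain)
    if t < note_off:                        # hold while the key is down
        return sustain
    frac = (t - note_off) / release         # fade out after release
    return max(0.0, sustain * (1.0 - frac))

# Fast attack, short decay, medium sustain, long release (as in the text):
g = adsr_gain(0.005, attack=0.01, decay=0.1, sustain=0.6, release=1.5, note_off=2.0)
```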
Find the filter section and experiment with the filter settings. Start by using a high-pass filter with the resonance set around midway and slowly turn the filter cut-off control. Note how the filter sweeps through the sound, removing the lower frequencies first, slowly progressing to the higher frequencies. Also experiment with the resonance by rotating it upwards and downwards, and note how this affects the timbre. Do the same with the notch and band-pass filters (if the synthesizer has these available) before finally moving to the low-pass. Set the low-pass filter cut-off quite low along with a low resonance setting; you should now have a static buzzing timbre.
The timbre is quite monotonous, so use the filter envelope to inject some life into the sound. This envelope works on exactly the same principles as the VCA envelope, with the exception that it controls the filter's movement. Set the filter's envelope to a long attack and decay, but use a short release and no sustain, and set the filter envelope to maximum positive modulation. If the synthesizer has a filter key follow, use this, as it will track the pitch of the note being played and adjust itself. Now try depressing C3 to hear how the filter envelope controls the filter, essentially sweeping through the frequencies as the note plays.
Finally, to add some more excitement to the timbre, find the LFO section. Generally, the LFO will have a rotary control to adjust the rate (speed), a selector switch to choose the LFO waveform, a depth control and a modulation destination. Choose a triangle wave for the LFO waveform, turn the LFO depth control up to maximum and set the LFO destination to pitch. As before, hold down the C3 key and slowly rotate the LFO rate (speed) control to hear the results. If you have access to a second LFO, try modulating the filter cut-off with a square wave LFO: set the LFO depth to maximum and experiment with the LFO rate again.
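The pitch modulation set up above can be expressed numerically: the LFO output, scaled by a depth (here measured in cents), shifts the oscillator's frequency up and down. An illustrative Python sketch, with invented function names:

```python
import math

def triangle(phase):
    """Triangle wave in -1..+1 from a 0..1 phase (peaks at phase 0)."""
    return 4.0 * abs((phase % 1.0) - 0.5) - 1.0

def vibrato_freq(base_hz, t, lfo_rate_hz, depth_cents):
    """Oscillator frequency with its pitch modulated by a triangle LFO,
    as in the patch above (LFO destination set to pitch)."""
    cents = depth_cents * triangle(lfo_rate_hz * t)
    return base_hz * 2.0 ** (cents / 1200.0)

f = vibrato_freq(130.81, 0.0, 5.0, 50.0)   # C3 with a 5 Hz, +/-50 cent wobble
```

Raising the LFO rate makes the wobble faster; raising the depth widens the pitch swing, which is exactly what you hear when rotating the rate and depth controls.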
If you would like to experiment more with synthesis to help get to grips with the principles, jump to Chapter 4 for further information on programming specific synthesizer timbres. Note, however, that different synthesizers will produce timbres differently and some are more suited to reproducing particular timbres than others.
OTHER SYNTHESIS METHODS
Frequency Modulation (FM)
FM is a form of synthesis developed in the early 1970s by Dr John Chowning of Stanford University and later developed further by Yamaha, leading to the release of the now-legendary DX7 synthesizer: a popular source of bass sounds for numerous dance musicians.
Unlike an analogue synthesizer, an FM synthesizer produces sound using operators, which are very similar to oscillators in an analogue synthesizer but can only produce simple sine waves. Sounds are generated by using the output of the first operator to modulate the pitch of the second, thereby introducing harmonics. Like an analogue synthesizer, each FM voice requires a minimum of two operators in order to create a basic sound, but because FM only produces sine waves, the timbre produced from just one carrier and modulator isn't very rich in harmonics.
In order to remedy this, FM synthesizers provide many operators that can be configured and connected in any number of ways. Many will not produce
musical results, so to simplify matters various algorithms are used. These algo-
rithms are preset as combinations of modulator and carrier routings. For exam-
ple, one algorithm may consist of a modulator modulating a carrier, which in
turn modulates another carrier, before modulating a modulator that modulates
a carrier to produce the overall timbre. The resulting sound can then be shaped
and modulated further using LFOs, fi lters and envelopes using the same sub-
tractive methods as in any analogue synthesizer.
This means that it should also be possible to emulate FM synthesis on an analogue synthesizer with two oscillators, where the first oscillator acts as a modulator and the second acts as a carrier. When the keyboard is played, both oscillators produce their respective waveforms with the frequency dictated by the particular notes that were pressed. If the first oscillator's output is routed into the modulation input of the second oscillator and further notes are played on the keyboard, both oscillators play their respective notes but the pitch of the second oscillator will change over time with the frequency of the first, essentially creating a basic FM synthesizer. Although this is, in effect, FM, it is usually called 'cross modulation' in analogue synthesizers.
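The carrier/modulator relationship can be written down directly: a two-operator FM voice is essentially y(t) = sin(2πf_c·t + I·sin(2πf_m·t)), where the modulation index I controls how many sidebands, and therefore how many harmonics, appear. A minimal Python sketch of this idea (not the DX7's actual implementation):

```python
import math

def fm_sample(t, carrier_hz, modulator_hz, index):
    """Two-operator FM: the modulator's sine output varies the carrier's
    phase, creating sidebands at carrier +/- n * modulator."""
    mod = math.sin(2 * math.pi * modulator_hz * t)
    return math.sin(2 * math.pi * carrier_hz * t + index * mod)

# With index = 0 no modulation occurs and a plain sine remains:
assert fm_sample(0.25, 1.0, 7.0, 0.0) == math.sin(2 * math.pi * 0.25)
```

Raising the index thickens the spectrum, which is why FM basses grow brighter and more metallic as the modulation depth increases.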
Due to the nature of FM, many of the timbres created are quite metallic and digital in character, particularly when compared to the warmth generated by the drifting of analogue oscillators. Also, due to the digital nature of FM synthesis, the facia generally contains few real-time controllers. Instead, numerous buttons adorn the front panel, forcing you to navigate and adjust any parameters through a small LCD display.
Notably, although both FM and analogue synthesizers were originally used to
reproduce realistic instruments, neither can fabricate truly realistic timbres. If
the goal of the synthesizer system is to recreate the sound of an existing instru-
ment, this can generally be accomplished more accurately using digital sample-
based techniques.
SAMPLES AND SYNTHESIS
Unlike analogue or FM, sample-based synthesis utilizes samples in place of the oscillators. Rather than consisting only of whole instrument sounds, these samples also include the various stages of a real instrument, along with the sounds produced by normal oscillators. For instance, a typical sample-based synthesizer may contain five different samples of the attack stage of a piano, along with a sample of the decay, sustain and release portions of the sound. This means that it is possible to mix the attack of one sound with the release of another to produce a complex timbre.
Commonly, up to four of these individual 'tones' can be mixed together to produce a timbre, and each of these individual tones can have access to numerous modifiers including LFOs, filters and envelopes. This obviously opens up a whole host of possibilities not only for emulating real instruments, but also for creating complex sounds. This method of synthesis has become the de facto standard for any synthesizer producing realistic instruments. By combining samples of real-world sounds with all the editing features and functionality of analogue synthesizers, these instruments offer a huge scope for creating both realistic and synthesized sounds.
GRANULAR SYNTHESIS
One final form of synthesis that has started to make an appearance with the evolution of technology is granular synthesis. It is rare to see granular synthesis employed in hardware synthesizers due to its complexity, but software synthesizers that utilize it are being developed for the public market. Essentially, it works by building up sounds from a series of short segments of sound called 'grains'. This is best compared to the way that a film projector operates, where a series of still images, each slightly different from the last, is played sequentially at a rate of around 25 pictures per second, fooling the eyes and brain into believing there is smooth, continual movement.
A granular synthesizer operates in the same manner with tiny fragments of sound rather than still images. By joining a number of these grains together, an overall tone is produced that develops over a period of time. To do this, each grain must be less than 30 ms in length as, generally speaking, the human ear is unable to distinguish individual sounds if they are less than 30–50 ms apart. This also means that a certain amount of control has to be offered over each grain. In any one sound there can be anything from 200 to 1000 grains, which is the main reason why this form of synthesis appears mostly in the form of software. Typically, a granular synthesizer will offer most, but not necessarily all, of the following five parameters:
Grain length: This can be used to alter the length of each individual grain. As previously mentioned, the human ear can differentiate between two grains if they are more than 30–50 ms apart, but many granular synthesizers go above this range, covering 20–100 ms. By setting this length to a higher value, it's possible to create a pulsing effect.

Density: This is the percentage of grains that are created by the synthesizer. Generally, the more grains created, the more complex the sound will be, a factor that is also dependent on the grain shape.

Grain shape: Commonly, this offers a number between 0 and 200 and represents the curve of the grain envelopes. Grains are normally enveloped so that they start and finish at zero amplitude, helping the individual grains mix together coherently to produce the overall sound. By setting a longer envelope (a higher number), two individual grains will mix together, which can create additional harmonics and often results in audible clicks as the sound fades from one grain to the other.

Grain pan: This is used to specify the location within the stereo image where each grain is created. This is particularly useful for creating timbres that inhabit both speakers.

Spacing: This is used to alter the period of time between each grain. If the time is set to a negative value, the preceding grain will continue through the next created grain, while setting a positive value inserts space between each grain; however, if this space is less than 30 ms, the gap will be inaudible.
The sound produced with granular synthesis depends on the synthesizer in question. Usually, the grains consist of single frequencies with specific waveforms, or occasionally they are formed from segments of samples or noise that have been filtered with a band-pass filter. Thus, the constant change of grains can produce sounds that are both bright and incredibly complex, resulting in a timbre that's best described as glistening. After creating the sound by combining the grains, the whole sound can be shaped using envelopes, filters and LFOs.
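The grain mechanism itself can be sketched in a few lines: each grain is a short, enveloped fragment, repeated at the grain length plus spacing. This toy Python model uses a single sine grain with a raised-cosine envelope; a real granular engine juggling hundreds of grains, densities and pan positions is far more elaborate:

```python
import math

def grain_env(frac):
    """Raised-cosine envelope so each grain starts and ends at zero amplitude."""
    return 0.5 - 0.5 * math.cos(2 * math.pi * frac)

def granular(t, freq_hz, grain_len, spacing):
    """Very small granular sketch: one sine grain repeating every
    (grain_len + spacing) seconds; silent in the gaps between grains."""
    period = grain_len + spacing
    pos = t % period
    if pos >= grain_len:          # inside the gap between grains
        return 0.0
    return grain_env(pos / grain_len) * math.sin(2 * math.pi * freq_hz * pos)

# 25 ms grains with 5 ms gaps -- short enough that the ear fuses them:
sample = granular(0.0125, 440.0, 0.025, 0.005)
```

The envelope is what the 'grain shape' parameter above controls: without it, every grain boundary would jump away from zero amplitude and click audibly.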
CHAPTER 2
Compression, Processing and Effects
Armed with a basic understanding of synthesis, we can examine the various processors and effects that are available. The deliberate abuse of these processors and effects plays a vital role not only in the sound design process but also in the requisite feel of the music, so it pays to understand what they are, how they affect the audio and how a mixing desk can determine the outcome of an effect. Consequently, this chapter concentrates on the behaviours of the different processors and effects that are widely used in the design and production of dance music, including reverb, chorus, phasers, flangers, delay, EQ, distortion, gates, limiters and, perhaps the most important of all, compressors.
Of all the effects and processors available, the compressor is possibly the most vital tool for achieving the archetypal sound heard on so many dance records, so a thorough understanding of it is essential. Without compression, drums appear wimpy in comparison to the chest-thudding results heard in professionally produced music, mixes can appear to lack depth, and basses, vocals and leads can lack any real presence. Despite its importance, however, the compressor is the least understood processor of them all.
COMPRESSION THEORY
The whole reason a compressor was originally introduced was to reduce the dynamic range of a performance, which is particularly vital when working with any form of music. Whenever you record any sound into a computer, sampler or recording device, you should aim to capture the loudest signal possible so that you can avoid artificially increasing the volume afterwards. This is because if you record a source that's too low in volume and then attempt to artificially increase it later, not only will it increase the volume of the recorded source, it'll also increase any background noise.
'Compression plays a major part of my sound. I have them patched across every output of the desk.'
Armand Van Helden
To prevent this, you need to record a signal as loud as possible, but the problem is that vocals and 'real' instruments have a huge dynamic range. In other words, the vocals, for example, can be quiet in one part and suddenly become loud in the next (especially when moving from verse to chorus). Consequently, it's impossible to set a good average recording level with so much dynamic movement, since if you set the recording level to capture the quiet sections, the recording will go into the red and clip when it becomes louder. Conversely, if you set the recorder so that the loud sections do not clip, any quieter sections will be exposed to more background noise.
Of course, you could sit by the recording fader and increase or decrease the recording levels depending on the section being recorded, but this would require lightning reflexes. Instead, it's much easier to employ a compressor to control the levels automatically. By routing the source sound through a compressor and then into the recorder, you can set a threshold on the compressor so that any sounds that exceed it are automatically pulled down in volume, thus allowing you to record at a more substantial volume overall.
A compressor can also be used to control the dynamics of a sound while mixing. For example, a dance track that uses a real bass guitar will have a fairly wide dynamic range, even if it was compressed during the recording stage. This will cause problems within a mix because if the volume is adjusted so that the loudest parts fit well within the mix, the quieter parts may disappear behind other instrumentation. Conversely, if the fader is set so that quieter sections can be heard over other instruments, the loud parts could be too prominent. By using compression more heavily on this sound during the mixing stage, the dynamic range can be restricted, allowing the sound to sit better overall within the final mix.
Although these are the key reasons why compressors were first introduced, compression has further, far-reaching applications for the dance musician, and the compressor's action has been abused to produce the typical dance sound.
Since the signals that exceed the threshold are reduced in gain while the parts that do not exceed the threshold aren't touched, the quieter parts remain at the same volume as they were before compression. In other words, the difference in volume between the loudest and the quietest parts of the recording is reduced, which means that any uncompressed signals become louder relative to the compressed parts. This effectively boosts the average signal level, which in turn not only allows you to push the volume up further but also makes it sound louder (Figures 2.1 and 2.2).
Note that after reducing the dynamic range of audio it may be perceived to
be louder without actually increasing the gain. This is because we determine
the overall volume of music from the average volume (measured in root mean
square, RMS), not from the transient peaks created by kick drums.
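The distinction between peak and average (RMS) level is simple arithmetic, sketched here in illustrative Python: a single transient dominates the peak reading while barely moving the RMS:

```python
import math

def peak(samples):
    """Largest instantaneous excursion -- what a clip indicator watches."""
    return max(abs(s) for s in samples)

def rms(samples):
    """Root mean square -- the average level our sense of loudness tracks."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# A short transient barely moves the RMS but dominates the peak reading:
steady = [0.5, -0.5] * 50            # constant-level signal
with_spike = steady[:-1] + [1.0]     # same signal with one kick-like transient
```

This is why squashing the peaks and raising the overall gain makes a track read the same on a peak meter yet sound considerably louder.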
Nevertheless, applying heavy compression to certain elements of a dance mix
can change the overall character of the timbre, often resulting in a warmer,
smoother and rounder tone, a sound typical of most dance tracks around today.
FIGURE 2.1 A drum loop waveform before compression
FIGURE 2.2 A drum loop waveform after compression (note how the volume difference (dynamics) between instruments has changed)
While there are numerous publications stating the 'typical' compression settings to use, the truth is that there are no generic settings for any particular genre of music; a compressor's use depends entirely on the timbres involved. For instance, a kick drum sampled from a record will require a different approach from a kick drum constructed in a synthesizer, and a different approach again if it is sampled from a CD or film. Therefore, rather than attempt to dictate a set list of largely useless compression settings, you can achieve much better results by knowing exactly what effect each control has on a sound and how these controls are used to acquire the sounds typical of each genre.
Threshold
The first control on a compressor is the threshold, which sets the signal level at which the compressor will begin squashing the incoming signal. This is commonly calibrated in dB and works in direct relationship with a gain reduction meter to inform you of how much the compressor is affecting the incoming signal. In a typical recording situation this control is set so that the average signal level always lies just below the threshold; if any exuberant parts exceed it, the compressor will jump into action and the gain will be reduced to prevent any clipping.
Ratio
The amount of gain reduction that takes place after a sound exceeds the threshold is set using the ratio control. This control determines the dynamic range the compressor affects, describing the relationship between how far the signal entering the compressor exceeds the threshold and the corresponding level that comes out of the other end.
For example, if the ratio is 4:1, every time the incoming signal exceeds the
threshold by 4 dB, the compressor will squash the signal so that there is only
a 1 dB increase at the output of the compressor. Similarly, if the ratio set is 6:1,
an increase at the compressor’s output of 1 dB will occur when the threshold is
exceeded by 6 dB, and likewise for ratios of 8:1, 10:1 and so on. Consequently, the gain reduction ratio always remains constant no matter how much compression takes place. In most compressors these ratios range from 1:1 up to 10:1 and may, in some cases, also extend to infinity:1.
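The ratio arithmetic can be written as a static gain curve. This Python sketch deliberately ignores attack and release (it is the steady-state behaviour only, with hypothetical function names):

```python
def compress_level(level_db, threshold_db, ratio):
    """Static compressor curve: levels below the threshold pass unchanged;
    above it, every `ratio` dB of input yields only 1 dB of output rise."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# 4 dB over a -10 dB threshold at 4:1 comes out just 1 dB over:
out = compress_level(-6.0, -10.0, 4.0)   # -9.0 dB
```

Note that an infinity:1 ratio turns the second branch into a flat line at the threshold, which is exactly what a limiter does.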
From this we can determine that if a sound exceeds a predefined threshold, the compressor will squash the signal by the amount set with the ratio control. The problem with this approach, however, is that we gain a significant amount of information about sounds from their initial attack stage, and if the compressor jumps in instantaneously on an exceeded signal, it will squash the transients, which reduces the sound's high-frequency (HF) content.
For instance, if you set up a compressor to squash a snare drum, the compressor will clamp down on the attack stage, which in effect diminishes the initial transients, reducing it to a 'thunk'. What's more, this instantaneous action will also be apparent when the sound drops below the threshold again and the compressor stops processing the audio. This can be especially evident when compressing low-frequency (LF) waveforms such as basses, since compressors can apply gain changes during a waveform's period.
In other words, if a low-frequency waveform, such as a sustained bass note, is squashed, the compressor may treat the positive and negative states of the waveform as different signals and continually activate and deactivate. The result of this is an unpleasant distortion of the waveform. To prevent this from occurring, compressors feature attack and release parameters.
Attack/Release
Both these parameters behave in a manner similar to their synthesizer counterparts, but here they control how quickly the volume is pulled down and how long it takes to rise back to its nominal level after the signal has fallen below the threshold. In other words, the attack parameter defines how long the compressor takes to reach maximum gain reduction, while the release parameter determines how long the compressor will wait after the signal has dropped below the threshold before processing stops.
This raises an obvious concern: if the attack is set so that the compressor doesn't clamp down on the initial attack of the source sound, the transient could introduce distortion or clipping before the compressor activates. While this is true, in practice very short, sharp signals do not always overload an analogue recorder, since these usually have enough headroom to let small transients through without introducing any unwanted artefacts. This isn't the case with digital recorders, though, where any signals beyond the digital ceiling will clip, so it's quite usual to follow a compressor with a limiter or, if the compressor features a knee mode, to set it to use a soft knee.
Soft/Hard Knee
All compressors utilize either soft or hard knee compression, and some offer the option to switch between the two modes. These are not controllable parameters but dictate the shape of the gain curve, and hence the characteristic of how the compressor behaves as a signal approaches the threshold. So far we've assumed that when a signal exceeds the threshold the compressor begins to squash it immediately. This immediate action is referred to as hard knee compression. Soft knee, on the other hand, continually measures the incoming signal, and when it comes within 3–14 dB of the current threshold (dependent on the compressor), the compressor starts to apply the gain reduction gradually.
Generally this will initially start with a ratio of 1:1 and, as the signal grows ever closer to the threshold, the ratio is gradually increased until the threshold is exceeded, whereupon full gain reduction is applied. This allows the compressor's action to be less evident and is particularly suitable for use on acoustic guitars and wind instruments, where you don't necessarily want the action to be obvious.
It should be noted that the action of the knee is entirely dependent on the compressor being used; some knees can be particularly long, starting 12 dB before the threshold, while others may start 3 dB before. As a matter of interest, 6–9 dB soft knees are considered to offer the most natural compression for instruments.
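One common way of modelling a soft knee (used in several digital dynamics designs, though not necessarily in any particular hardware unit described here) is a quadratic blend between the untouched 1:1 line and the full-ratio line across the knee width:

```python
def soft_knee_level(x_db, threshold_db, ratio, knee_db):
    """Static curve with a soft knee: gain reduction fades in over a
    `knee_db`-wide window centred on the threshold instead of starting
    abruptly (knee_db approaching 0 gives hard-knee behaviour)."""
    half = knee_db / 2.0
    if x_db <= threshold_db - half:          # well below: untouched
        return x_db
    if x_db >= threshold_db + half:          # well above: full ratio applied
        return threshold_db + (x_db - threshold_db) / ratio
    # inside the knee: quadratic blend between the two straight lines
    over = x_db - threshold_db + half
    return x_db + (1.0 / ratio - 1.0) * over * over / (2.0 * knee_db)

# 6 dB knee: gentle reduction already begins 3 dB below the threshold
y = soft_knee_level(-12.0, -10.0, 4.0, 6.0)
```

The quadratic term makes the curve meet both straight lines smoothly at the knee edges, which is why the compressor's engagement becomes inaudible rather than abrupt.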
Peak/RMS
Not all compressors feature knees, so short transient peaks can sometimes catch the compressor unaware and 'sneak' past unaffected. This is obviously going to cause problems when recording digitally, so many compressors implement a switch for peak or RMS modes. Compressors that do not feature these two modes will operate in RMS, which means that the compressor will detect and control signals that stay at an average level rather than short, sharp transient peaks. As a result, no matter how fast the attack may be set, there's a chance that the transients will overshoot the threshold and not be controlled. This is because by the time the compressor has figured out that the sound has exceeded the threshold it's too late – the peak's been and gone. Therefore, to control short transient sounds such as drum loops, it's often prudent to engage the peak mode. With this, the compressor becomes sensitive to short, sharp peaks and clamps down on them as soon as they come close to the threshold, rather than after they exceed it. By doing so, the peak can be controlled before it overshoots the threshold and creates a problem.
While this can be particularly useful when working with drum and percussion sounds, it can create havoc with most other timbres. Keep in mind that many instruments can exhibit a particularly short, sharp initial attack stage, and if the compressor perceives these as possible problems, it'll jump down on them before they overshoot. In doing so, the high-frequency elements of the attack will be dulled, which can make the instrument appear less defined, muddled or lost within the mix. Therefore, for all instruments bar drums and percussion, it's advisable to stick with the RMS mode.
Make-Up Gain
The final control on a compressor is the make-up gain. If you've set the threshold, ratio, attack and release correctly, the compressor should compress effectively and reduce the dynamics in a sound, but this compression will also reduce the overall gain by the amount set by the ratio control. Therefore, whenever compression takes place you can use the make-up gain to bring the signal back up to its pre-compressed volume level.
Side Chaining
Alongside the physical input and output connections on hardware compres-
sors, many also feature an additional pair of inputs known as side chains. By
inputting an audio signal into these, a sound’s envelope can be used to control
the action that the compressor has on the signal entering the normal inputs.
A good example of this is when a radio DJ begins talking over a record and the volume of the record lowers so that their voice becomes audible; when they stop speaking, the record returns to its original volume. This is accomplished by feeding the music through the compressor as normal but with the microphone connected into the side chain. This supersedes the compressor's normal operation and uses the signal from the side chain, rather than the main signal crossing the threshold, as the trigger. Thus, the compressor is triggered when the microphone is spoken into, compressing (in effect lowering the volume of) the music by the amount set with the ratio control. This technique should only be viewed as an example to explain the process, though; more commonly, side chaining is used to make space in a mix for the vocals.
In a typical mix, the lead sound will occupy the same frequencies as the human voice, resulting in a cluttered mid-range if the two play together. This can be avoided if the lead is fed into the main inputs of the compressor while the vocal track is routed into the side chain. With the ratio set at an appropriate level (dependent on the tonal characteristics of the lead and vocals), the lead track will dip when the vocals are present, allowing them to pull through the mix.
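The ducking process just described can be sketched as follows. The envelope follower and fixed-depth gain dip are deliberate simplifications of a real compressor's side-chain detector, and all names here are illustrative:

```python
def envelope(samples, attack=0.2, release=0.05):
    """Crude envelope follower: rises quickly on loud input, falls slowly."""
    env, out = 0.0, []
    for s in samples:
        coeff = attack if abs(s) > env else release
        env += coeff * (abs(s) - env)
        out.append(env)
    return out

def duck(lead, sidechain, threshold=0.2, depth=0.7):
    """Side-chain ducking: wherever the side-chain (vocal) envelope exceeds
    the threshold, the lead is pulled down by `depth`."""
    env = envelope(sidechain)
    return [l * (1.0 - depth if e > threshold else 1.0)
            for l, e in zip(lead, env)]

lead = [0.5] * 8                                    # steady lead level
vocal = [0.0, 0.0, 0.9, 0.9, 0.9, 0.0, 0.0, 0.0]    # vocal phrase enters and exits
ducked = duck(lead, vocal)
```

Because the envelope decays gradually after the vocal stops, the lead recovers smoothly rather than snapping back, mirroring a compressor's release behaviour.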
Hold Control
Most compressors that feature a side chain are likely to have an associated 'hold' control on the facia or to employ an automated hold function. This is employed because a side chain measures the envelope of the incoming signal, and if both the release and attack are too fast, the compressor may respond to the cycles of a low-frequency waveform rather than the actual envelope. As touched upon previously, this can result in the peaks and dips of the waveform activating and deactivating the compressor, resulting in distortion. By using a hold, the compressor is forced to wait a finite amount of time (usually 40–60 ms with automated hold) before beginning the release phase, a wait that is longer than the period of a low-frequency waveform.
STANDARD COMPRESSION
Despite the amount of control offered by the average compressor, it is relatively simple to set up for recording audio. As a generalized starting point, it's advisable to set the ratio at 4:1 and lower the threshold so that the gain reduction meter reads between –8 and –10 dB on the loudest parts of the signal. After this, the attack parameter should be set to the fastest speed possible and the release set to approximately 500 ms. Using these as preliminary settings, they can then be adjusted further to suit any particular sound.
It’s advisable that compression is applied sparingly during the recording stage
because once applied it cannot be removed. Any exuberant parts of the per-
formance should be prevented from forcing the recorder’s meters into the red
while also ensuring that the compressor is as transparent as possible. Solid-state
compressors are more transparent than their valve counterparts and so are better
suited for this purpose.
As a general rule of thumb, the higher the dynamic range of the instrument being recorded, the higher the ratio and the lower the threshold settings need to be. These settings help to keep the varying dynamics under tighter control and prevent too much fluctuation throughout the performance. Additionally, if the choice between hard or soft knee is available, the structure of the timbre should be taken into account. To retain a sharp, bright attack stage, use hard knee compression with an attack setting that allows the initial transient to sneak through unmolested, provided of course that the transient is unlikely to bypass the compression and clip. Otherwise, and to capture a more natural sound, soft knee compression should be used.
Finally, the release period should be set as short as possible but not so short
that the effect is noticeable when the compressor stops processing. After setting
the release at 500 ms, the time should be continually reduced until the process-
ing is noticeable and then increased slowly until it isn’t.
Some compressors feature an automode for the release that uses a fast release
on transient hits and a slower time for smaller peaks, making this task easier.
The settings shown in Table 2.1 are naturally only starting points and too much
compression should be avoided during the recording stage, something that can
only be accomplished by setting both the ratio and the threshold control care-
fully. This involves setting the compressor to squash audio but ensuring that it
stops processing and that the gain reduction meter drops to 0 dB (i.e. no signal
is being compressed) during any silent passages.
As a more practical example, with a simple four to the fl oor kick running
through the compressor and the ratio and threshold controls set so that the
gain reduction reads –8 dB on each kick, it’s necessary to ensure that the gain
reduction meter returns to 0 dB during any silent periods. If it doesn’t, then the
loop is being overcompressed. If the gain reduction only drops to –2 dB during
Compression Settings
Compression
Settings Ratio
Attack
Parameter
(ms)
Release
Parameter
(ms)
Gain
Reduction
(dB) Knee
Starting settings 5–10:1 1–10 40–100 5 to –15 Hard
Drum loop 5–10:1 1–10 40–100 5 to –15 Hard
Bass 4–12:1 1–10 20 or auto 6 to –13 Hard
Leads 2–8:1 3–10 40 or auto 8 to –10 Hard
Vocals 2–7:1 1–7 50 or auto 3 to –10 Soft
Brass instruments 4–10:1 1–7 30 or auto 8 to –13 Hard
Electric guitars 8–10:1 2–7 50 or auto 5 to –12 Hard
Acoustic guitars 5–9:1 5–20 40 or auto 5 to –12 Hard
Table 2.1
Compression, Processing and Effects
CHAPTER 2 39
FIGURE 2.3
The waveform of the
drum pattern
the silence between kicks, then it makes sense that only 6 dB of gain reduction
is actually being applied. This means that every time the compressor activates
it has to jump from 0 to 8 dB, when in reality it only needs to jump in by 6 dB.
This additional 2 dB of gain will distort the transient that follows the silence,
making it necessary for the gain reduction to be adjusted accordingly.
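The meter arithmetic above can be checked in a couple of lines; the readings are simply the figures from the example, not those of any particular compressor:

```python
# Gain reduction (GR) meter readings from the example, in dB.
gr_on_kick = -8.0     # reduction shown while each kick plays
gr_in_silence = -2.0  # should read 0 dB; -2 dB indicates over-compression

# Because the meter never returns to 0 dB, the effective reduction applied
# to each kick is only the difference between the two readings...
effective_reduction = gr_on_kick - gr_in_silence  # -6.0 dB

# ...while the remaining 2 dB has to be applied instantaneously when the
# kick arrives, which is what distorts the transient after the silence.
residual_jump = gr_in_silence - 0.0               # -2.0 dB still 'hanging on'
```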
PRACTICAL COMPRESSION
While it is generally worth avoiding any evident compression during the
recording stage, deliberately making the compressor’s action evident forms
a fundamental part of creating the typical sound of dance music. To better
describe this, we’ll use a drum loop to experiment upon. This is a typical dance
drum loop consisting of a kick, snare, closed and open hi-hats (Figure 2.3).
The data CD contains the drum loop.
It’s clear from looking at or listening to the drum loop that the greatest energy –
that is, the loudest part of the loop – is derived from the kick drum. With this
in mind, if a compressor is inserted across this particular drum loop and the
threshold is set just below the peak level of the loudest part, each consecutive
kick will activate the compressor.
If you have access to a wave editor, open the file in the wave editor and open
a compressor plug-in. If you work in hardware, set up the compressor across
PART 1
Technology and Theory
the drum loop as an insert (if you don’t understand insert effects yet, it is dis-
cussed in the next chapter; go there and return here when you understand the
principles).
Now, set the threshold just below the peak level of the kick drum. You can do
this by watching the gain reduction meter and ensuring it moves every time
a kick occurs. Set the ratio to 4:1, the attack time fast and then while play-
ing back the loop, experiment with the release time. Note that as the release
is shortened, the loop begins to pump more dramatically. This is a result of
the compressor activating on the kick, then quickly releasing when the kick
drops below the threshold. The result is a rapid change in volume, producing a
pumping effect as the compressor activates and deactivates on each kick.
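The behaviour just described can be sketched as a toy feed-forward compressor. This is a generic textbook design, not the algorithm of any particular plug-in, and the kick, sample rate and settings are invented for the demonstration:

```python
import math

def compress(signal, sr, threshold_db=-12.0, ratio=4.0,
             attack_ms=1.0, release_ms=60.0):
    """Toy feed-forward compressor: a one-pole envelope follower drives
    dB-domain gain reduction. A short release lets the gain recover
    quickly after each kick, which is what produces audible pumping."""
    atk = math.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (sr * release_ms / 1000.0))
    env, out = 0.0, []
    for x in signal:
        level = abs(x)
        coeff = atk if level > env else rel  # attack when rising, release when falling
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * math.log10(max(env, 1e-9))
        over = level_db - threshold_db
        gr_db = over - over / ratio if over > 0.0 else 0.0  # reduction in dB
        out.append(x * 10.0 ** (-gr_db / 20.0))
    return out

# A crude four-to-the-floor 'kick': a decaying burst every half second.
sr = 8000
kick = [math.exp(-(n % (sr // 2)) / 200.0) if (n % (sr // 2)) < 400 else 0.0
        for n in range(sr)]
squashed = compress(kick, sr)
```

Shortening `release_ms` makes the gain snap back between hits and deepens the pumping; lengthening it smooths the recovery out.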
The data CD contains the drum loop compressed.
This is known as ‘gain pumping ’ and, although frowned upon in some areas
of music, is deliberately used in dance and popular music to give a track a
more dynamic feel. The exact timing of the release will depend entirely on
the tempo of the drum loop and must be short enough for the compressor to
recover before the next kick. Similarly, the release must be long enough for this
effect to sound natural, so it’s best to keep the loop repeated over four bars and
experiment with the attack and release parameters until the required sound is
achieved.
If we expand on this principle further and add a bass line, pad and chords to
the previously compressed loop and compress it again in the same way (i.e.
with a short release), the entire mix will pump energetically. As the kick is still
controlling the compressor (and provided that the release isn’t too short or
long), every time a kick occurs the rest of the instruments will drop in volume,
which accentuates the overall rhythm of the piece.
Gain pumping is also useful when applied across the whole mix, even though
each element in the mix may already have been compressed in isolation. Gain
pumping across the whole mix is used to balance areas in the track where
instruments are dropped in and out. When fewer instruments play, the gain
of the mix will be perceived as lower than when all the instruments play
simultaneously. The overall level can be controlled by strapping a compressor
across the mixing desk’s main stereo bus (more on this in later chapters), to
make the mix pump with energy.
Gain pumping across the entire mix ( ‘mix pumping ’) should be applied with
caution because if the mix is pumped too heavily it will sound strange. Setting
a 20–30 ms attack with a 250 ms release and a low threshold and ratio to
reduce the range by 2 dB or so should be sufficient to produce a mix that has
the ‘right’ feel.
That said, there is a trend emerging where gain pumping is becoming an actual
part of the music, such as Eric Prydz’s Valerie, but this is applied in a differ-
ent manner, using the compressor’s side chain. To accomplish this effect, you
require a mix and an individual kick loop that is in tempo with the mix.
The data CD contains parts required for this example.
If you don’t have any available, track 3 of the CD contains these parts. Place
the mix into one channel of a sequencer and drop the kick drum onto a sec-
ond channel. Set up a compressor on the kick drum channel, use the kick
as a side chain and feed the mix into the main compressor’s inputs. Set the
ratio to 4:1, with a fast attack and release, and if the compressor features it,
set it to Peak (or turn RMS off). Begin playback of both channels and slowly
reduce the threshold; the entire mix will pump with every kick. This can be
used more creatively to create a gated pad effect.
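That routing reduces to a small sketch: the detector listens only to the kick (the key) while the gain reduction is applied to the mix. The signals and settings below are invented for the demonstration, not taken from the CD examples:

```python
import math

def sidechain_duck(mix, key, sr, threshold_db=-20.0, ratio=4.0,
                   attack_ms=1.0, release_ms=80.0):
    """Side-chain compression sketch: the KEY signal (a kick) drives the
    gain computer, but the reduction is applied to MIX, so the whole
    mix ducks in time with the kick."""
    atk = math.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (sr * release_ms / 1000.0))
    env, out = 0.0, []
    for x, k in zip(mix, key):
        level = abs(k)                      # detector hears the key only
        coeff = atk if level > env else rel
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * math.log10(max(env, 1e-9))
        over = level_db - threshold_db
        gr_db = over - over / ratio if over > 0.0 else 0.0
        out.append(x * 10.0 ** (-gr_db / 20.0))
    return out

sr = 8000
pad = [0.5] * sr                             # a steady pad stands in for the mix
kick = [1.0 if n % (sr // 2) < 200 else 0.0  # two kicks per second as the key
        for n in range(sr)]
ducked = sidechain_duck(pad, kick, sr)       # the pad dips on every kick
```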
The data CD contains an example of gain pumping.
This technique can also be used in hip-hop, rap, house and big beat to help cre-
ate room in the mix. Since these often have particularly loud bass elements that
play consecutively with the kick, the different sounds can conflict, muddying the
bottom end of the mix. This can be prevented by feeding the kick drum separately
into the side-chain inputs of the compressor, with the ratio set to 3:1, with a fast
attack and medium release (depending on the sound). If the bass is fed into the
compressor’s main inputs, every time the kick occurs the bass will drop in vol-
ume, making space for the kick, thereby preventing any conflicts.
Compression can also be used on individual sounds to change the tonal con-
tent of a sound. For example, by using heavy compression (a low threshold
and high ratio) on an isolated snare drum, the snare’s attack avoids compres-
sion but the decay is squashed which brings it up to the attack’s gain level.
This technique is often employed to create the snare thwack typical of trance,
techno and house music styles. Similarly, if a deeper, duller, speaker-mashing
kick drum ‘thud’ is required, the compressor’s attack should be set as short
as possible so that it clamps down hard on the initial attack of the kick. This
eradicates much of the initial high frequency content, and as the volume is
increased with the compressor’s make-up control, a deeper and much more
substantial ‘thud’ is produced.
It is, however, important to note that the overall quality of compression depends
entirely on the type of compressor being used. Compressors are one of two types:
solid-state or valve. Solid-state compressors use digital circuitry throughout and
will not tend to pump as heavily or sound as good as those that utilize valve
technology. Some valve compressors will be solid state in the most part, using
a valve only at the compressor’s make-up gain. Solid-state compressors are usu-
ally more transparent than their valve-based counterparts and are used during
the recording stages. Valve compressors are typically used after recording to add
warmth to drums, vocals and basses, an effect caused by small amounts of
second-order harmonic distortion that are introduced into the final gain
circuitry[1] of the compressor. This distortion is a result of the random movement
of electrons which, in the case of valves, occurs at exactly twice the frequency of
the amplified signal. Despite the fact that this distortion only contributes 0.2%
to the amplified signal, the human ear (subjectively!) finds it appealing.
More importantly, the characteristic warmth of a valve compressor differs
according to the model of valve compressor that is used, as each will exhibit
different characteristics. These differences and the variations from compressor
to compressor are the reasons why many dance producers will spend a fortune
on the right compressor and why it isn’t uncommon for producers to own a
number of both valve and solid-state types.
Most dance producers agree that solid-state circuitry tends to react faster, pro-
ducing a more defined, less forgiving sound, while valve compressors add
warmth that improves the overall timbre.
While it isn’t essential to know why these differences exist from model to
model, it’s worth knowing which compressor is most suited to a particular style
of work. Failing that, it also makes for excellent conversation (if you’re that way
inclined), so what follows is a quick rundown of the five most popular methods
of compression:
Variable MU
Field effect transistor (FET)
Optical
VCA
Computer-based digital
Variable MU
The first compressors ever to appear on the market were called variable MU
units. This type of compressor uses valves for the gain control circuitry and does
not have an adjustable ratio control. Instead of an adjustable control, the ratio
is increased in proportion to the amount of the incoming signal that exceeds
the threshold. In other words, the more the level overshoots the threshold the
more the ratio increases. While these compressors do offer attack and release
stages, they’re not particularly suited towards material with fast transients, even
[1] John Ambrose Fleming originally developed the valve in 1904, but it was 2 years later
that Lee De Forest constructed the first triode configuration. Edwin Howard Armstrong then
used this to create the first ever valve amplifier in 1912.
with their fastest attack settings. Due to the valve design, the valves run out of
dynamic range relatively quickly, so it’s unusual to acquire more than 15–20 dB
of gain reduction before the compressor runs out of energy. Nevertheless, vari-
able MU compressors are renowned for their distinctive, phat, lush character,
and can work magic on basses and pumping dance mixes. The most notorious
variable MU compressors are made by Manley and can cost in excess of £3500.
FET
FET compressors use a field effect transistor to vary the gain. These were the first
transistors to emulate the action of valves. They provide incredibly fast attack
and release stages, making them an excellent choice for beefing up kick and snare
drums, electric guitars, vocals and synthesizer leads. While they suffer from a lim-
ited dynamic range, if they’re pushed hard they can pump very musically and are
perfectly suited for gain pumping a mix. The only major problem is getting your
hands on one. Original FETs are as rare as rocking horse manure, and consequently
second-hand models are incredibly expensive. Reproduction versions of the early
FETs, such as the UREI 1176LN Peak Limiter (approximately £1800) and the LA
Audio Classic II (approximately £2000), are a worthwhile alternative.
Optical
Optical (or ‘opto’) compressors use a light bulb that reacts to the incoming audio
by glowing brighter or dimmer depending on the incoming sound (seriously!).
A phototransistor tracks the level of illumination from the bulb and changes the
gain. Because the phototransistor must monitor the light bulb before it takes any
action, some latency is created in the compressor’s response, so the more heavily
the compression is applied the longer the envelope times tend to be. Consequently,
most optical compressors utilize soft knee compression. This creates a more natu-
ral attack and release but also means that the compressor is not quick enough to
catch many transients. Despite this, optical compressors are great for compressing
vocals, basses, electric guitars and drum loops, providing that a limiter follows the
compression. (Limiters will be explained later.)
There are plenty of opto compressors to choose from, including the ADL 1500
(approximately £2500), the UREI LA3 and UREI Teletronix LA-2A (approxi-
mately £2900 each), the Joe Meek C2 (approximately £250) and the Joe Meek
SC2.2 (approximately £500). Both Joe Meek units sound particularly smooth
and warm considering their relatively low prices, and for the typical gain pump
synonymous with dance, you could do worse than to pick up the SC2.2.
Notably, all Joe Meek’s compressors are green because after designing his first
unit he decided to spruce it up by colouring it with car aerosol paint and green
was the only colour he could find in the garage at the time.
VCA
VCA compressors offer the fastest envelope times and highest gain reduction
levels of any of the compressors covered so far. These are the compressors most
likely to be found in a typical home studio. As with most things, the quality of a
VCA compressor varies wildly in relation to its price tag. Many of the models
aimed at the budget-conscious musician reduce the high frequencies
when a high gain reduction is used, regardless of whether you’re clamping
down on the transients or not. When used on a full mix, these also rarely pro-
duce the pumping energy that is typical of more expensive models.
Nevertheless, these types of compressor are suitable for use on any sound. The
most widely celebrated VCA compressor is the Empirical Labs Stereo Distressor
(approximately £2500), which is a digitally controlled analogue compressor
with VCA, solid-state and op amps. This allows switching between the different
methods of compression to suit the sound. Two versions of the Distressor are
available to date: the standard version and the British version. Of the two, the
British version produces a much more natural, warm tone (I’m not just being
patriotic) and is the preferred choice of many dance musicians.
Computer-Based Digital
Computer-based digital compressors are possibly the most precise compres-
sors to use on a sound. Because these compressors are based in the software
domain, they can analyse the incoming audio before it actually reaches the
compressor, allowing them to predict and apply compression without the risk
of any transients sneaking past the compressor. This means that they do not
need to utilize a peak/RMS operation. These digital compressors can emulate
both solid-state, transparent compression and the more obvious, warm, valve
compression at a fraction of the price of a hardware unit. In fact, the Waves
RComp can be switched to emulate an optical compressor. Similarly, the PSP
Vintage Warmer and Sonalksis TBK3 can add an incredible amount of valve
warmth.
The look-ahead functions employed in computer-based compressors can be
emulated in hardware with some creative thought, which can be especially use-
ful if the compressor has no peak function. Using a kick drum as an example,
make a copy of the kick drum track and then delay it in relation to the origi-
nal by 50 ms. By then feeding the delayed drum track into the compressor’s
main inputs and the original drum track into the compressor’s side chain, the
original drum track activates the compressor just before the delayed version
goes through the main inputs, in effect creating a look-ahead compressor!
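The delay trick can be sketched as below. The 50 ms figure is the one from the text, the sample rate and 'kick' are stand-ins, and no compressor is modelled; the point is simply that the detector path leads the main path:

```python
def delay(signal, sr, ms):
    """Delay a signal by prepending silence (truncated to keep the length)."""
    n = int(sr * ms / 1000.0)
    return [0.0] * n + list(signal[:len(signal) - n])

sr = 8000
kick = [1.0 if n < 100 else 0.0 for n in range(sr)]  # a single stand-in hit
main_path = delay(kick, sr, 50.0)  # copy fed to the compressor's main inputs
key_path = kick                    # original fed to the side-chain input

# The side chain 'hears' the transient 50 ms before it reaches the main inputs,
# so the envelope is already open when the hit arrives: a look-ahead compressor.
first_main = next(i for i, x in enumerate(main_path) if x > 0.0)
first_key = next(i for i, x in enumerate(key_path) if x > 0.0)
lead_ms = (first_main - first_key) * 1000.0 / sr
print(lead_ms)  # 50.0
```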
Ultimately, it is advisable not to get too carried away when compressing audio
as it can be easy to destroy the sound while still believing that it sounds better.
This is because louder sounds are invariably perceived as sounding better than
those that are quieter. If the make-up gain on the compressor is set at a higher
level than the inputted signal, even if the compressor was set up by your pet
cat, it will still sound better than the non-compressed version. The incoming
signal must be set at exactly the same level as the output of the compressor so
that when bypassing the compressor to check the results, the difference in vol-
ume doesn’t persuade you that it sounds better.
Furthermore, while any sounds that are above the threshold will be reduced
in gain, those below it will be increased when the make-up gain is turned up.
While this has the advantage of boosting the average signal level, a compres-
sor does not differentiate between music and unwanted noise. So 15 dB of
gain reduction will reduce the peak level by 15 dB while the sounds below the
threshold remain the same. If the make-up gain is then used to bring the peaks
back up to their nominal level (i.e. by 15 dB), any signals that were below the
threshold will also be increased by 15 dB, and if there is noise present in the
recording, it may become more noticeable.
Most important of all, dance music relies heavily on the energy of the overall
‘punch’ produced by the kick drum, which comes from the kick drum phys-
ically moving the loudspeaker’s cone in and out. The more the cone is phys-
ically moved, the greater the punch of the kick. This degree of movement is
directly related to the size of the kick’s peak in relation to the rest of the music’s
waveform. If the difference between the peak of the kick and the main body of
the music is reduced too much through heavy compression, it may increase
the average signal level but the kick will not have as much energy since the
dynamic range is restricted, meaning that all the music will move the cone by
the same amount. So, you should be cautious as to how much you compress,
otherwise you may lose the excursion, which results in a loud yet flat and unex-
citing track with no energetic punch from the kick (Figures 2.4 and 2.5).
LIMITERS
After compression is applied, it’s common practice to pass the audio through
a limiter, just in case any transient is not captured by the compressor. Limiters
[FIGURE 2.4: A mix with excursion]
[FIGURE 2.5: A mix with no excursion – all the contents of the mix are almost at equal volume]
work along similar principles to compressors but rather than compress a sig-
nal by a ratio, they stop signals from ever exceeding the threshold in the first
place. This means that no matter how loud the inputted signal becomes, it will
be squashed down so that it never violates the current threshold setting. This
is referred to as ‘brick wall’ because no sounds can ever exceed the threshold.
Some limiters, however, allow a slight increase in level above the threshold in
an effort to maintain a more natural sound.
A widespread misconception is that if a compressor offers a ratio above 10:1
and is set to this, it will act as a limiter, but this isn’t necessarily the case.
As we’ve seen, a compressor is designed to detect an average signal level (RMS)
rather than a peak signal, so even if the attack is set to its fastest response, there’s
a good chance that signal peaks will catch the compressor unaware. The cir-
cuitry within limiters, however, does not employ an attack control, and as soon
as the signal reaches the threshold, it is brought under control instantaneously.
Therefore, if recording a signal that contains plenty of peaks, a limiter placed
directly after the compressor will clamp down on any signals that creep past the
compressor and prevent clipping.
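A brick-wall limiter reduces to a very small algorithm. The sketch below is a generic form (instantaneous clamp, smoothed release), not the circuit of any particular unit, and the test signal is invented:

```python
import math

def brick_wall_limit(signal, sr, threshold_db=-0.3, release_ms=50.0):
    """Brick-wall limiter sketch: the clamp is instantaneous (no attack
    control), so no output sample can exceed the threshold; the gain then
    recovers at the release rate once the signal falls back below it."""
    ceiling = 10.0 ** (threshold_db / 20.0)
    rel = math.exp(-1.0 / (sr * release_ms / 1000.0))
    gain, out = 1.0, []
    for x in signal:
        target = ceiling / abs(x) if abs(x) > ceiling else 1.0
        if target < gain:
            gain = target                          # instant clamp on the peak
        else:
            gain = target + rel * (gain - target)  # smooth recovery upwards
        out.append(x * gain)
    return out

sr = 8000
spiky = [1.5 * math.sin(2 * math.pi * 60 * n / sr) for n in range(sr)]
limited = brick_wall_limit(spiky, sr)  # peaks of 1.5 are held at the ceiling
```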
Most limiters are quite simple to use and only feature three controls: an input level,
a threshold and an output gain, but some may also feature a release parameter. The
input is used to set the overall signal level entering the limiter while the threshold
and output gain, like a compressor, are used to set the level where the limiter begins
attenuating the signal and controlling the output level. The release control is not
standard on all limiters, but if included, it ’s straightforward and allows the time it
takes the limiter to return to its nominal state after limiting to be set. As with com-
pression, however, this must be set cautiously, giving the limiter time to recover
before the next signal is received to avoid distorting the subsequent transients.
The main purpose of a limiter is to prevent transient signals from overshooting
the threshold. Although there is no need for an additional attack control, some
software plug-ins will make use of one. This is because they employ look-ahead
algorithms that constantly analyse the incoming signal. This allows the limiter to
begin the attack stage just before the peak signal occurs. In most cases, this attack
isn’t user definable and a soft or hard setting will be provided instead. Similar to
the knee setting on a compressor, a hard attack activates the limiter as soon as a
peak is close to overshooting. On the other hand, a soft attack has a smoother
curve with a 10 or 20 ms timing. This reduces the likelihood that any artefacts
are introduced into the processed audio by jumping in on the audio too quickly.
These software look-ahead limiters are sometimes referred to as ultramaximizers.
As discussed, the types of signals that require limiting are commonly those
with an initial sharp transient peak. As a result, limiters are generally used for
removing the ‘crack’ from snare drums, keeping the kick drum under control,
and are often used on a full track to produce a louder mix during the mastering
process. Like compressors, though, limiters must be used cautiously because
they work on the principle of reducing the dynamic range. That is, the harder
a sound is limited, the more dynamically restricted it becomes. Too much limit-
ing can result in a loud but monotonous sounding signal or mix. On average,
approximately 3–6 dB is a reasonable amount of limiting, but the exact figure
depends entirely on the sound or mix. If the sound has already been quite
heavily compressed, it’s best to avoid boosting any more than 3 dB at the limit-
ing stage, otherwise any dynamics deliberately left in during the compression
stage may be destroyed ( Figures 2.6 and 2.7 ).
[FIGURE 2.6: Drum loop before limiting]
NOISE GATES
Noise gates can be described as the opposite of compressors. This is because
while a compressor attenuates the level of any signal that exceeds the thresh-
old, a gate can attenuate or remove any signals that are below the threshold.
The main purpose of this is to remove any low-level noise that may be present
during a silent passage. For instance, a typical effect of many commercial dance
tracks is to introduce absolute silence or perhaps a drum kick just before the
reprise so that when the track returns fully, the sudden change from almost
nothing into everything playing at once creates a massive impact. The prob-
lem with this approach, though, is that if there is some low-level noise in the
recording it will be evident when the track falls silent (i.e. noise between the
kicks), which not only sounds cheap but reduces the impact when the rest of
the instruments jump back in. In these instances, by employing a gate it can be
set so that whenever sounds fall below its threshold the gate activates and cre-
ates absolute silence. While in theory this sounds simple enough, in practice
it’s all a little more difficult.
Firstly, we need to consider that not all sounds stay at a constant volume
throughout their duration. Indeed, some sounds can fluctuate wildly in volume,
which means that they may constantly jump above and below the threshold of
the gate. What’s more, if the sound sits close to the gate’s threshold throughout,
even a slight fluctuation in volume will make it constantly leap above and below the
threshold, resulting in an effect known as chattering. To prevent this, gates will
often feature an automated or user-definable hold time. Using this, the gate can
be forced to wait for a predetermined amount of time after the signal has fallen
below the threshold before it begins its release stage, thus avoiding the problem.
[FIGURE 2.7: Drum loop after limiting]
The action of this hold function is sometimes confused with a similar gate pro-
cess called hysteresis but the two processes, while accomplishing the same goal,
are very different. Whereas the hold function forces the gate to wait for a
predefined amount of time before closing, hysteresis adjusts the threshold’s tolerance
independently for opening and closing the gate. For example, if the threshold was
set at, say, –12 dB, the audio signal must breach this before the gate opens, but
the signal must fall a few extra dB below –12 dB before the gate closes again.
Consequently, while both hold and hysteresis accomplish the same goal in pre-
venting any chatter, it is generally accepted that hysteresis sounds much more
natural than simply using a hold control.
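Both remedies fit in a few lines. The sketch below opens at one threshold, refuses to close until the signal falls below a lower one (hysteresis), then waits out a hold time before shutting; all figures are invented for the demonstration:

```python
def gate(signal, sr, open_db=-40.0, close_db=-46.0, hold_ms=10.0):
    """Noise-gate sketch with both anti-chatter remedies: hysteresis (a
    lower close threshold than open threshold) plus a hold time that must
    expire before the gate is allowed to shut."""
    open_lin = 10.0 ** (open_db / 20.0)
    close_lin = 10.0 ** (close_db / 20.0)
    hold_samples = int(sr * hold_ms / 1000.0)
    is_open, held, out = False, 0, []
    for x in signal:
        level = abs(x)
        if level >= open_lin:
            is_open, held = True, 0      # signal breached the open threshold
        elif level >= close_lin:
            held = 0                     # inside the hysteresis band: stay put
        elif is_open:
            held += 1                    # below the close threshold: count down
            if held > hold_samples:
                is_open = False          # hold expired, gate shuts
        out.append(x if is_open else 0.0)
    return out

sr = 1000
burst = [0.5] * 20 + [0.0001] * 100      # a hit followed by low-level noise
gated = gate(burst, sr)                  # the noise tail is silenced
```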
A second problem develops when we consider that not all sounds will start and
stop abruptly. For instance, if you were gating a pad that gradually rose in vol-
ume, it would only be allowed through the gate after it exceeded the predefined
threshold. If this threshold happened to be set quite high, the pad would sud-
denly jump in rather than fade in gradually as it was supposed to. Similarly,
rather than fade away, it would be abruptly cut off as it fell below the thresh-
old again. Of course, you could always lower the threshold, but that may allow
noise to creep in, so gates will also feature attack and release parameters. These
are similar in most respects to a compressor’s envelope in that they allow you
to determine the attack and release times of the gate’s action. Using these on
our example of a pad, by setting the release quite long, as soon as the pad falls
below the threshold the gate will enter its release stage and gradually fade the
sound out rather than cut it off abruptly. Likewise, by lengthening the attack on the
gate, the pad will fade in rather than jump in unexpectedly.
The third, and final, problem is that we may not always want to silence any
sounds that fall below the threshold. Suppose that you’ve recorded a rapper (or
any vocalist for that matter) to drop into the music. He or she will obviously
need to breathe between the verses, and if they’re about to scream something
out, they’ll need a large intake of breath before starting. This sharp intake of
breath will make its way onto the vocal recording, and while you don’t want it
to be too loud, at the same time you don’t want it totally removed either; other-
wise it’ll sound unnatural – the audience instinctively knows that vocalists
have to breathe!
Consequently, we need a way of lowering the volume of any sounds that fall
below the threshold rather than totally attenuating them, so many gates (but
not all!) will feature a range control. Fundamentally, this is a volume control
that’s calibrated in decibels, allowing you to define how much the signal is atten-
uated when it falls below the threshold. The more this is increased, the more the
signal will be reduced in gain until – set at its maximum setting – the gate will
silence the signal altogether. Using this range control on the imaginary rapper,
you could set it quite low so that the volume of the breaths is not too loud but
not too quiet either. By setting the threshold so that only the vocals breach it
and those below are reduced in volume by a small amount, it will sound much
more natural. Furthermore, by setting the gate’s attack to around 100 ms or so,
as he or she breathes, the sound will begin at the volume set by the range control
and then slowly swell into the vocal, which produces a much more natural effect.
For this application to work properly, the release time of the gate must be set
cautiously. If it’s set too long, the gate may remain open during the silence
between the vocals, which doesn’t allow a new attack stage to be triggered
when they begin to sing again. On the other hand, if it’s set too short it can
result in the ‘chattering’ effect described earlier. Consequently, it’s prudent to
use the shortest decay time possible, yet one long enough to provide a
smooth sound. Generally, this is somewhere between 50 and 200 ms.
Employed creatively, the range control can also be used to modify the attack
transients of percussive instruments such as pianos, organs or lead sounds (not
on drums, though; these are far too short).
One final aspect of gates is that many will also feature a side-chain connection. In
this context they’re often referred to as ‘key’ inputs but nevertheless this connec-
tion behaves in a manner similar to a compressor’s side chain. Fundamentally,
they allow you to insert an audio signal in the key input which can then be used
to control the action of the gate, which in turn affects the audio travelling through
the noise gate’s normal inputs. This obviously has numerous creative uses but the
most common use is to programme a kick drum rhythm and feed the audio into
the key input. Any signals that are then fed through the gate’s normal inputs will
be gated every time a kick occurs. This action supersedes the gate’s threshold set-
ting, but the attack, release, range, hold or hysteresis controls are often still avail-
able, allowing you to contour the reaction of the gate on the audio signal.
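A key-input gate is easy to sketch. Here a hypothetical four-to-the-floor key pattern chops a sustained pad, with the range control deciding how far the pad is attenuated while the gate is shut; both signals are invented for the demonstration:

```python
def key_gate(signal, key, threshold=0.1, range_db=-80.0):
    """Key-input ('side-chain') gate sketch: the gate opens whenever the
    KEY signal exceeds the threshold, and the audio on the normal inputs
    is attenuated by range_db whenever the key sits below it."""
    floor = 10.0 ** (range_db / 20.0)   # how far down the gated signal sits
    return [x if abs(k) >= threshold else x * floor
            for x, k in zip(signal, key)]

pad = [0.8] * 16                                          # a sustained pad
kick = [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]  # four-to-the-floor key
gated_pad = key_gate(pad, kick)   # the pad now stutters in time with the kick
```

Raising `range_db` towards 0 dB turns the hard chop into a gentler ducking of the pad, which is exactly what the range control does on a hardware gate.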
Another use of this key input is known as ‘ducking’ and many gates will feature
a push button allowing you to engage it. When this is activated, the gate’s pro-
cess is reversed so that any signals that enter the key input will ‘duck’ the vol-
ume of the signal running through the gate. A typical use for this is to connect
a microphone into the key input so that every time you speak, the volume of
the original signal travelling through the gate is ducked in volume. Again, this
supersedes the threshold control, but all of the other parameters are still avail-
able allowing you to contour the action of the signal being ducked, although it
should be noted that the attack time turns into a release control and vice versa.
Also as a side note, some of the more expensive gates will feature a MIDI In port
in place of an audio key input; instead, MIDI note-on messages can be used to
control the action of the gate.
TRANSIENT DESIGNERS
Transient designers are quite simple processors that generally feature only two
controls: an attack and a sustain parameter, both of which allow you to shape
the dynamic envelope of a sound. Fundamentally, this means that you can
alter the attack and sustain characteristics of a pre-recorded audio file the same
as you would when using a synthesizer. While this may initially not seem to be
too impressive, it has a multitude of practical uses.
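The attack/sustain split can be sketched with two envelope followers; this is a common way such processors are described in general, not any manufacturer's actual algorithm, and the pluck signal and gains are invented:

```python
import math

def transient_shape(signal, sr, attack_gain=1.5, sustain_gain=0.7):
    """Transient-designer sketch: a fast and a slow one-pole envelope
    follower track the signal; their difference is only large at the
    onset, so scaling it reshapes the attack while the slow envelope
    stands in for the sustain."""
    def follower(ms):
        coeff = math.exp(-1.0 / (sr * ms / 1000.0))
        env, out = 0.0, []
        for x in signal:
            env = coeff * env + (1.0 - coeff) * abs(x)
            out.append(env)
        return out
    fast, slow = follower(1.0), follower(50.0)
    out = []
    for x, f, s in zip(signal, fast, slow):
        attack_part = max(f - s, 0.0)            # non-zero only at onsets
        env_out = attack_gain * attack_part + sustain_gain * s
        out.append(x * env_out / f if f > 1e-9 else x)
    return out

sr = 8000
pluck = [math.exp(-n / 2000.0) for n in range(4000)]  # a slowly decaying note
shaped = transient_shape(pluck, sr)  # sharper onset, pulled-down sustain
```

Setting `attack_gain` below 1 instead pushes the sound back in the mix, which is the use the text describes for an over-loud drum in a sampled loop.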
Since we determine a significant amount of information about a sound through
its attack stage, modifying this can change the appearance of any sound. For
example, if you have a sampled loop and the drum is too loud, by reducing its
attack (and lengthening the sustain so that the sound doesn’t vanish) it will
be moved further back into the mix. In a more creative application, if a groove
has been sampled from a record it allows you to modify the drum sounds into
something else.
Similarly, if you’ve sampled or recorded vocals, pianos, strings or any instru-
ment for that matter, the transient designer can be used to add or remove some
of the attack stage to make the sound more or less prominent, while strings and
basses could have a longer sustain applied. Likewise, by reducing the sustain
parameter on the transient designer you could shorten the length of the notes.
Notably, noise gates can also be used to create the effect of a transient designer
and, like the previously discussed processors, these can be an invaluable tool
to a dance producer; we’ll be looking more closely at the uses of both in the
genre chapters.
REVERB
Reverberation (often shortened to reverb or just verb) is used to describe the
natural reflections we’ve come to expect from listening to sounds in different
environments. We already know that when something produces a sound, the
resulting changes in air pressure emanate out in all directions but only a propor-
tion of this reaches our ears directly. The rest rebounds off nearby objects and
walls before reaching our ears; thus, it stands to reason that these reflected
waves take longer to reach our ears than the direct sound itself.
This creates a series of discrete echoes that are all closely compacted
together, and from this our brains can decipher a staggering amount of
information about the surroundings. This is because each time the sound is
reflected from a surface, that surface will absorb some of the sound's energy,
thereby reducing the amplitude. However, each surface also has a distinct
frequency response, which means that different materials will absorb the
sound's energy at different frequencies. For instance, stone walls will rebound
high-frequency energy more readily than soft furnishings, which absorb it. If
you were in a large hall it would take longer for the reverberations to decay
away than it would in a smaller room. In fact, in reflective spaces, the
further away from a sound source you are, the more reverberation there will be
in comparison to the direct sound, until eventually, if the sound is far enough
away and the conditions are right, you hear a series of distinct echoes rather
than reverb.
There should be little need to describe all the differing effects of reverb
because you'll have experienced them all yourself. If you were blindfolded, you
would still be able to determine what type of room you're in from the sonic
reflections. In fact, reverb is such a natural occurrence that if it's totally
removed (such as in an anechoic chamber) it can be unsettling almost to the point of
PART 1
Technology and Theory
52
nausea. Our eyes inform the brain of the room's dimensions, but our ears
report something completely different.
Ultimately, while compression is the most important processor, reverb is the
most important effect, because samplers and synthesizers do not generate natural
reverberations until the resulting signals are exposed to air. So, in order to
create some depth in a mix you often need to add it artificially. For example,
the kick may need to be at the front of a mix while any pads sit in the
background. Simply reducing the volume of the pads may make them disappear into
the mix, but by applying a light smear of reverb you can fool the brain into
believing that the sound is further away from the drums because of the
reverberation surrounding it.
However, there's much more to applying reverb than simply increasing the
amount that is applied to the sound. As we've seen, reverb behaves very
differently depending on the furnishings and wall coverings, so reverb units
offer many more parameters, and using them successfully depends on knowing the
effect each of these has on a sound. What follows is a list of the controls
available on a reverb unit, though it should be noted that in many cases not
all of these will be available – it depends on the quality of the unit itself.
Ratio (Sometimes Labelled as Mix)
This controls the balance between the direct sound and the amount of
reverberation applied. If you increase the ratio to near maximum, there will be
more reverb than direct sound, while if you decrease it significantly, there
will be more direct sound than reverb. Using this, you can make sounds appear
further away or closer to you.
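In digital terms this control can be sketched as a simple crossfade between the unprocessed (dry) and reverberated (wet) signals. The linear crossfade and the 0–1 mix range below are simplifying assumptions for illustration; real units may scale the two levels differently.

```python
def apply_mix(dry, wet, mix):
    """Blend direct (dry) samples with reverberated (wet) samples.

    mix = 0.0 gives only the direct sound, mix = 1.0 only the reverb.
    """
    return [d * (1.0 - mix) + w * mix for d, w in zip(dry, wet)]

dry = [1.0, 0.5, -0.5]
wet = [0.2, 0.1, -0.1]
close = apply_mix(dry, wet, 0.2)    # mostly direct: the sound appears close
distant = apply_mix(dry, wet, 0.8)  # mostly reverb: the sound appears far away
```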
Pre-Delay Time
After a sound occurs, the time separation between the direct sound and the first
reflection to reach your ears is referred to as the pre-delay. This parameter on
a reverb unit allows you to specify the amount of time between the start of the
unaffected sound and the beginning of the first sonic reflection. In a practical
sense, by using a long pre-delay setting the attack of the instrument can pull
through before the subsequent reflections appear. This can be vital in
preventing the reflections from washing over the transient of instruments,
forcing them towards the back of a mix or muddying the sound.
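Because reflections travel at the speed of sound, a plausible pre-delay can be estimated from the extra distance the first reflection covers relative to the direct path. This rule-of-thumb sketch is illustrative rather than anything from the text; the 343 m/s figure assumes air at roughly 20 °C.

```python
SPEED_OF_SOUND = 343.0  # metres per second in air at roughly 20 degrees C

def pre_delay_ms(extra_path_metres):
    """Milliseconds by which a reflection lags the direct sound,
    given the extra distance it travels to reach the listener."""
    return extra_path_metres / SPEED_OF_SOUND * 1000.0

# A reflection travelling 10 m further than the direct sound
# arrives roughly 29 ms after it.
lag = pre_delay_ms(10.0)
```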
Early Reflections
Early reflections are used to control the sonic properties of the first few
reflections we receive. Since sounds reflect off a multitude of surfaces, subtle
differences are created between subsequent reflections reaching our ears. Due to
the complex nature of these first reflections, only the high-end processors
feature this type of control, which allows you to determine the type of surface
the sound has reflected from.
Diffusion
This parameter is associated with the early reflections and is a measure of how
far the early reflections are spread across the stereo image. The amount of
stereo width associated with the reflections depends on how far away the sound
source is. If a sound is far away, much of the stereo width of the reverb will
dissipate, but there will be more reverberation than if it were upfront. If the
sound source is quite close, however, the reverberations will tend to be less
spread and more monophonic. This is worth keeping in mind, since many artists
wash a sound in stereo reverb to push it into the background and then wonder why
the stereo image disappears and doesn't sound quite 'right' in context with the
rest of the mix.
Density
Directly after the early reflections come the rest of the reflections. On a
reverb unit this is referred to as the density. Using this control it's possible
to vary the number of reflections and how fast they should repeat. By increasing
it, the reflections will become denser, giving the impression that the surface
they have reflected from is more complex.
Reverb Decay Time
This parameter controls the amount of time the reverb takes to decay away. In
large buildings the reflections will generally take longer to decay into
silence than in a smaller room, so by increasing the decay time you can
effectively increase the size of the 'room'. This parameter must be used
cautiously, however: if you use a large decay time on a motif, the reflections
from previous notes may still be decaying when the next note starts. If the
motif is continually repeated, it will be subjected to more and more
reflections until it eventually turns into an incoherent mush of frequencies.
The amount of time it takes for a reverb to fade away (after the original sound
has stopped) is measured by how long it takes for the sound energy to decay to
one-millionth of its original value. Since one-millionth of the energy equates
to a 60 dB reduction, reverb decay time is often referred to as the RT60 time.
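The one-millionth/60 dB relationship can be checked with a line of arithmetic, since a ratio of sound energy expressed in decibels is 10 × log₁₀(ratio):

```python
import math

def energy_ratio_to_db(ratio):
    """Express a ratio of sound energy in decibels."""
    return 10.0 * math.log10(ratio)

# One-millionth of the original energy is a 60 dB drop,
# which is where the name RT60 comes from.
drop = energy_ratio_to_db(1e-6)  # approximately -60.0
```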
HF and LF Damping
The further reflections have to travel, the less high-frequency content they
will have, since the surrounding air absorbs it. Additionally, soft furnishings
will also absorb higher frequencies, so by reducing the high-frequency content
(and reducing the decay time) you can give the impression that the sound is in
a small enclosed area or surrounded by soft furnishings. Alternatively, by
increasing the decay time and removing smaller amounts of the high-frequency
content you can make the sound source appear further away. Further, by
increasing the lower-frequency damping you can emulate a large open space. For
instance, while singing in a large cavernous area there will be a low-end
rumble to the reflections but not as much high-frequency energy.
Despite the number of controls a reverb unit may offer, it is also important to
note that units from different manufacturers will sound very different from one
another, as each manufacturer uses different algorithms to simulate the effect.
Although it is essential to use a good reverb unit (such as a Lexicon hardware
unit or the TC Native plug-ins included on the CD), it's not uncommon to use
two or three different models of reverb in one mix.
CHORUS
Chorus effects attempt to emulate the sound of two or more of the same
instruments playing the same parts simultaneously. Since no two
instrumentalists could play exactly in time with one another, the result is a
series of phase cancellations. This is analogous to two slightly detuned
synthesizer waveforms playing simultaneously: there will be a series of phase
cancellations as the two frequencies move in and out of phase with one another.
A chorus unit achieves this same effect by delaying the incoming audio signal
slightly in time while also dynamically changing the delay time and amplitude
as the sound continues.
To provide control over this modulation, a typical chorus effect offers three
parameters, all of which relate directly to the LFO parameters on a typical
synthesizer. The first lets you select a modulation waveform that will be used
to modulate the pitch of the delayed signal, while the second and third set the
modulation rate (referred to as the frequency) and the depth of the chorus
effect (often referred to as delay). However, because the modulation runs at a
constant depth, rate and waveform, it doesn't produce the 'authentic' results
you would experience with real instrumentalists. Nevertheless, it has become a
useful effect in its own right and is often employed to make oscillators and
timbres appear thicker, wider and much more substantial.
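The principle described above can be sketched as a delay line whose delay time is swept by a sine-wave LFO before being mixed back with the dry signal. All the parameter values here (rate, base delay, depth, the 50/50 mix) are illustrative assumptions, not settings from any particular unit.

```python
import math

def chorus(samples, sr, rate_hz=1.5, base_delay_ms=20.0, depth_ms=5.0):
    """Mix the input with a copy whose delay time is swept by a sine LFO."""
    out = []
    for n, x in enumerate(samples):
        lfo = math.sin(2.0 * math.pi * rate_hz * n / sr)
        delay = (base_delay_ms + depth_ms * lfo) / 1000.0 * sr  # in samples
        pos = n - delay
        lo = int(math.floor(pos))
        frac = pos - lo
        # Linear interpolation for the fractional delay (zeros before the start)
        a = samples[lo] if lo >= 0 else 0.0
        b = samples[lo + 1] if lo + 1 >= 0 else 0.0
        out.append(0.5 * x + 0.5 * (a + frac * (b - a)))
    return out

sr = 44100
tone = [math.sin(2.0 * math.pi * 440.0 * n / sr) for n in range(sr // 10)]
thickened = chorus(tone, sr)
```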
PHASERS AND FLANGERS
Phasers and flangers are very similar effects with subtle differences in how
they are created, but they work on a principle comparable to the chorus effect.
Originally, phasing was produced by using two tape machines that played slightly
out of sync with one another. As you can probably imagine, this created an
irregularity between the two machines, which resulted in the phase relationship
of the audio being slightly different, in effect producing a hollow,
phase-shifted sound.
This idea was developed further in the 1950s by Les Paul as he experimented by
applying pressure to the 'flange' (i.e. the metal circle that the tape is wound
upon) of the second tape machine. This effectively slowed down the speed of the
second machine and produced a more delayed, swirling effect due not only to the
phase differences but also to the speed. With digital effect units, both effects
work by mixing the original incoming signal with a delayed version and also by
feeding some of the output back into the input. The only difference
between the two is that flangers use a time delay circuit to produce the effect
while a phaser uses a phase shift circuit.
Nevertheless, both use an LFO to modulate either the phase shifting of the
phaser or the time delay of the flanger. This creates a series of phase
cancellations, since the original and delayed signals are out of phase with one
another. The resulting effect is that flangers, because they use a time delay
circuit, produce a series of notches that are harmonically related to the
original audio signal, while the notches of a phaser are set by its phase-shift
stages and so are not harmonically spaced. Both flangers and phasers share the
same parameters: a rate parameter to control the speed of the LFO along with a
feedback control to set how deeply the effect colours the audio. Notably, some
phasers will only use a sine wave as a modulation source, but most flangers will
allow you not only to change the shape but also to control the number of delays
used to process the original signal. Today, both these effects have become a
staple in the production of dance music, especially House, with the likes of
Daft Punk using them on just about every record they've ever produced.
DIGITAL DELAY
To the dance music producer, digital delay (often referred to as digital delay
line – DDL) is one of the most important effects to own as, used creatively, it
can be one of the most versatile. The simplest units allow you to delay the
incoming audio signal by a predetermined time, commonly specified in
milliseconds or sometimes in note values. The number of delays produced by the
unit is often referred to as the feedback, so by increasing the feedback setting
you can produce more than one repeat from a single sound. This works by sending
some of the delayed output back into the effect's input so that it's delayed
again, and again, and again, and so forth. Obviously this means that if the
feedback is set to a very high value, the repeats collect together rather than
gradually dying away, until eventually you'll end up with a horrible howling
sound.
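The feedback loop can be sketched directly: each output sample is fed back into the line one delay-time later. The numbers below are illustrative; the point is that a feedback below 1.0 makes each repeat quieter than the last, while 1.0 or more makes the repeats build up into the howl described above.

```python
def feedback_delay(samples, delay_samples, feedback, tail):
    """Repeat a signal by feeding the delayed output back into the input."""
    out = list(samples) + [0.0] * tail
    for n in range(delay_samples, len(out)):
        out[n] += feedback * out[n - delay_samples]
    return out

# A single click through the delay line:
click = [1.0] + [0.0] * 9
echoes = feedback_delay(click, delay_samples=10, feedback=0.5, tail=40)
print([echoes[i] for i in range(0, 50, 10)])  # → [1.0, 0.5, 0.25, 0.125, 0.0625]
```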
While all delay units work on this basic premise, the more advanced units may
permit you to delay the left and right channels individually and pan them to the
left and right of the stereo image. They may also allow you to pitch shift the
subsequent delays, employ filters to adjust the harmonic content of the delays,
distort or add reverb to the results and apply LFO modulation to the filter. Of
all these additional controls (most of which should require no explanation) the
modulation is perhaps the most creative application to have on a delay unit, as
it allows you to modulate the filter's cut-off or pitch, or both, with an LFO.
The number and type of waveforms on offer vary from unit to unit, but
fundamentally most will feature at least a sine, square and triangle wave.
Similar to a synthesizer's LFO parameters, they will feature rate and depth
controls allowing you to adjust how fast and by how much it should modulate the
filter cut-off or pitch parameters.
One of the most common uses for a delay in dance music is not necessarily to
add a series of delays to an audio signal but to create an effect known as
granular delay. As we touched upon in Chapter 3, we cannot perceive individual
sounds if they are less than 30 ms apart, which is the principle behind granular
synthesis. However, if a sound is sent to a delay unit with the delay time set
to less than 30 ms and combined with a low feedback setting, the subsequent
delays collect together over a period too short for us to perceive as a delay.
The resulting effect is that the delayed timbre appears much bigger, wider and
more upfront. This technique is often employed on leads or, in some cases,
basses if the track is based around a powerful driving bass line.
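A minimal sketch of the idea: mix in a single repeat shorter than 30 ms. The delay time and level below are illustrative assumptions; what matters is that the repeat arrives too quickly to be heard as a separate echo, so it fuses with, and fattens, the source.

```python
import math

def short_delay_thicken(samples, sr, delay_ms=12.0, level=0.6):
    """Add one sub-30 ms repeat; too short to register as an echo,
    it fuses with the source and simply thickens it."""
    d = int(sr * delay_ms / 1000.0)
    return [x + level * (samples[n - d] if n >= d else 0.0)
            for n, x in enumerate(samples)]

sr = 44100
lead = [math.sin(2.0 * math.pi * 220.0 * n / sr) for n in range(sr // 10)]
thick = short_delay_thicken(lead, sr)
```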
EQ
At its most basic, EQ is a frequency-specific volume control that allows you to
intensify or attenuate specific frequencies. For this, three controls are
required:

- A frequency control allowing you to home in on the frequency you want
to adjust.
- A 'Q' control allowing you to determine how many frequencies either
side of the centre frequency you want to adjust.
- A gain control allowing you to attenuate or intensify the selected frequencies.

Notably, not all EQ units will offer this degree of control, and some units will
have a fixed frequency or a fixed Q, meaning that you can only adjust the volume
of the frequencies that are preset by the manufacturer. EQ plays a much larger
role in mixing than it does in sound design, so this has only been a quick
introduction and we'll look much more deeply into its effects when we cover
mixing and mastering in later chapters.
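These three controls map directly onto a standard 'peaking' filter. As an illustration, the coefficient recipe below follows the widely circulated Audio EQ Cookbook design – an assumed, representative implementation rather than anything specific to this book; with it, the gain at the centre frequency comes out at exactly the requested boost or cut.

```python
import math

def peaking_eq_coeffs(fs, f0, q, gain_db):
    """Biquad coefficients for a peaking EQ (Audio EQ Cookbook style).

    fs: sample rate; f0: centre frequency (the 'frequency' control);
    q: bandwidth (the 'Q' control); gain_db: the 'gain' control.
    """
    a_lin = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1.0 + alpha * a_lin, -2.0 * math.cos(w0), 1.0 - alpha * a_lin]
    a = [1.0 + alpha / a_lin, -2.0 * math.cos(w0), 1.0 - alpha / a_lin]
    return b, a

# A 6 dB boost at 1 kHz with a Q of 1.4, at a 44.1 kHz sample rate:
b, a = peaking_eq_coeffs(44100.0, 1000.0, 1.4, 6.0)
```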
DISTORTION
The final effect for this chapter, distortion, is pretty much self-explanatory:
it introduces an overdrive effect to any sounds that are fed through it.
However, while the basic premise is quite simple, it has many more uses than
simply grunging up a clean audio signal. As touched upon in Chapter 3, a sine
wave does not contain any harmonics above the fundamental frequency, and
therefore applying effects such as flangers, phasers or filters will have very
little influence. However, if distortion were applied to the sine wave it would
introduce a series of harmonics into the signal, giving the aforementioned
effects something more substantial to work with.
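This can be demonstrated numerically: hard-clip a pure sine and the spectrum sprouts odd harmonics that weren't there before. The clipping threshold and buffer size are arbitrary illustration choices, and the DFT here is the naive textbook sum rather than an optimized FFT.

```python
import math

def hard_clip(samples, threshold=0.5):
    """Crude distortion: flatten anything beyond the threshold."""
    return [max(-threshold, min(threshold, s)) for s in samples]

def magnitude_at(samples, k):
    """Magnitude of the k-th DFT bin (k whole cycles across the buffer)."""
    n = len(samples)
    re = sum(s * math.cos(2.0 * math.pi * k * i / n) for i, s in enumerate(samples))
    im = sum(s * math.sin(2.0 * math.pi * k * i / n) for i, s in enumerate(samples))
    return math.hypot(re, im) / n

n = 1024
sine = [math.sin(2.0 * math.pi * 8 * i / n) for i in range(n)]  # 8 cycles
clipped = hard_clip(sine)

# The clean sine only has energy at bin 8; after clipping, the third
# harmonic (bin 24) appears, giving filters and phasers something to bite on.
clean_3rd = magnitude_at(sine, 24)     # effectively zero
dirty_3rd = magnitude_at(clipped, 24)  # clearly non-zero
```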
Cables, Mixing Desks and Effects Busses
CHAPTER 3
There is more to the various processors and effects used within the creation of
dance music than understanding what they do; you also need to know how to
access them through a typical mixing desk to gain the correct results. While
most producers today rely on a computer to handle the recording, effecting,
processing and mixing, there will nevertheless come a time when you have to
employ external units in your rig, whether synthesizers, processors, effects or
even a sampler, and therefore we'll start by looking at the cables used to
connect these to your mixing desk, laptop or computer.
Any competent studio is only as capable as its weakest link, so if low-quality
cables are used to connect devices together, the cables will be susceptible to
introducing interference, which results in noise. This problem arises because
any cable that carries a current, no matter how small, produces its own voltage
as the current travels along it. The level of voltage produced by the cable
will depend on its resistance and the current passing through it, but this
nevertheless results in a difference in voltage from one end of the cable to
the other.
Because all studio equipment (unless it's all contained inside a Digital Audio
Workstation) requires cables to carry the audio signal to and from the mixing
desk, the additional voltage introduced by the cables is then transmitted around
the instruments, from the mixing desk and through to earth. This produces a
continual loop, resulting in an electromagnetic field that surrounds all the
cables. This field introduces an electrical hum into the signal, an effect known
as 'ground hum'. The best way to reduce this is by using professional-quality
'balanced' cables, although not all equipment, particularly equipment intended
for the home studio, uses this form of cable. Home studio equipment tends to use
'unbalanced' cables and connectors.
The distinction between balanced and unbalanced cable is determined by the
termination connectors at each end. Cables terminated with mono jack or
I don't just use a (mixing) desk to mix sounds together, I use it as a creative
tool.
– Juan Atkins
phono connectors are unbalanced, while stereo Tip–Ring–Sleeve (TRS) jack
connections or XLR connections will be found on balanced cables. Examples of
these are shown in Figure 3.1.
All unbalanced cables are made up of two internal wires contained within the
outer plastic or rubber covering of the cable (along with the earth screen). One
of these internal wires carries the audio signal and is connected to the tip of
the connector, while the other carries the ground signal and is connected
directly to the connector sleeve. The signal is therefore grounded at each end
of the cable,
[Figure 3.1 – Mono, stereo jack and XLR connectors, showing the audio and
ground/earth connections of the mono jack, and the positive audio (+ve),
negative audio (−ve) and ground/earth connections of the stereo jack and the
male XLR (head-on view).]
helping to prevent any interference from the device itself, but is still susceptible
to electromagnetic interference as it is transmitted from one device to another.
Because of this, most professional studios use balanced cables with XLR or TRS
terminating connections, if the equipment they connect to supports it.
TRS connections do not necessarily mean that the cable is balanced, as they can
be used to carry a stereo signal (left channel, right channel and earth), but in
a studio environment they are more commonly used to transfer a mono signal.
Balanced cables contain three wires within the outer screen. In this
configuration a single wire is still used as a ground but the other two carry
the audio signal, one of which is a phase-inverted version of the original. When
this is received by a device, the phase-inverted signal is flipped back into
phase with the original and the two are added together. As a result, any
interference introduced along the cable (which appears identically on both
signal wires) is cancelled out when the two signals are summed together. This is
similar to the way two oscillators that are out of phase cancel each other out,
as described earlier in this chapter. That's the theory. In practice, although
this greatly reduces the problem, phase cancellation rarely removes all of the
interference.
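The cancellation trick is easy to demonstrate with a toy example. The sample values are invented for illustration; the key assumption is that interference couples identically onto both conductors of the pair.

```python
signal = [0.5, -0.2, 0.8, 0.1]   # what the sending device puts out
noise = [0.1, 0.1, -0.05, 0.02]  # hum picked up along the cable run

hot = [s + n for s, n in zip(signal, noise)]    # normal copy + noise
cold = [-s + n for s, n in zip(signal, noise)]  # inverted copy + same noise

# The receiving device re-inverts the cold leg and sums the two legs:
# the noise terms cancel and only the original signal survives.
received = [(h - c) / 2.0 for h, c in zip(hot, cold)]
```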
Another advantage of using balanced cables is that they also utilize a more
powerful signal level. Commonly referred to as the professional standard, a
balanced signal uses a signal level of +4 dBu rather than the semi-professional
signal level of −10 dBV. The reasons for this are more the subject matter of
electrical engineering than music, and although it's not necessarily important
to understand them, a short explanation is given in the text box. If you're not
interested you can skip this box, because all you really need to know is that
+4 dBu signals are hotter and louder than −10 dBV signals and are generally
preferred: the signal is over 11 dB hotter, so the chance of capturing a poor
signal is reduced.
Before light-emitting diode (LED) and liquid crystal display (LCD) displays
appeared on musical gear, audio engineers used volume unit (VU) meters to
measure audio signals. These were featured on any hardware that could receive
an input. This meant that every VU meter had to give the same reading for the
same signal level no matter who manufactured the device. If this were not the
case, different equipment within the same studio would have different signal
levels. Consequently, engineers decided that if 1 milliwatt (mW) was travelling
through the circuitry then the VU meter should read 0 dB. Hence, 0 dB VU was
referred to as 0 dBm (with m standing for milliwatt).
Today's audio engineering societies are no longer concerned with using a
reference level of 1 milliwatt because the power levels today are much higher,
so the level of 0 dBm is now obsolete and we use voltage levels instead. To
convert this into an equivalent voltage level, the impedance has to be
specified, which
in this case is 600 Ohms. For those with some prior electrical knowledge it can
be calculated as follows:
P = V² / R
0.001 W = V² / 600 Ω
V² = 0.001 W × 600 Ω
V = √(0.001 × 600) ≈ 0.775 Volts
For the layman, the result of this equation is 0.775 Volts, and that's all you
need to know.
The value of 0.775 Volts is now used as the reference voltage and is referred to
in dBu rather than dBm. Although it was originally referred to as dBv, it was
often confused with the reference level of dBV (notice the upper-case V), so the
suffix u is used in its place. This is only the reference level, though, and all
professional equipment will output a level that is 4 dB above it, which is where
we derive the +4 dBu standard. Consequently, on professional equipment, the zero
level on the meters actually signifies that it is receiving a +4 dBu signal.
However, some hardware engineers agreed that it would be simpler to use 1 Volt
as the reference instead, which is where the dBV standard originates. Unlike
professional equipment, which uses the +4 dBu output level, unbalanced
equipment outputs at 0.316 Volts, equivalent to −10 dBV. Therefore, on
semi-professional equipment the zero level on the meters signifies that they
are receiving a −10 dBV signal. If the professional and semi-professional
signals are compared, the professional level of +4 dBu (about 1.23 Volts) is
considerably higher than the 0.316 Volts generated by consumer equipment. When
converted to decibels this results in an 11.8 dB difference between the two.
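The arithmetic behind these figures can be laid out explicitly. The references (0.775 V for dBu, 1 V for dBV) come from the text above; the code simply converts both nominal levels to volts and compares them.

```python
import math

DBU_REF = 0.775  # volts at 0 dBu
DBV_REF = 1.0    # volts at 0 dBV

def db_to_volts(db, ref):
    """Convert a decibel level to volts against the given reference."""
    return ref * 10.0 ** (db / 20.0)

pro = db_to_volts(4.0, DBU_REF)         # +4 dBu: roughly 1.23 V
consumer = db_to_volts(-10.0, DBV_REF)  # -10 dBV: roughly 0.316 V
difference_db = 20.0 * math.log10(pro / consumer)  # roughly 11.8 dB
```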
Despite the difference in the levels of the two signals, in many cases it is
still possible to connect a balanced signal to an unbalanced sampler/soundcard/
mixer, with the immediate benefit that the captured signal is 11.8 dB hotter.
Although this usually results in the unbalanced recorder's levels jumping off
the scale, whether the signal is actually distorting should be judged by whether
the distortion is audible. Most recorders employ a safety buffer by setting the
clipping meters below the maximum signal level. This is, of course, a one-way
connection from a balanced signal to an unbalanced piece of hardware; it isn't a
good idea to connect an unbalanced signal to a balanced recorder, because you'll
end up with a poor input signal level and any attempts to boost the signal
further could introduce noise.
Within the typical home studio environment, it is most likely that the equipment
will be of the unbalanced type; therefore, the use of unbalanced connections is
unavoidable. If this is the case, it's prudent to take some precautions that
will prevent the introduction of unwanted electromagnetic interference, but this
can be incredibly difficult.
While the simplest solution would be to disconnect the earth from the power
supply, in effect breaking the ground loop, this should be avoided at all costs.
Keep in mind that the whole point of an earth system is to pass the current
directly to ground rather than through you if there's a fault. Human beings make
remarkably good earth terminals, and as electricity will always take the
quickest route it can find to reach earth, it can be an incredibly painful (and
sometimes deadly) experience to present yourself as a short cut.
A less suicidal technique is to remove the earth connection from one end of the
audio cable. This breaks the loop, but it has the disadvantage that it can make
the cable more susceptible to radio frequency (RF) interference. In other words,
the cable would be capable of receiving signals from passing police cars, taxis,
mobile phones and any nearby citizens' band radios. While this could be useful
if you want to base your music around Scanner's previous work, it's something
you'll want to avoid.
Although in a majority of cases hum will be caused by the electromagnetic
field, it can also be the result of a number of other factors combined. To begin
with, it's worthwhile ensuring that the mains and audio cables are wrapped
separately from one another and kept as far away from each other as possible.
Mains cables create a higher electromagnetic field due to their large current,
and if they are bound together with cables carrying an audio signal, serious
hum can be introduced.
Transformers also generate high electromagnetic fields that cause interference,
and although you may not think that you have any in the studio, both amplifiers
and mixing desks use them. Consequently, amplifiers should be kept at a good
distance from other equipment, especially sensitive equipment such as microphone
pre-amps. If the amplifiers are rack-mounted, there should be a minimum space of
4 rack units between the amplifier and any other devices. This same principle
also applies to rack-mounted mixing desks, which should ideally be placed in a
rack of their own or kept on a desk. If the rack is constructed from metal and
metal screws hold the mixing desk in place, the result is the same as if the
instruments were grounding from another source, and yet more hum is introduced.
Preferably, plastic washers and screw housings should be used, as these isolate
the unit from the rack.
If, after all these possible sources have been eliminated, hum is still present,
the only viable way of removing or further reducing it is to connect devices
together digitally, or invest in a professional mains suppressor. This should be
sourced from a professional studio supplier rather than from the local
electrical hardware or car superstore, as suppressors sold for use within a
studio are specifically manufactured for this purpose, whereas a typical mains
suppressor is designed to suppress only the small amounts of hum that are
typically associated with normal household equipment.
Digital connections can be used as an alternative to analogue cables if the
sampler/soundcard/mixer allows it. This has the immediate benefit that no noise
will be introduced into the signal, with the additional benefit that this
connection can also be used by beat-slicing software to transmit loops to and
from an external hardware sampler. Sampler-to-software connectivity is usually
accomplished via a direct SCSI interface, so there is little need to be
concerned about the digital standards, but on occasion it may be preferable to
transmit the results through true digital interfaces such as Alesis digital
audio tape (ADAT), Tascam digital interface (T-DIF), Sony/Philips digital
interface (S/PDIF) or Audio Engineering Society/European Broadcasting Union
(AES/EBU).
The problem is that digital interfacing is more complex than analogue
interfacing because the transmitted audio data must be decoded properly. This
means that the bit rate, sample rate, sample start and end points, and the left
and right channels must be coded in such a way that the receiving device can
make sense of it all. The problem with this is that digital interfaces appear in
various forms, including (among many others) Yamaha's Y formats, Sony's SDIF,
Sony and Philips' S/PDIF and the AES/EBU standard, none of which are
cross-compatible. In an effort to avoid these interconnection problems, the
Audio Engineering Society (AES) combined with the EBU and devised a standard
connection format imaginatively labelled the AES/EBU standard. This requires a
three-pin XLR connection, similar to the balanced analogue equivalent, although
the connection is specific to digital audio.
Like 'balanced' analogue connections, AES/EBU is expensive to implement, so
Sony and Philips developed a less expensive 'unbalanced' standard known as
S/PDIF. This uses either a pair of phono connectors or an optical TOS-link
interface (Toshiba Optical Source). Most recent samplers and soundcards use a
TOS-link or phono connection to transmit digital information to and from other
devices.
The great thing about standards is that there are plenty of them
Even with compatible interfaces between two devices, both the receiving and the
transmitting device must be clocked together so that they are fully
synchronized. This ensures that they can communicate with one another. If the
devices are not synchronized, the receiving device will not know when to expect
an incoming signal, producing 'jitter'. The resulting effect this has on audio
is difficult to describe but, rather than fill up the book's accompanying CD
with the ensuing ear-piercing noises, it's probably best explained as an
annoying high-frequency racket mixed with an unstable stereo image. To avoid
this, most professional studios use an external clock generator to synchronize
multiple digital units together correctly. This is similar in most respects to a
typical multi-MIDI interface, bar the fact that it generates and sends out word
clock (often abbreviated to WCLK but still pronounced word clock) messages
simultaneously to all devices to keep them clocked together.
The WCLK message works by sending a one-bit signal down the digital cable,
resulting in a square wave that is received by all the attached devices. When
the signal is decoded by the receiving device, the peaks and troughs of the
square wave denote the left and right channels while the width between each
pulse of the wave determines the clock rate.
These stand-alone WCLK generators can be expensive, so within a home studio
set-up the digital mixing desk or soundcard usually generates the WCLK message,
with the devices daisy-chained together to receive the signal. For instance, if
the soundcard is used to generate the clock rate, the WCLK message could be
sent to an effects device, through this into a microphone pre-amplifier, and so
forth before the signal returns back into the soundcard, creating a loop. The
principle is similar to the way that numerous MIDI devices are daisy-chained
together, as discussed in Chapter 1. As with daisy-chained MIDI devices,
though, the signal weakens as it passes through each device; therefore, if the
signal is to pass through more than four devices, it's prudent to use a WCLK
amplifier to keep the signal powerful enough to prevent any jitter.
Provided that the clock rate is correctly transmitted and received, another important factor to consider is the sample rate. When recording, both the transmitting and the receiving devices must be locked to the same sample rate, otherwise the recorder may refuse to go into record mode. Also, unless you are recording the final master, any Serial Copyright Management System (SCMS) features should be disabled.
SCMS was implemented on all consumer digital connections to reduce the possibility of music piracy. It allows only one digital copy to be made from the original. It does this by inserting a 'copyright flag' into the first couple of bytes that are transmitted to the recording device. If the recorder recognizes this flag it will disable the device's record functions. This is obviously going to cause serious problems if you need to edit a second-generation copy because, unless the recorder allows you to disable SCMS, you will not be allowed to record the results digitally. Thus, if you plan to transfer and edit data digitally, it is vital to ensure that you can disable the SCMS system.

If you own a digital audio tape (DAT) machine that does not allow you to disable the SCMS protection system, it's possible to get hold of SCMS strippers, which remove the first few flags from the signal, in effect disabling the SCMS system.
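The flag logic itself is simple to model. The sketch below is purely illustrative: the real SCMS scheme lives in the channel-status bits of the consumer digital format and is more involved than the single hypothetical bit shown here, but the record-permission test, and what a 'stripper' effectively does, look like this:

```python
COPY_PROHIBIT_FLAG = 0x04  # hypothetical bit position for the 'copyright flag'

def recorder_allows_recording(status_byte: int) -> bool:
    """A recorder refuses to enter record mode when the flag is set."""
    return not (status_byte & COPY_PROHIBIT_FLAG)

def scms_stripper(status_byte: int) -> int:
    """What an SCMS stripper effectively does: clear the flag in transit."""
    return status_byte & ~COPY_PROHIBIT_FLAG
```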
MIXING DESK STRUCTURE
While some mixing desks appear relatively straightforward, some professional desks, such as the Neve Capricorn or the SSL, look more ominous. However, whatever the level of complexity, all types of hardware- and software-based mixing desks operate according to the same principles. With a basic understanding of the various features and how the channels, busses, subgroups, EQ and aux sends and returns are configured, you can get the best out of your equipment, no matter how large or small the desk is.
PART 1
Technology and Theory

The fundamental application of any mixing desk is to take a series of inputs from external instruments and provide an interface that allows you to adjust the tonal characteristics of each respective instrument. These signals are then combined together in the desk and fed into the loudspeaker monitors and/or recorder to produce the finished mix. As simple as this premise may be, though, simply looking at a desk reveals that there's actually a lot more going on, and to understand this better we need to break the desk down into its individual input channels.
Typically, a mixing desk can offer anywhere from two input channels to over a hundred, depending on its price. Each of these channels (referred to by engineers as a 'strip') is designed to accept either a mono or stereo signal from one musical source and provide some tonal editing features for that one strip. Clearly, this means that if you have five external hardware instruments, each with a mono output, you would need a desk with an absolute minimum of five strips so that you could input each into a separate channel of the desk. For reasons we'll touch upon, you should always aim to have as many channel strips in a mixer as you can afford, no matter how few instruments you may own.
Generally speaking, the physical inputs for each channel are located at the rear of the desk and will consist of a 1/4-inch jack or XLR connection, or both. The latter configuration doesn't mean that two inputs can be directed into the same channel strip simultaneously; rather, it allows you to choose whether that particular channel accepts a signal from a jack or an XLR connector. Most desks will have the capacity to accept a signal of any level, whether it's mic level (−60 dBu) or line level (−10 dBV or +4 dBu) and whether the cables are balanced or unbalanced.

Some older mixing desks may describe these inputs as Low-Z or Hi-Z, but this simply describes the input impedance of that particular channel. If Hi-Z is used then the impedance is higher to accept unbalanced line-level signals, while if Low-Z is used the impedance is lower to accept balanced ones.
As all mixing desks operate at line level to keep noise to a minimum, once a signal enters the desk it is first directed to the pre-amplifier stage to bring the incoming signal up to the operating level of the desk. Although this pre-amp stage isn't particularly necessary with line-level signals, as most desks use this as their nominal operating level, it is required to bring the relatively low levels generated by most microphones up to a more respectable level for mixing. This pre-amp will have an associated rotary gain control on the fascia of the desk (often called a pot, short for potentiometer) labelled 'trim' or 'gain'. As its name would suggest, this allows you to adjust the level of the signal entering the channel's input by increasing the amount of amplification at the pre-amp.
To reduce the possibility of the mixer introducing noise into the channel, you should ensure that the signal entering the mixer is as high as possible, rather than inputting a low-level signal and boosting it at the pre-amp stage of the mixer. Keep in mind that a good signal entering the mixer before the pre-amp is more likely to remain noise-free as it passes through the rest of the mixer.
Some mixers may also offer a 'pad' switch at the input stage, which is used to attenuate the incoming signal before it reaches the pre-amplifier. The amount of attenuation applied is commonly fixed at 12 or 24 dB and is used to prevent any hot signals entering the inputs from overdriving the desk. A typical example of this may be where the mixer operates at the 'semi-professional' −10 dBV and one of the instruments connected to it outputs at the 'professional' +4 dBu. As we've previously touched upon, this would mean that the input level at the desk would be 11.8 dB higher than the desk's intended operational level. Obviously this will result in distortion, but by activating the pad switch the incoming signal could be attenuated by 24 dB, and the volume could then be made back up to unity gain with the trim pot on that particular channel.
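The 11.8 dB figure can be verified from the two reference voltages: dBu is referenced to 0.775 V and dBV to 1 V. A quick check of the arithmetic:

```python
import math

def dbu_to_volts(level_dbu: float) -> float:
    return 0.775 * 10 ** (level_dbu / 20)   # dBu is referenced to 0.775 V

def dbv_to_volts(level_dbv: float) -> float:
    return 1.0 * 10 ** (level_dbv / 20)     # dBV is referenced to 1 V

pro = dbu_to_volts(4.0)      # ~1.228 V, the 'professional' level
semi = dbv_to_volts(-10.0)   # ~0.316 V, the 'semi-professional' level
difference_db = 20 * math.log10(pro / semi)  # ~11.8 dB hotter
```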
Busses refer to the various signal paths within a mixer through which the input audio can travel.

The term 'unity gain' means that one or all of the mixer's channel volume faders are set at 0 dB, the loudest signal possible before you begin to move into the mixer's headroom and risk distortion.
On top of this, some high-end mixers may also offer a phantom power switch, a ground lift switch and/or a phase reverse switch, the functions of which are described below:

Phantom power is used to supply capacitor microphones with the voltage they require to operate. This also means that the mixer has to use its own amplifier to increase the signal level and, generally speaking, the amp circuits in mixers will not be as good as those found in stand-alone microphone pre-amplifiers.

A ground lift switch will only be featured on mixers that accept balanced XLR connections; when this is activated, it disconnects pin one of the connector from the mixer's ground circuitry. As touched upon earlier, this can eliminate ground hum or extraneous noises.

Phase reverse – sometimes marked by a small circle with a diagonal line through it – switches the polarity of the incoming signal, which can have a multitude of uses. Typically, it's included on mixing desks to prevent recordings taken simultaneously with a number of microphones from interfering with one another, since the sound reaching one microphone can weaken the sound from a second microphone that's positioned further away. Consequently, phase-reverse switches are usually only found next to XLR microphone inputs, since it's easy to implement on these – the switch simply swaps over the two signal pins.
Following this input section, the signal is commonly routed through a mute switch (allowing you to instantly mute the channel) and into the insert buss. Inserts allow you to add processors such as compressors or noise gates to clean up or bring a signal under control before it's routed into the EQ. The EQ is the most important aspect of a mixing desk, as it allows you to modify the tonal content of a sound for creative applications or, more commonly, so that the timbre fits into a mix better. As a result, when looking for a mixing desk it's worthwhile seeing how well equipped the EQ section is, as this will give a good impression of how useful the desk will be to you. Most cheap consumer mixers will only offer high and low EQ sections, which can be used to increase or reduce the gain at a predetermined frequency. More expensive desks will offer sweepable EQs that allow you to select and boost any frequency you choose, and offer filters so that you can remove all the frequencies above or below a certain frequency. This section may also offer a pre- or post-EQ button that allows you to re-route the signal path to bypass the EQ section (pre-EQ) and move directly to the pre-fader buss, or to move through the EQ section (post-EQ) and then into the pre-fader buss. This allows you to bypass the EQ section once in a while to determine the effect that any EQ adjustments have had on a sound.
The pre-fader buss allows you to bypass the channel's volume fader and route the signal at its nominal volume to the auxiliary buss, or to go through the fader and then to the auxiliary buss. Fundamentally, an aux buss is a way of routing the channel's signal to physical outputs, usually labelled 'aux outs', located at the rear of the desk. The purpose behind this is to send the channel's signal out to an external effect and then return the results back into the desk (we'll look more closely at aux and insert effects in a moment). The number of aux busses featured on a desk varies widely from 1 to over 20, and not all desks will give the option of pre- or post-fader aux sends; some may be hardwired to pre-fader. This means that you would have no control over the level that's sent to the aux buss via the channel fader; instead, the aux send control would be used to control the level.
After this aux buss section, the signal is passed on to the volume control for the channel. These can sometimes appear in the form of rotary controllers on cheaper desks, but generally they are faders with a 60 or 100 mm throw. Although these shouldn't require much explanation, it's astounding how many users believe that they are used to increase the amplification of a particular channel. This isn't the case at all: if you look at any desk, the faders are marked 0 dB close to the top of their throw instead of at the bottom. This means that you're not amplifying the incoming signal by increasing the fader's position; rather, you're allowing more of the original signal to travel through the fader's circuitry, which increases the gain on that channel.
'Throw' refers to the two extremes of a fader's movement from minimum to maximum.

After the faders, the resulting signal travels through the panning buss (allowing you to pan the signal left and right) and into a subgroup buss. The number of subgroup busses depends entirely on the price and model of mixer, but essentially these allow you to group a number of fader positions together
and control them all with just one fader control. A typical application of this is where the kick, snare, claps, hi-hats and cymbals each have a channel of their own in the mixer. By setting each individual element to its respective volume and required EQ (in effect mixing down the drum loop so that it sounds right), all of these levels can be routed to a single group track, whereby moving just one subgroup fader adjusts the volume of the entire drum sub-mix to suit the rest of the mix.

FIGURE 3.2
The typical structure of a mixing desk: input, pad, pre-amp (gain), insert effects, mute, EQ (with bypass), aux buss (with effects on the aux return and post-fader aux sends), channel fader, pan, subgroup and master fader
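The drum sub-mix example can be sketched as a sum of balanced channels scaled by one subgroup fader; all gains here are linear and the values are hypothetical:

```python
def subgroup_mix(samples, channel_gains, subgroup_gain):
    """Sum each drum channel at its own balance, then scale the whole
    sub-mix with a single subgroup fader."""
    submix = sum(g * s for g, s in zip(channel_gains, samples))
    return subgroup_gain * submix

# kick, snare, claps, hi-hats, cymbals at their mixed-down balances
drums = [0.9, 0.7, 0.4, 0.3, 0.2]
balance = [1.0, 0.8, 0.5, 0.6, 0.4]
full = subgroup_mix(drums, balance, 1.0)
quieter = subgroup_mix(drums, balance, 0.5)  # the whole kit drops together
```

The internal balances are untouched; only the combined level moves.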
Finally, these subgroup channels, if used, along with the rest of the 'free' faders, are combined into the main stereo mix buss, which passes to the main mix fader, allowing you to adjust the overall level of the mix. It's at this stage that things can become more complicated, since the direction and options available to this stereo buss are entirely dependent on the mixer in question. In the majority of smaller, less expensive mixers the buss will simply pass through a pan control and then out to the mixer's physical outputs. Semi-professional desks may pass through the pan control and an EQ before sending the mix out to the main physical outputs. Professional desks may go through panning and EQ and then split the buss into any number of other stereo busses, allowing you to send the mix not only to the speakers but also to the headphones, another pair of monitors (situated in the recording room) and a recording device.
Ultimately, the more EQ and routing options a desk has to offer, the more creative you can become. But, at the same time, the more features on offer, the more expensive it will be. Naturally, most of these concerns are circumvented with audio sequencers, as they generally offer everything a professional desk does, with the only limitation being the number of physical connections dictated by the soundcard you have fitted. This doesn't deter many from relying entirely on sequencers, as any external instruments can always be recorded as audio, placed on their own track and have software effects applied.
ROUTING EFFECTS AND PROCESSORS
Understanding the internal buss structure of a typical mixing desk (hardware or software) is only part of the puzzle, because when it comes to actually processing signals, the insert/aux buss system they are transferred through will often dictate the overall results. Any 'external' signal processing can be divided into two groups: processors and effects. The difference between these two is relatively simple but important to recognize.
All effects utilize a wet/dry control (wet is the affected signal and dry is the unaffected signal) that allows you to configure how much of the original signal remains unaffected and how much is affected. A typical example of this would be reverb, whereby you don't necessarily want to run the entire signal through the effect, otherwise it could appear swamped in decays; instead you would want to affect the signal by only a small amount. For instance, you may keep 75% of the original signal and apply just 25% of the reverb effect.
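The 75/25 example corresponds to a single-knob crossfade between the dry and wet signals, which might be sketched as:

```python
def wet_dry_mix(dry: float, wet: float, wet_amount: float) -> float:
    """Single-knob balance: raising the wet proportion lowers the dry."""
    return (1.0 - wet_amount) * dry + wet_amount * wet

# 75% original signal, 25% reverb return (per-sample, values hypothetical)
out = wet_dry_mix(dry=1.0, wet=0.4, wet_amount=0.25)
```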
Conversely, all processors, such as compressors, noise gates and limiters, are designed to work with 100% of the signal and thus have no wet/dry parameter. This is simply because in many instances there would be little point trying to control the dynamics of just some of the signal, because the rest of it would still retain its dynamic range. Nevertheless, due to the different nature of these two processes, a mixing desk uses two different buss systems to access them: an insert buss for processors and an auxiliary buss for effects. The difference and reasons behind using these two busses will become clear as we look at both.
AUXILIARY BUSS
Nearly all mixers feature the aux buss after the EQ rather than before, since it's considered that effects are applied at the final stages of a mix to add the final 'polish'. At this point, a percentage of the signal can be routed through the aux buss to a series of aux outs located at the back of the mixer. Each aux out is connected to an effects unit and the signal is then returned into the desk. With the aux potentiometer on the channel strip set at zero, the audio signal ignores the buss and moves directly onto the fader. However, by gradually increasing the aux pot you can control how much of the signal is split off and routed to the aux buss and on to the effect. The more this is increased, the more audio signal will be directed to the aux buss. The aux return (the effected signal) is then returned to a separate channel on the mixing desk, which can then be mixed with the original channel.
As Figure 3.3 shows, two separate audio leads are required. One has to carry the signal from the mixer into the effects unit, while the other has to return the effect back into the desk. Additionally, the effected signal cannot be returned into the original channel, since there will still be some dry audio at the fader; instead it is usually returned into specific aux returns. These signals are then bussed through the mixer into additional mixer 'subgroup' channels. These are most commonly a group of volume faders or pots (one for each return), which permit you to balance the volume of the effected signal with the original dry channel.
Note that when accessing effects from a mixing desk, the mix control on the
effects unit should generally be set to 100% wet since you can control the
amount of wet/dry signal with the mixing desk itself.
FIGURE 3.3
An aux send configuration: the signal leaves the mixer via an aux out, passes through the effects unit and comes back via the aux return

While returning the effected signal to its predefined aux channel return may seem sensible, very few artists will actually bother using the aux returns at all,
instead preferring to return the signal to a normal free mixing channel. This opens up a whole new realm of possibilities, since the returning effect has access to the channel's pre-amp, an insert, EQ, pan and volume. For example, a returned reverb effect could be pushed into distortion by increasing the mixer's pre-amp, or the EQ could be used as a low- and high-frequency filter if the effects unit doesn't feature them.
Another benefit of using aux sends is that each channel on the mixer will have access to the same auxiliary busses, meaning that signals from other channels can also be sent down the same auxiliary buss to the effects unit. This approach can be especially useful when using computer-based audio sequencers, since opening multiple instances of the same effect can use up a considerable amount of the CPU. Instead, you can simply open up one effect and send each channel to it, which also saves the time spent setting up a number of effects units that all share the same parameters. What's more, in a hardware situation, as only a portion of the signal is being sent to the effect, there is less noise and signal degradation, and as the effect is returned to a separate channel, you have total control over the wet and dry levels through the mixer.
One final aspect of aux busses is that they can sometimes be configured to operate either pre- or post-fader. Of the two, pre-fader is possibly the least used, since the signal is split off before it reaches the fader. This means that if you were to reduce the fader you would not reduce the gain of the signal being bussed to the effect, so with every fader adjustment you would also need to readjust the aux pot to suit. This can have its uses, as it allows you to reduce the volume of the dry signal while leaving the wet at a constant volume, but generally post-fader is much more useful while mixing. Using post-fader, reducing the channel's fader will also reduce the auxiliary send, saving the need to continually adjust the aux buss send level whenever you change the volume.
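The difference between the two send types comes down to where the send taps the signal, which can be sketched as follows (all gains linear, values hypothetical):

```python
def aux_send(sample: float, fader_gain: float, send_amount: float,
             post_fader: bool = True) -> float:
    """Post-fader sends follow the channel fader; pre-fader sends tap
    the signal before it reaches the fader."""
    tapped = sample * fader_gain if post_fader else sample
    return tapped * send_amount

post = aux_send(1.0, fader_gain=0.5, send_amount=0.8)                   # follows the fader
pre = aux_send(1.0, fader_gain=0.5, send_amount=0.8, post_fader=False)  # ignores the fader
```

Pulling the fader down halves the post-fader send but leaves the pre-fader send untouched, which is exactly the behaviour described above.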
INSERT BUSS
Unlike aux busses, insert busses are positioned just before the EQ and are designed for use with processors rather than effects. This is because processors require the entire signal: there is no point in applying compression or gating to only part of a signal! Typically, the output of a microphone pre-amp is fed directly into a compressor to prevent any clipping. The resulting compressed signal is then connected into the mixing desk's channel, so that the signal flows out of the microphone pre-amp, into the compressor and finally into the desk for mixing. This, however, is a rather convoluted way of working, because to compress the output of a synthesizer, for example, requires that you scrabble around at the back of the rack and rewire all the necessary connections. In addition, if the output from the synthesizer is particularly low, there is no way of increasing its gain into the compressor.
Cables, Mixing Desks and Effects Busses
CHAPTER 3 71
This can be avoided if the mixing desk features insert points, which commonly appear directly after the pre-amp stage. Consequently, the mixer's pre-amp stage can be used to increase the signal level from the synthesizer before it's passed on to the insert buss. This is then passed into the external compressor, where the signal is squashed before being returned to the same mixing channel.
On occasion, compressors may be accessed through the aux send buss rather than the insert buss. With this configuration, you can increase the overall level of the signal by mixing the compressed signal with the uncompressed signal. This can help to preserve the transients of the signal.
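This parallel compression trick can be sketched with a toy compressor; the threshold, ratio and blend values here are arbitrary illustrations, not recommended settings:

```python
def compress(sample: float, threshold: float = 0.5, ratio: float = 4.0) -> float:
    """Toy compressor: reduce the level above the threshold by the ratio."""
    level = abs(sample)
    if level <= threshold:
        return sample
    squashed = threshold + (level - threshold) / ratio
    return squashed if sample >= 0 else -squashed

def parallel_compress(sample: float, blend: float = 0.5) -> float:
    """Blend the squashed copy with the untouched signal, so the original
    transients survive on top of the denser compressed body."""
    return (1.0 - blend) * sample + blend * compress(sample)
```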
This is accomplished using only one physical connection – an insert cable – on the mixing desk. Fundamentally, this is an audio cable with a TRS (Tip-Ring-Sleeve) jack at the desk end, carrying both the send and return signals. The tip is used to transmit the signal from the desk's channel to the compressor, and the processed signal is returned back into the same mixing channel through the ring connection of the same cable. Most mixers return this processed signal before the EQ and volume buss, allowing shaping of the overall signal after processing.
Tonal shaping after processing won't necessarily create any problems if the processor happens to be a noise gate, but if it's a compressor, it raises the issue that if the EQ were boosted after compression, the signal would be pushed past the level of the previously controlled dynamics. To understand the issue, we first need to consider what would happen if the compressor were returned after the EQ section.
To begin with, the EQ section would be used to correct the tone of the incoming signal on that particular channel before it was sent to the compressor. The compressor would then be adjusted so that it controls the dynamics of the EQ'd timbre before the signal is returned to the desk. Due to the nature of compression, however, the tonal content of the sound will be modified, so it would need to be EQ'd again. This subsequent EQ will reintroduce the peaks that were previously controlled by the compressor, so the compressor must be adjusted again. This would alter the tonal content, so it would need EQ'ing yet again, and so on. Thus, an ever-increasing circle of EQ, compression, EQ, compression must continue until the signal is overcompressed or pushed beyond unity gain, distorting the desk. By inserting the compressor before the EQ section, this continual circle can be avoided.

FIGURE 3.4
An insert configuration: the insert send carries the signal out to the processor's input and the processed signal returns on the same connection
With the compressor positioned before the EQ, the issue of destroying the previously controlled dynamics when boosting the EQ still arises, but this shouldn't be the case provided the EQ is used correctly. As touched upon in earlier chapters, it's in our nature to perceive louder sounds as infinitely better than quieter ones, even if the louder sound is tonally worse than the quieter one. Thus, when working with EQ it's necessary to reduce the channel's fader while boosting any frequencies, so that the signal remains at the same volume as the un-EQ'd version. Used in this way, when bypassing the modified EQ to compare it with the unmodified version, a difference in volume cannot cloud your judgement. As a result, the signal level of the modified EQ is the same as the unmodified version, so the dynamics from the compressor remain the same.
This approach maintains a more natural sound, but it is worth experimenting by placing the EQ before the compressor. For example, using this configuration the EQ could be used to create a frequency-selective compressor. In this set-up, the loudest frequencies control the compressor's action; by boosting quieter frequencies so that they instead breach the compressor's threshold, it's possible to change the dynamic action of a drum loop or motif. In fact, it's important to note that the order of any processing or effects, if they're inserted, can have a dramatic influence on the overall sound.
From a theoretical mixing point of view, effects should not be chained together in series. Although it is perfectly feasible to route reverb, delay, chorus, flangers and phasers into a mix as inserts, chaining effects in this way can introduce a number of problems. If an effect is used as an insert, 100% of the signal will be directed into the external effects unit, and because many units introduce low levels of noise while also degrading the signal's overall quality, both the effected signal and the noise will be returned into the desk. What's more, control over the relationship between dry audio and wet effects would have to come from the effects unit's interface and, in many instances, greater control will be required.
Many effects use a single control to adjust the balance between dry and wet levels, so as the wet level is increased, the dry level decreases proportionally, and vice versa. While this may not initially appear problematic, if you decided to thicken out a trance lead with delay or reverb but wanted to keep the same amount of dry level in the mix that you already have, it isn't going to be possible. As soon as the wetness factor is increased, the dry level will decrease proportionally, which may result in a more wet than dry sound. This can cause the transients of each hit to be washed over by the subsequent delays or reverb tail from the preceding notes, reducing the impact of the sound.
Nevertheless, by using effects as inserts and chaining them together in series it's possible to create new effects, because both the dry and wet results from each preceding effect will be transformed by the ones that follow. This opens up a whole world of experimental and creative opportunity that could easily digest the rest of this book, so rather than list all of the possible combinations (as if I could), we'll examine the reasoning behind why, theoretically at least, some processors and effects should precede others.
Gate > Compressor > EQ > Effects
Normally, to maintain a 'natural' sound a gate should always appear first in the line of any processors or effects, since it's used to remove unwanted noise from the signal before it's compressed, EQ'd or effected. While it is perfectly feasible to place the compressor before the gate, as it would make little difference to the actual sound, it's unwise to do so. This is because the compressor reduces the dynamic range of the signal and, as a gate works by monitoring the dynamic range and removing artefacts below a certain volume, placing compression first makes the gate more difficult to set up and it may remove some of the signal you wish to keep. For reasons we've already touched upon, the EQ should then appear after compression, and the effects should follow the EQ section as they're usually the last aspect in the mixing chain.
Gate > Compressor > Effects > EQ
Again, the beginning of this arrangement will keep the signal natural, but by placing the EQ after the effects it can be used to sculpt the tonal qualities produced by the effect. For example, if the effect following the compression is distortion, the compressor will even out the signal level, making the distortion effect more noticeable on the decays of notes. Additionally, since distortion will introduce more harmonics into the signal, some of which can be unpleasant, the result can be carefully sculpted with the EQ to produce a more controlled, pleasing result.
Gate > Compressor > EQ > Effects > Effects
The beginning of this signal chain will produce the most natural results, but the order of the effects afterwards will determine the outcome. For instance, if you were to place reverb before distortion, the reverb tails will be treated to distortion, but if it were placed afterwards, the effect would not be as strong since the reverb's tail would not be treated. Similarly, if delay were placed after distortion, the subsequent delays would be of the distorted signal, while if the delay came first, the distortion would be applied to the delays, producing a different sound altogether. If a flanger were added to this set-up, things become even more complicated, since this effect is essentially a modulated comb filter. By placing it after distortion, the flanger would comb-filter the distorted signal, producing a rather spectacular phased effect, yet if it were placed before, the effect would vary the intensity of the distortion.

To go further, if the flanger were placed after distortion but before reverb, the flange effect would contain some distorted frequencies but the reverb's tail
would wash over the flanger, diluting the effect but producing a reverb that modulates as if it were controlled with an LFO. The possibilities here are, as they say, endless, and it's worth experimenting by placing the effects in a different order to create new effects.
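The order-dependence is easy to demonstrate with two toy stages: a hard clipper standing in for distortion and a gain lift standing in for an EQ boost. The numbers are arbitrary:

```python
def distort(sample: float) -> float:
    """Toy distortion: hard-clip the signal at +/-0.5."""
    return max(-0.5, min(0.5, sample))

def boost(sample: float) -> float:
    """Toy EQ lift: double the level."""
    return sample * 2.0

x = 0.4
a = boost(distort(x))  # clip first, then boost: 0.4 -> 0.4 -> 0.8
b = distort(boost(x))  # boost first, then clip: 0.4 -> 0.8 -> 0.5
```

The same two stages, in a different order, give audibly different results, which is the whole point of experimenting with the chain.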
Gate > Effects > EQ > Effects > Compressor
While the aforementioned method of placing one effect after the other can be useful, the subsequent results can be quite heavy-handed; but by placing an EQ after the first series of effects, the tonal content can be modified so that there isn't an uncontrolled mess of frequencies entering the second series of effects, muddying the effect further. Additionally, by placing a compressor at the end of the arrangement, any peaking frequencies introduced by a flanger following distortion, or a similar arrangement, can be brought back under control.
Compressor > EQ > Effects > Gate
We’ve already discussed the effects of placing a compressor before the gate
and EQ, but using this confi guration, the compressor could be used to con-
trol the dynamics of sounds before they were EQ’d and subsequently affected.
However, by placing the gate after the effects would mean that the effected sig-
nals could be treated to a gate effect. Although there is a long list of possible
uses for this if you use a little imagination, possibly the most common tech-
nique is to apply reverb to a drum kick or trance/techno/house lead and then
use the following gate to remove the reverb’s tail. This has the effect of thicken-
ing out the sound without turning the result into a washed-over mush.
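The gated-reverb trick amounts to chopping off the decaying tail once it falls below a threshold. A toy sketch over a few hypothetical envelope values:

```python
def gate_envelope(levels, threshold=0.1):
    """Pass values at or above the threshold, silence the rest: this is
    what removes the reverb's tail while keeping the thick early part."""
    return [x if abs(x) >= threshold else 0.0 for x in levels]

# a kick with a decaying reverb tail (hypothetical envelope values)
kick_and_tail = [0.9, 0.5, 0.3, 0.15, 0.07, 0.03]
gated = gate_envelope(kick_and_tail)  # the last two values are silenced
```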
Gate > Effects > Compressor > EQ
Although it is generally accepted that the compressor should come before effects, placing it directly after them can have its uses. For instance, if a filter effect has been used to boost some frequencies and this has been followed by chorus or flanging, there may be clipping, and the compressor can be used to bring these peaks under control before they're shaped tonally with the EQ. Notably, though, placing compression after distortion will have little effect, since distortion effects tend to reduce the dynamic range anyway.
Above all, though, keep in mind that setting effects in an order that theoretically shouldn't produce great results most probably will in practice. Indeed, it's the artists who are willing to experiment who often produce the most memorable effects on records, and with many of today's audio sequencers offering a multitude of free effects, experimentation is cheap but the results may be priceless.
AUTOMATION
A final yet vital aspect of mixing desks appears in the form of mix automation. We’ve already touched upon the importance of movement within programmed
Cables, Mixing Desks and Effects Busses
CHAPTER 3 75
sounds, but it’s also important to manipulate a timbre constantly throughout the music. Techno would be nowhere if it were not possible to programme a mixer or effects unit to gradually change parameters during the course of the track. With a mixer that features automation, these parameter changes can be recorded as data (usually MIDI) into a sequencer which, when played back to the mixer, forces it to either jump or glide to the new settings.
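In software terms, recorded automation is little more than a stream of time-stamped parameter values that the sequencer replays to the desk. A minimal sketch of generating such data for a linear fade; the 96 PPQN tick resolution, step size and function name are illustrative assumptions rather than any particular sequencer’s format:

```python
def fade_automation(start_val, end_val, start_tick, end_tick, step_ticks=24):
    """Generate (tick, value) automation points for a linear fade,
    clamped to the 0-127 MIDI controller range. Played back to the
    desk, each point nudges the parameter to a new position; smaller
    steps give a smoother glide rather than an audible jump."""
    points = []
    for tick in range(start_tick, end_tick + 1, step_ticks):
        frac = (tick - start_tick) / (end_tick - start_tick)
        value = round(start_val + frac * (end_val - start_val))
        points.append((tick, max(0, min(127, value))))
    return points

# Fade a channel from full (127) to silent over one bar (384 ticks at 96 PPQN).
fade = fade_automation(127, 0, 0, 384)
```

Halving `step_ticks` doubles the density of points, which is exactly the trade-off a sequencer’s automation resolution setting makes.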
Originally, mix automation was carried out by three or four engineers sitting at the faders/effects and adjusting the relevant parameters when they received a nod from the producer. This included riding the desk’s volume faders throughout the mix to counter any volume inconsistencies. As a result, any parameter changes had to be performed perfectly in one pass, since the outputs of the desk were connected directly into a recording device. If you made a mistake, then you had to do it all again, and again, and again until it was right. In fact, this approach was the only option available when dance music first began to develop, and all tracks of that era had their filters and parameters tweaked live while recording direct to tape. This way of working is almost unheard of today, however, since dance music has embraced the latest forms of mixer automation, so much so that it isn’t unusual to have five or six parameters changing at once, or for the mixer and effects units to jump to a whole new range of settings for different parts of the track.
Mix automation is only featured on high-end mixing desks due to the additional circuitry involved, and the more parameters that can be automated, the more expensive the desk will generally be. Mix automation appears in two forms: VCA and motorized. Each performs the same function but in a slightly different way. Whereas motorized automation fits motors to each fader so that they physically move to the new positions, VCA faders remain in the same position while the underlying parameters change. This is accomplished by the faders generating MIDI information rather than passing audio; this is then transferred to the desk’s computer, which adjusts the volume. While this does mean that you have to look at a computer screen for the ‘real’ position of the faders, it’s often preferred by professional studios, since motorized faders can be quite noisy when they move.
Alongside automating the volume, most desks will also permit you to automate the muting of each channel. This can be particularly useful when tracks are not currently playing in the arrangement, since muting the channel removes the possibility of hiss on that mixer channel. On top of this, some of the considerably more expensive desks also allow you to automate the send and return system and store snapshots (or ‘scenes’) of mixes. Essentially, these are a capture of the current fader, EQ and mute settings which can be recalled at any time by sending the respective data to the desk. Although these features are fairly essential to mixing, they are only available on expensive desks, and even then it is not possible to automate any effects parameters. For this type of automation, software sequencers offer much better potential.
PART 1
Technology and Theory
76
Most audio-capable MIDI sequencers will offer mix automation but, unlike hardware, this is not limited to just the volume, muting and send/return system. It’s usually possible to also automate panning and EQ, along with all the parameters of virtual studio technology (VST) instruments and plug-in effects or processors. These can usually be identified by an R (Read automation) and W (Write automation) appearing somewhere on the plug-in’s interface. By activating the Write button and commencing playback, any movements of the plug-in or mixing desk controls will be recorded, and these can then be played back by activating the Read button. What’s more, many sequencers also offer the opportunity to finely edit any recorded automation data with an editor. This can be invaluable to the dance musician, since you can precisely control volume or filter sweeps with ease.
Programming Theory
CHAPTER 4 77
Armed with an understanding of basic synthesis, effects and the routing of sounds through a mixing desk, we can look more closely at sound design using all the elements previously discussed.
Sound design is one of the most vital elements of creating dance music, since the sounds will more often than not determine the overall genre of the music. However, although it would be fair to say that quite a few timbres are gleaned from other records or sample CDs, there are numerous advantages to programming your own sounds.
Not only do you have much more control over the parameters than with samples, but there are also no key-range/pitch-shifting restrictions and no copyright issues. What’s more, you’ll get a lot more synthesizer for your money than if you simply stick with the presets, and you’ll also open up a whole new host of avenues that you would otherwise have missed.
For many, the general approach to writing music is to get the general arrangement/idea of the piece down in MIDI form and then begin programming the sounds to suit the arrangement. It doesn’t necessarily have to be a complete song before you begin programming, but it is helpful if most of the final elements of the mix are present. This is because when it comes to creating the timbres it’s preferable to have an idea of the various MIDI files that will be playing together, so that you can determine each file’s overall frequency content. Keep in mind that the sounds of any instrument/programmed timbre will occupy a specific part of the frequency range. If you programme each individually without any thought to the other instruments that will be playing
It’s ridiculous to think that your music will sound better if you go out and buy the latest keyboard. People tend to crave more equipment when they don’t know what they want from their music.
A Guy Called Gerald
alongside it, you can wind up programming a host of complex timbres that sound great on their own but conflict with one another when placed in the mix, creating a cluttered result. With the MIDI down from the start, you can prioritize the instruments depending on the genre of music.
Knowing which frequencies are free after creating the most important parts comes down to experience, but to help you on the way it’s often worth employing a spectral analyser. These are available in software or hardware form and display the relative volume of each frequency band of any sound played through them. Thus, playing the groove and lead motif into the analyser will give you an idea of the frequencies that are free (Figure 4.1).
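The principle behind such an analyser can be sketched with the Goertzel algorithm, which measures a signal’s energy at one chosen frequency. The bassline signal and the two probe frequencies below are purely illustrative:

```python
import math

def goertzel(samples, sample_rate, target_hz):
    """Return the magnitude of one frequency component of a signal,
    much as a spectral analyser displays the level of that band."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)   # nearest DFT bin
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    power = s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2
    return math.sqrt(abs(power))

# A 100 Hz 'bassline': strong energy in the low band, nothing near 5 kHz,
# which tells you the high band is still free for hats or a lead.
rate = 44100
bass = [math.sin(2 * math.pi * 100 * t / rate) for t in range(4410)]
low = goertzel(bass, rate, 100)
high = goertzel(bass, rate, 5000)
```

A real analyser computes this for every band at once (usually via an FFT), but probing one band at a time shows the same occupied-versus-free distinction the text describes.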
In some cases, once these fundamental elements are created you may find that there is too much going on in the frequency range, in which case you can remove the superfluous parts or place them elsewhere in the arrangement. Alternatively, you could EQ the less important parts to thin them out so that they fit in with the main programmed elements. Although they may then sound peculiar on their own, it doesn’t particularly matter so long as they sound right in the context of the mix and they’re not played in isolation during the arrangement. If they are, then it’s prudent either to use two different versions of the timbre, one for playing in isolation and one for playing with the rest of the mix, or more simply to leave the timbre out when the other instruments are playing. Indeed, the main key to producing a good track/mix is to play back sections of the arrangement and programme the sounds so that they all fit together agreeably, and it isn’t unusual at this stage to modify the arrangement to accomplish this. By doing so, when it comes to mixing you’re not continually fighting with the EQ to make everything fit together appropriately. If it sounds right at the end of the programming stage, not only will the mixing be much easier but the entire mix will sound more professional due to less cluttering. As a result, it shouldn’t need explaining that after you’ve programmed each timbre you should leave it playing back while you work on the next in line, progressively programming the less important instruments until they are all complete.
FIGURE 4.1 As the spectral analysis reveals, the groove and motif take up specific frequencies while leaving others free for new instruments.
TIMBRE EFFECTS
One of the main instigators of a cluttered mix, or of sounds that are too powerful, is a synthesizer’s or sampler’s effects algorithms. All synthesizers and many sample patches are designed to sound great in isolation to coax you into parting with money, but when these are all combined into the final mix, the effects tails, delays, chorus etc. will all combine to produce a muddy result. Much of the impact of dance music comes from noticeable spaces within the mix, and a great house motif, for instance, is often created sparingly by keeping the lengths of the samples short so that there are gaps between the hits. This adds a dynamic edge because there are sudden shifts from silence to sound. If, however, there are effects such as delay or reverb applied to the sound at the source, the gaps between the notes are smeared over, which considerably lessens the overall impact. Consequently, before even beginning to programme you should create a user bank of sounds, by either copying the presets you like into the user bank or creating your own, and turn any and all effects off.
Naturally, some timbres will benefit heavily from effects, but if one timbre is heavily effected to make it wider and more in your face, it’s prudent to refrain from using effects on any other instruments. A lead soaked in reverb/delay/chorus etc. will have a huge impact if the rest of the instruments are dry, whereas if all the instruments are soaked in effects, the impact will be significantly lessened. A good mix/arrangement works in contrast: you can have too much of a good thing, and it’s better to leave the audience gasping for more than gasping for breath. Also, if many of the instruments are left dry and, when it comes to mixing, you feel that they need some effects, you can always introduce them at the mixing desk.
Of course, this approach may not always be suitable, since the effects signature may actually contribute heavily toward the timbre. If this is the case, then you’ll need to consider your approach carefully. For example, if reverb is contributing to the sound’s colour, ask yourself whether the subsequent tails are really necessary, as these will often reduce the impact. If they’re not required, then it’s prudent to run the timbre through a noise gate set to remove the reverb tail. This way the colour of the sound is not affected, but the successive tails are removed, which also prevents the sound from being pushed to the back of the mix. Similarly, if delay is making the timbre sound great with the programmed motif, try emulating the delay by ghosting the notes in MIDI and using velocity. This can often reduce the additional harmonics that are introduced by the delay running over the next note and, as many delay algorithms are in stereo, it allows you to keep the effect in mono (more on this in a moment). Naturally, if this overrun is contributing to the sound’s overall colour, then you will have no option but to leave it in, though you may need to reconsider the other sounds accompanying the part to prevent the overall mix from becoming cluttered.
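The ghost-note trick can be sketched as a small function that, given a MIDI note, returns echo notes at the delay interval with falling velocities, keeping the ‘delay’ mono and entirely under your control. The tick values and feedback amount are illustrative:

```python
def ghost_echoes(start, pitch, velocity, delay_ticks, repeats=3, feedback=0.5):
    """Emulate a delay effect by ghosting MIDI notes: each echo is a
    copy of the note, later in time and quieter (velocity * feedback),
    mimicking a delay line's feedback decay."""
    notes = []
    vel = velocity
    for i in range(1, repeats + 1):
        vel = int(vel * feedback)
        if vel < 1:
            break  # echo has faded below audibility
        notes.append((start + i * delay_ticks, pitch, vel))
    return notes

# A note at tick 0, velocity 100, echoed every dotted eighth (144 ticks at 96 PPQN).
echoes = ghost_echoes(0, 60, 100, 144)
# -> [(144, 60, 50), (288, 60, 25), (432, 60, 12)]
```

Unlike a real delay plug-in, each echo retriggers the patch, so no overrun harmonics are generated across note boundaries.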
Another facet of synthesizers and samplers that may cause problems is the use of stereo. Most tone modules, keyboards, VST instruments and samplers will greatly exaggerate the stereo spread to make the individual sounds appear more impressive. This is achieved in one of two ways,
either through the use of effects or by layering two different timbres panned to the left and right speakers. Again, this makes them sound great in isolation, but when placed into a mix they can quickly overpower it, resulting in a wall of incomprehensible sound. To avoid this, you should always look towards programming sounds in mono unless a part is particularly important to the music and requires the width (such as a lead instrument). Even then, the bass, kick drum and snare, while forming an essential part of the music, should be in mono, as these will most commonly sit dead centre of the mix so that the energy is shared by both speakers. Naturally, if you’re using a sampled drum loop then it will most probably be in stereo, but this shouldn’t be the cause of any great concern. Provided that the source of the sample (whether it’s a record or a sample CD) has been programmed and mixed competently, the instruments will have been positioned sensibly, with the kick located in the centre and perhaps the hi-hats and other percussive instruments spread thoughtfully across the image. This will, however, often dictate the positioning and frequencies of other elements in the mix.
PROGRAMMING THEORY
Fundamentally, there are two ways to programme synthesizers. You can use careful analysis to reconstruct the sound you have in mind, or alternatively you can simply tweak the parameters of a preset patch to see what you come up with. It’s important to keep in mind that both of these options are viable ways of working and, despite the upturned noses from some musicians at the thought of using presets, a proportion of professional dance musicians do use them. The Chemical Brothers have used unaltered presets from the Roland JP8080 and JV1080; David Morales has used presets from the JV2080, the Korg Z-1, E-mu Orbit and Planet Phatt; Paul Oakenfold has used presets from the Novation SuperNova, E-mu Orbit and the Xtreme Lead; and Sasha, Andy Gray, Matt Darey and, well, just about every trance musician on the planet has used presets from the Access Virus. At the end of the day, if the preset fits into your music then you should feel free to use it.
When it comes to synth programming, it’s generally recommended that those with little or no prior experience begin by using simple editing procedures on the presets, so that you become accustomed not only to the effect each control has on a sound, but also to the character of the synth you’re using. Despite the manufacturer’s bumf that their synth is capable of producing any sound, they each exhibit a different character, and you have to accept that there will be some sounds that cannot be constructed unless you have the correct synth. Alternatively, those with a little more experience can begin stripping all the modulation away and building on the foundation of the oscillators.
Remember that a proportion of the instruments within a dance mix share many similarities. For example, a bass sound is just that: a bass sound. As we’ll cover a little later, many specific instruments are programmed in roughly the same manner, using the same oscillators, and it’s the actual synthesizer used, along with just the modulation from the envelope generators (EGs),
LFOs and filters, that produces the different tones. Thus, it can often be easier to simply strip all the modulation away from the preset patch so you’re left with just the oscillators, and then build on this foundation.
CUSTOMIZING SOUNDS
Unless you’ve decided to build the entire track around a preset, or have a fortunate coincidence, presets will benefit from some editing. From earlier chapters, the effect that each controller will impart on a sound should be quite clear, but for those new to editing presets it can be difficult to know the best place to start.
First and foremost, the motif/pad/drums etc. should be playing into the synthesizer or sampler in its entirety. A common mistake is to just keep banging away at middle C on a controller keyboard to audition the sounds. While this may give you an idea of the timbre, remember that velocity and pitch can often change the timbre in weird and wonderful ways, and you could miss a great motif and sound combination if you were constantly hitting just one key. What’s more, the controls for the EGs, LFOs, filters and effects (if employed) will adjust not only the sound but the general mood of the motif. For example, a motif constructed of 1/16th notes will sound very different when the amplifier’s release is altered. If it’s shortened, the motif will become more ‘stabby’, while if it’s lengthened it will flow together more. Alternatively, if you lengthen the amplifier’s attack, the timing of the notes may appear to shift, which could add more drive to the track.
Also consider that the LFOs will restart their cycle as you audition each patch but, if they’re not set to restart at key press, they will sound entirely different on a motif than on a single key strike. Plus, if filter key follow is being used, the filter’s action will also change depending on the pitch of the note. Perhaps most important of all, though, both your hands are free to experiment with the controls, allowing you to adjust two parameters simultaneously, such as filter cut-off and resonance, or amp attack and release.
Once you’ve come across a timbre that shares some similarity with the tonality you require for the mix, most users tend to go straight for the filter/resonance combination. However, it’s generally best to start by tweaking the amplifier envelope to shape the overall volume of the sound before you begin editing the colour of the timbre. This is because the amp EG will have a significant effect on the MIDI motif and the shape of the sound over time, so it’s beneficial to get this ‘right’ before you begin adjusting the timbre. We’ve already covered the implications the amp EG has on a sound, but here we can look at it in a more practical context.
The initial transient of a note is by far the most important aspect of any sound. The first few moments of the attack stage provide the listener with a huge amount of information and can make the difference between a timbre sounding clear-cut, defined and up-front, or more atmospheric and sitting in the background. This is because we perceive timbres with an immediate attack to be louder than those with a slower attack, and short stabby sounds to be louder than those with a quick attack but long release. Thus, if a timbre seems too sloppy when played back from MIDI, reducing the attack and/or release stage
will make it appear much more defined. That said, a common mistake is to shorten the attack and release stages of every instrument to make all the instruments appear distinct, and this should be avoided.
Dance music relies heavily on contrast, and not all timbres should start and/or stop immediately. You need to think in terms of not only tone but also time. Using a fast attack and release on some instruments, a fast attack and slow release on others and a slow attack and fast release on others will create a mix that gels together more than if all the sounds used the same envelope settings.
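The contrast described here comes from the shape of the amp EG. A minimal linear ADSR sketch (times in samples; the parameter values are illustrative, and real synths usually use curved rather than linear segments) shows how attack and release settings change the transient:

```python
def adsr(attack, decay, sustain_level, release, note_len, total_len):
    """Render a linear ADSR envelope as a list of gain values (0.0-1.0).
    All times are in samples; the key is released at note_len."""
    env = []
    for n in range(total_len):
        if n < attack:                      # attack: ramp 0 -> 1
            level = n / attack
        elif n < attack + decay:            # decay: ramp 1 -> sustain
            level = 1.0 - (n - attack) / decay * (1.0 - sustain_level)
        elif n < note_len:                  # sustain: hold steady
            level = sustain_level
        else:                               # release: ramp sustain -> 0
            level = max(0.0, sustain_level * (1.0 - (n - note_len) / release))
        env.append(level)
    return env

# A stabby timbre versus a pad, over the same 80-sample window.
stab = adsr(attack=2, decay=10, sustain_level=0.6, release=5, note_len=50, total_len=80)
pad = adsr(attack=40, decay=10, sustain_level=0.8, release=200, note_len=50, total_len=80)
# The stab hits full level almost instantly and dies quickly after release;
# the pad fades in slowly and lingers, supplying the contrast between parts.
```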
Naturally, what amp EG settings to use on each instrument will depend entirely on the mix in question but, very roughly speaking, trance, big beat and techno often benefit from the basses, kicks, snares and percussion having fast attack and release stages, with the rest of the instruments sharing a mix of slow attack/fast release and fast attack/slow release. This latter setting is particularly important in attaining the ‘hands in the air’ trance leads.
Conversely, house, drum ‘n’ bass, lo-fi and chill out/ambient benefit from longer release settings on the basses, almost to the point that the notes are all connected together. Sounds that sit on top of this then often employ a short attack and release to add contrast. Examples of this behaviour can be heard in the recent string of house releases that utilize funky guitar riffs. The bass (and often a phased pad sitting in the background using a quick attack and slow release) almost flows together, while the funk guitars and vocals that sit on top are kept short and stabby.
This leaves the decay and sustain of the amp EG. Sustain can be viewed as the body of the sound after the initial pluck of the timbre has ended, while the decay controls the amount of ‘pluck’. Decreasing the decay will make the pluck stabby and prominent, while increasing it will create a more drawn-out pluck. What’s more, by adjusting the shape of the decay stage from the usual linear fashion to a convex or concave curve, the sound can take on a more ‘thwacking’, sucking feel or a more rounded transient, respectively. Indeed, a popular sound design technique is to use a lengthy decay (and attack) and reduce the release and sustain parameters to zero. This creates the initial transient of the sound, which is then mixed with a different timbre supplying only the release and sustain.
Once the overall shape of the riff has been modified to add contrast to the music, the filter envelope and filter cut-off/resonance can be used to modify the tonal content of the timbre. As the filter’s envelope will react to the current filter settings, it’s beneficial to adjust these first. Most timbres in synthesizers will utilize a low-pass filter with a 12 dB transition, as these produce the most musically useful results; however, it is worthwhile experimenting with the other filter types on offer, such as band-pass and high-pass. For instance, by keeping the timbre on a low-pass filter but double tracking the MIDI to another synthesizer and using a high-pass or band-pass to remove the low-frequency elements, you’re left with just the fizzy overtones. This can then be mixed with the
original timbre to produce a sound that’s much richer in harmonic content. If this is then sampled, the sampler’s filters can be used to craft the sound further.
Also, keep in mind that the filter’s transition will have a large effect on the timbre. As mentioned, most synthesizers will utilize a 12 dB transition, but a 24 dB transition used on some timbres of the mix will help to introduce some contrast, as the slope is much sharper. A typical use for this is when two filter sweeps are occurring simultaneously: by setting one to 12 dB and the other to 24 dB, the difference in the harmonic movement can produce a tonally interesting result and add some contrast to the music. Alternatively, by double tracking a MIDI file and using a 12 and a 24 dB filter on the same timbre, the two transitions will interact, creating a more complex tone that warps and shifts in harmonic content.
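The difference between the two transitions follows a rough rule of thumb: an ideal low-pass filter attenuates by its slope (in dB per octave) for every octave above the cut-off, so a 24 dB slope bites twice as hard as a 12 dB one. A back-of-envelope sketch, ignoring resonance and the knee around cut-off:

```python
import math

def rolloff_db(freq_hz, cutoff_hz, slope_db_per_oct):
    """Approximate attenuation of an idealized low-pass filter:
    slope (dB/octave) times the number of octaves past cut-off.
    Returns 0 in the pass band below cut-off."""
    if freq_hz <= cutoff_hz:
        return 0.0
    octaves = math.log2(freq_hz / cutoff_hz)
    return slope_db_per_oct * octaves

# Two octaves above a 1 kHz cut-off:
gentle = rolloff_db(4000, 1000, 12)   # 12 dB slope: 24 dB down
sharp = rolloff_db(4000, 1000, 24)    # 24 dB slope: 48 dB down
```

That 24 dB gap two octaves up is why the sharper slope removes harmonics so much more audibly during a sweep.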
With the tone correct, you can move on to the filter’s EG. This works on the same principle as the amp’s EG but, rather than controlling volume, it controls the filter’s action over time. This is why it’s important to modify the amp’s envelope before any other parameters: if the amp’s attack is set at its fastest and the filter’s attack is set quite long, the timbre may reach the amp’s release portion before the filter has been fully introduced.
While discussing the attack and decay of the amplifier envelope we touched upon the pluck being determined by the decay rate, but the filter’s attack and decay also play a part in this. If the attack is set to zero, the filter will act upon the sound on key press, but if it’s set slightly longer than the attack on the amp EG, the filter will sweep into the note, creating a quack at its beginning. Similarly, as the decay setting determines how quickly the filter falls to its sustaining level, shortening this can introduce a harder/faster plucking character to the note. Alternatively, with longer progressive pad sounds, if the filter’s attack is set to the same length as the amp EG’s attack and decay rates, the sound will sweep in to the sustain stage, whereby the decay and sustain of the filter begin creating harmonic movement in the sustain.
However, keep in mind that while any form of sonic movement in a sound makes it much more interesting, it’s the initial transient that’s the most important, as it provides the listener with a huge amount of information about the timbre. Thus, before tweaking the release and sustain parameters, concentrate on getting the transient of the sound right first.
Of course, the filter’s envelope doesn’t always provide the best results, and the synthesizer’s modulation matrix can often do better. For instance, by using a sawtooth LFO set to modulate the filter’s cut-off, the harmonic content will rise sharply but fall slowly, and the speed at which all this takes place will be governed by the LFO’s rate. In fact, the LFO is one of the most underrated yet important aspects of any synthesizer, as it introduces movement within a timbre, which is the real key behind producing great results.
Static sounds will bore the ear very quickly and make even the most complex motifs appear tedious, so it’s practical to experiment by changing the LFO’s waveforms and its destinations within the patch. Notably, if an LFO is used,
remember that its rate is a vital aspect. As the tempo of dance music is of paramount importance, it’s sensible to sync the timing of any modulation to the tempo. Without this, not only can an arrangement become messy, but in many tracks the LFO speeds up with the tempo, and this can only be accomplished by syncing the LFO to the sequencer’s clock.
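Tempo-syncing an LFO is simple arithmetic: one beat lasts 60/BPM seconds, so an LFO cycling once per quarter note at 140 BPM runs at 140/60, roughly 2.33 Hz. A small sketch (the function and division names are illustrative):

```python
def synced_lfo_rate(bpm, beats_per_cycle):
    """LFO rate in Hz for one full cycle every `beats_per_cycle`
    quarter-note beats at the given tempo."""
    seconds_per_beat = 60.0 / bpm
    return 1.0 / (seconds_per_beat * beats_per_cycle)

# At 140 BPM (a typical trance tempo):
quarter = synced_lfo_rate(140, 1)   # one cycle per beat, about 2.33 Hz
bar = synced_lfo_rate(140, 4)       # one cycle per bar, about 0.58 Hz
```

When the sequencer’s tempo changes, recomputing the rate from the new BPM is exactly what a clock-synced LFO does for you automatically.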
PROGRAMMING BASICS
While customizing presets to fit into a mix can be rewarding, it’s only useful if the synth happens to have a timbre similar to the one you’re looking for; if not, you’ll have to programme from the ground up. Before going any further, though, there are a few things that should be made clear.
Firstly, while you may listen to a record and wonder how they programmed that sound or groove, it’s important to note that the artist may not have actually programmed it at all: it may be straight off a sample CD or, in some instances, another record. If you want to know how Stardust created the groove on ‘Music Sounds Better With You’, how David Morales programmed ‘Needin’ U’ or how Phats and Small managed to produce ‘Turn Around’, you should first listen to the original music they sampled to produce them. In many cases it will become extremely clear how they managed to inject such a good groove or sound: they sampled it from previous hits or funk records.
What follows is a list of the most popular sounds and grooves that have actually been derived from other records. It’s by no means exhaustive, as cataloguing them all would require approximately 30 pages; instead, it covers the most popular and well-established grooves.
Dance Artist: Title of Track | Original Artist: Original Title of Track
Stardust: Music Sounds Better With You | Chaka Khan: Fate
Armand Van Helden: You Don’t Know Me | Carrie Lucas: Dance With You
David Morales: Needin’ U | Rare Pleasure: Let Me Down Easy; The Chi-Lites: My First Mistake
Daft Punk: Digital Love | George Duke: I Love You More
Phats and Small: Turn Around | Toney Lee: Reach Up; Change: The Glow of Love
Cassius: 1999 | Donna Summer: If It Hurts Just a Little
The Bucketheads: The Bomb | Chicago: Street Player
DJ Angel: Funk Music | Salsoul Orchestra: Take Some Time Out For Love
Blueboy: Remember Me | Marlena Shaw: Woman of the Ghetto
Full Intention: Everybody Loves the Sunshine | Roy Ayers: Everybody Loves the Sunshine
Todd Terry: Keep On Jumpin’ | Lisa Marie Experience: Keep On Jumpin’; Musique: Keep On Jumpin’
Byron Stingily: Get Up (Everybody) | Sylvester: Dance (Disco Heat)
Soulsearcher: Can’t Get Enough | Gary’s Gang: Let’s Lovedance Tonight
GU: I Need GU | Sylvester: I Need You
Peppermint Jam Allstars: Check It Out | MFSB: TSOP (The Sound of Philadelphia)
Nuyorican Soul: Runaway | Salsoul Orchestra: Runaway
The Bucketheads: I Wanna Know | Atmosfear: Motivation
Michael Lange: Brothers and Sisters | Bob James: Westchester Lady
Basement Jaxx: Red Alert | Locksmith: Far Beyond
Deaf ‘n’ Dumb Crew: Tonite | Michael Jackson: Off the Wall
Moodymann: I Can’t Kick This Feeling When It Hits | Chic: I Want Your Love
Cassius: Feeling for You | Gwen McCrae: All This Love That I’m Givin’
Spiller: Batucada | Sergio Mendes: Batucada
Eddie Amador: House Music | Exodus: Together Forever
Modjo: Lady (Hear Me Tonight) | Chic: Soup for One
Spiller: Groovejet | Carol Williams: Love Is You
Prodigy: Out of Space | Max Romeo and the Upsetters: Chase the Devil
Moby: Natural Blues | Vera Hall: Trouble So Hard
Fatboy Slim: Praise You | Camille Yarbrough: Take Yo’ Praise
Deee-Lite: Groove Is in the Heart | Herbie Hancock: Bring Down the Birds
Massive Attack: Be Thankful | William DeVaughn: Be Thankful for What You’ve Got
Massive Attack: Safe From Harm | Billy Cobham: Stratus
Eminem: My Name Is | Labi Siffre: I Got The...
De La Soul: The Magic Number | Bob Dorough: Three Is a Magic Number
The Notorious B.I.G.: Mo Money Mo Problems | Diana Ross: I’m Coming Out
A Tribe Called Quest: Bonita Applebum | Carly Simon: Why
De La Soul: Say No Go | Hall & Oates: I Can’t Go for That
Neneh Cherry: Buddy X | Hall & Oates: I Can’t Go for That
Groove Armada: At the River | Patti Page: Old Cape Cod
Dream Warriors: My Definition of a Boombastic Jazz Style | Quincy Jones: Soul Bossa Nova
Gang Starr: Lovesick | Young-Holt Unlimited: Ain’t There Something Money Can’t Buy
Secondly, there is no ‘quick fix’ for creating great timbres. It comes from practical experience and plenty of experimentation, not only with the synthesizer parameters but also with effects and processors. The world’s leading sound designers and dance artists didn’t simply read a book and begin designing fantastic sounds a day later; they learnt the basics and then spent months and, in many cases, years learning from practical experience and taking the time to study what each synthesizer, effect, processor and sampler can and cannot accomplish. Thus, if you want to programme great timbres, patience and experimentation are the real key. You need to set aside time from writing music and begin to learn exactly what your chosen synthesizer is capable of, and how to manipulate it further using effects or processors.
Finally, and most important of all, there is no such thing as a ‘hit’ sound. While some genres of music are built around a particular type of sound, and it can be fun emulating the timbres from other popular tracks, using them will not make your own music an instant hit. Instead, it will often make it sound like an imitation of a great record, and it will subsequently be judged alongside that record rather than on its own merits. Despite the promise of many books offering
the secret to writing hit sounds or music, there is no secret formula; there are no special synthesizers, no hit-making effects and no magic timbres. What’s more, copying a timbre exactly from a previous hit dance track isn’t going to make your music any better, as dance floor tastes change very quickly. As a case in point, the ‘pizz’ timbre was bypassed by every dance musician on the planet as an unworthy sound until Faithless soaked the Roland JD-990’s ‘pizz’ in reverb and used it for their massive club hit ‘Insomnia’. Following this, a host of ‘pizz’-saturated tracks appeared on the scene, and it became so popular that it now appears on just about every dance-based module around. But as tastes have changed, the timbre has now almost vanished into obscurity and is skipped past by most musicians.
Keep in mind that Joe Public, your potential listening audience, is fickle, insensitive and short of attention span, and while some timbres may be doing the rounds today, next week or month they could be completely different. As a result, the following is not devoted to programming a precise timbre from a specific track, as that would most likely date the book before I’d even completed this chapter. Instead, it will concentrate on building the basic sounds that are synonymous with dance music, and as such will create a number of related ‘presets’ that you can then manipulate further and experiment with. Creativity may be tiring and difficult at times, but the rewards are certainly worth it.
PROGRAMMING TIMBRES
As previously touched upon, despite the claims of synthesizer manufacturers, no single synthesizer is capable of producing every type of sound suited to every particular genre. In fact, in many instances you will have to accept that some timbres can only be created on certain instruments, no matter how good at programming you may be. Just as you wouldn't expect different models of speakers/monitors to sound exactly alike, the same is true of synthesis.
Although the oscillators, filters and modulation are all based on the same principles, they will all sound very different. This is why some synthesizers are said to have a certain character and why many older analogue keyboards demand such a high price on the second-hand market: musicians are willing to pay for the particular sound characteristics of a synthesizer. As a result, although you may follow the sound design guidelines in this chapter to the letter, there is no guarantee that they will produce exactly the same sound. Consequently, before you even begin programming your own timbres, the first step is to learn your chosen synthesizers inside out.
Indeed, it's absolutely crucial that you set time aside to experiment and learn the character of the synthesizer's oscillators by mixing a sine with a square, a square with a saw, a sine and a saw, a triangle and a saw, a sine and a triangle
PART 1
Technology and Theory
88
and so forth, and noting the results it has on the overall timbre. This is the
only way you’ll be able to progress toward creating your own sounds short of
manipulating presets.
You have to know the characteristics of the synthesizers you use and how to exploit their idiosyncrasies to create sounds. Most professional artists purchased one synthesizer and learnt it inside out before purchasing another. This is obviously more difficult today, with the multitude of virtual instruments appearing for ridiculously low prices, but despite how tempting it may be to own every synthesizer on the market, you need to limit yourself to a few choice favourites. You'll never have the chance to programme timbres if you have to learn the characteristics of 50 different virtual instruments.
Gerald's advice at the beginning of this chapter is very wise counsel indeed, and you have to accept that there are no shortcuts to creating great dance music, short of ripping from sample CDs.
PROGRAMMING PADS
Although pads are not usually the most important timbre in dance music, we'll look at these first. This is simply because an understanding of how they're created will help increase your knowledge of how the various processes of LFOs and envelopes can work together to produce evolving, interesting sounds. Of course, there are no predetermined pads to use within dance music production and therefore there are no definitive methods to create them. But, while it's impossible to suggest ways of creating a pad to suit a particular style of music, there are, as ever, some guidelines that you can follow.
Firstly, pads in dance music are employed to provide one of the following three things:
To supply or enhance the atmosphere in the music – especially the case with chill out/ambient music.
To fill a 'hole' in the mix between the groove of the music and lead or vocals.
To be used as a lead itself.
Which of these functions the pad provides will also determine how it should be programmed. Although many of the sounds in dance utilize an immediate attack stage on the amp and filter's EG so that the sound starts immediately, this is only really necessary on pads if the sound is providing the lead of the track. As discussed, we perceive timbres that start abruptly as louder than those that do not, but more interestingly we also tend to perceive sounds with a slow attack stage as 'less important' to the mix, even though in reality this may not be the case at all. As a consequence, when pad sounds are used as 'backing' instruments, they should not start abruptly but filter in or start slowly, while if they're used as leads, the attack stage should be quite abrupt, as this not only helps it cut through the mix but also gives the impression that it's an important aspect of the music.
A popular, if somewhat clichéd and overused, technique to demonstrate this is the gated pad. Possibly the best example of this in use (which incidentally doesn't sound clichéd) is Sasha's club hit 'Xpander', which many clubbers still view as one of the greatest trance tracks of all time. A single evolving, shifting pad is played as one long progressive note throughout the track and a noise gate is employed to rhythmically cut the pad. When the noise gate releases and lets the sound back through, a plucked lead is dropped in to accentuate the return of the pad. The gated pad effect can be constructed in one of the following three ways:
A series of MIDI notes are sent to a (MIDI-compatible) gate to determine where the pad should cut off. The length of the 'gate' is determined by the length of the MIDI note.
A short percussive sound is inserted into the key input of the gate, so at every kick the gate is activated. The length of the gate is determined by the hold and release parameters on the gate.
A series of CC11 (expression) commands are sent to the synthesizer creating the pad, which successively closes and opens the expression to produce a gated sound. The first CC11 command is set at zero to turn the pad off, while this is followed a few ticks later by another CC11 command set to 127 to switch it back on again. The length of the gate is controlled by the distance between the CC11 off and on commands in the sequence.
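The CC11 approach is easy to prototype in any environment that can emit raw controller events. The sketch below (plain Python; the 480 PPQN resolution and 16th-note pattern are illustrative assumptions, not fixed values) generates the alternating expression events as (tick, controller, value) tuples – here each step opens the pad at its start and closes it a few ticks later:

```python
PPQN = 480             # ticks per quarter note (sequencer dependent)
SIXTEENTH = PPQN // 4  # one 16th note in ticks
CC_EXPRESSION = 11     # MIDI expression controller

def gated_pad_events(bars=1, gate_ticks=30):
    """Return CC11 events that chop a held pad into 16th-note stabs.

    Each step sends CC11=127 (pad audible) at the step start and
    CC11=0 (pad muted) gate_ticks later.
    """
    events = []
    for step in range(bars * 16):
        start = step * SIXTEENTH
        events.append((start, CC_EXPRESSION, 127))             # open the pad
        events.append((start + gate_ticks, CC_EXPRESSION, 0))  # close it again
    return events

events = gated_pad_events()
```

Widening or narrowing `gate_ticks` plays the same role as the hold/release controls on a hardware gate.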
Naturally, for this to work in a musical context the pad must evolve throughout, because if this technique were used on a sustaining sound with no movement you may as well just retrigger the timbre for its attack stage whenever required rather than gate it. In fact, it's this movement that's the real secret behind creating a good pad. If the timbre remains static throughout, without any timbral variations, the ear soon becomes bored and switches off. This is the main reason why analogue synths are often recommended for the creation of pads: the random fluctuations of the oscillators' pitch and timbre provide an extra sense of movement that, when augmented with LFOs and/or envelopes, produces a sound with constant movement that you can't help but be drawn to. There are various ways you can employ this movement, ranging from LFOs to using envelopes to gradually increase or decrease the harmonic content while it plays. To better explain this, we'll look at the methodology behind how the envelopes are used to create the beginnings of a good pad.
By setting a fast attack and a fairly long decay on an amplifier envelope we can determine that the sound will take a finite amount of time to reach the sustain portion. Provided that the sustain level is set just below the level reached at the end of the decay stage, the sound will decay slowly to sustain and then hold there until the MIDI note is released, whereupon it progresses to the release stage. This creates the basic premise of any pad or string timbre – it continues on and on until the key is released. Assuming that the pad has a rich harmonic structure, movement can be added by gradually increasing a low-pass filter's cut-off while the
pad is playing – there will be a gradual rise in the amount of harmonics contained within the pad.
If a positive filter envelope were employed to control the filter's action and the envelope amount were set to fully modulate the filter, then by using a long attack, short decay, low sustain and fast release, the filter's action would be introduced slowly before going through a fast decay stage and moving onto the sustain. This would create an effect whereby the filter would slowly open through the course of the amplifier's attack, decay and sustain stages before entering a short decay stage during the 'middle' of the amp's sustain stage. Conversely, by using the same filter envelope settings but applied negatively, the envelope is inverted, creating an effect of sweeping downwards rather than upwards. Either way, the amp and filter envelopes working on different time scales create a pad whose harmonic content evolves over a period of time. Notably, this is a one-way configuration, because if the functions of these envelopes were reversed (in that the filter begins immediately but the amplifier's attack was set long) the filter would have little effect, since there would be nothing to filter until the pad is introduced.
This leads onto the subject of producing timbres with a high harmonic content, which is accomplished by mixing a series of oscillators together along with detuning and modulating them. The main principle here is to create a sound that features plenty of harmonics for the filters to sweep, so saw, triangle, noise and square waves produce the best results, although on some occasions a sine wave can be used to add some bottom-end presence if required. A good starting point for any pad is to use two saw, triangle or pulse wave oscillators with one detuned from the other by 3 or 5 cents. This introduces a slight phasing effect between the oscillators, helping to widen the timbre and make it more interesting to the ear. To further emphasize this detuning, a saw, triangle, sine or noise waveform LFO set to gently and slowly modulate the pitch or volume of one of the oscillators will produce a sound with a more analogue feel while also preventing the basic timbre from appearing too static.
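The cents figures quoted here map to frequency ratios through the standard relation ratio = 2^(cents/1200), and the difference between the two oscillators' resulting frequencies sets the speed of the phasing. A quick check in Python (the 440 Hz reference pitch is just an example):

```python
import math

def detune(freq_hz, cents):
    """Shift a frequency by a number of cents (100 cents = 1 semitone)."""
    return freq_hz * 2 ** (cents / 1200.0)

a = 440.0             # oscillator 1
b = detune(440.0, 3)  # oscillator 2, detuned upwards by 3 cents
beat_hz = b - a       # rate of the slow phasing between them
```

At 440 Hz a 3-cent detune beats at roughly 0.76 Hz – a slow swirl; wider detunes beat faster and sound more obviously chorused, and the beat rate also scales with the pitch being played.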
If the pad is being used as a lead or has to fill a large 'hole' in the mix then it's worthwhile adding a third oscillator and detuning this by 3 or 5 cents to make the timbre more substantial. The choice of waveform for the third oscillator depends on the waveforms of the original two, but in general it should be a different waveform. For instance, if two saws are used to create the basic patch, adding a third detuned triangle wave will introduce a sparkling effect to the sound, while replacing this triangle with a square wave would make the timbre exhibit a hollow character. These oscillators, combined with the previously discussed envelopes for both the filter and the amp, form the foundation of every pad sound, and from here it's up to the designer to change the envelope settings, modulation routings and the waveforms used for the LFOs to create a pad sound to suit the track. What follows is a guide to how most of the pads used in dance music are created, but it is by no means a definitive list and it's always worth experimenting to produce different variations.
RISE AND FALL PADS
To construct this pad, use two sawtooth oscillators with one waveform detuned by 3 cents. Apply a fast attack, short decay, medium sustain and a long release for the amp envelope and employ a low-pass filter with the cut-off and resonance set quite low. This should result in a static buzzing timbre. From this, set the filter's envelope to a long attack and decay but use a short release and no sustain, and set the filter envelope to maximum positive modulation. Finally, use the filter's key follow so that it tracks the pitch of the notes being played. This results in the filter sweeping up through the pad before slowly settling down. If the pad is to continue playing during the sustain portion for a long period of time then it's also worth modulating the pitch of one of the oscillators with a triangle LFO and modulating the filter's cut-off or resonance with a square wave LFO. Both of these should be set to a medium depth and a slow rate.
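It can help to jot recipes like this down as data before committing them to a synth. The dictionary below restates the rise-and-fall settings as a patch sheet; the key names and the 0.0–1.0 value scale are my own shorthand, not any particular instrument's parameter names:

```python
# Illustrative patch sheet for the rise-and-fall pad described above.
# Keys and value scales (0.0-1.0) are assumptions, not a real synth's API.
rise_and_fall_pad = {
    "osc1": {"wave": "saw"},
    "osc2": {"wave": "saw", "detune_cents": 3},
    "amp_env": {"attack": 0.0, "decay": 0.2, "sustain": 0.5, "release": 0.8},
    "filter": {"type": "low-pass", "cutoff": 0.2, "resonance": 0.2,
               "key_follow": True, "env_amount": 1.0},
    "filter_env": {"attack": 0.8, "decay": 0.8, "sustain": 0.0, "release": 0.2},
    "lfo1": {"wave": "triangle", "target": "osc2.pitch",
             "rate": "slow", "depth": "medium"},
    "lfo2": {"wave": "square", "target": "filter.cutoff",
             "rate": "slow", "depth": "medium"},
}
```

Keeping presets as plain data like this makes it easy to compare variations or translate a recipe between instruments.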
RESONANT PADS
Resonant pads can be created by mixing a triangle and a square wave together and detuning one of the oscillators from the other by 5 cents. Similar to the previous pad, the amp's attack should be set to zero with a short decay, medium sustain and long release, but set the filter's envelope to a long attack, sustain and release with a short decay. Using a low-pass filter, set the cut-off quite low but set the resonance to around 3/4 so that the timbre appears quite resonant. Finally, modulate the pitch of the triangle oscillator with a sine wave LFO set to a slow rate and a medium depth, and use the filter's key follow. The LFO modulation creates a pad that exhibits the natural analogue 'character', while the filter tracks the pitch, sweeps in through the attack and decay of the pad and then sustains itself through the amp's sustain. Again, if the pad's sustain is going to continue for a length of time it's worthwhile employing a sine, pulse or triangle wave LFO to modulate the filter's cut-off to help maintain interest.
SWIRLING PADS
The swirling pad is typical of some of Daft Punk's work and consists of two sawtooth oscillators detuned from one another by 5 cents. A square LFO is often applied to one of the saws to gently modulate the pitch, while a third oscillator set to a triangle wave is pitched approximately 6 semitones above the two saws to add a 'glistening' tone to the sound. The amp envelope uses a medium attack, sustain and release with a short decay, while the filter envelope uses a fast attack and release with a long decay and medium sustain. A low-pass filter is used to modify the tonal content, with the cut-off set low and the resonance set to midway. Finally, chorus and then phaser or flanger effects are applied to the timbre to produce a swirling effect. It's important that the flanger or phaser is inserted after the chorus rather than before, so that it modulates the chorus effect as well.
THIN PADS
All of the pads we've covered so far are quite heavy, and in some instances you may need a lighter pad to sit in the background. For this, pulse oscillators are possibly the best to use, since they do not contain as many harmonics as saws or triangles and you can use an LFO to modulate the pulse width to add some interest to the timbre. These types of pads simply consist of one pulse oscillator that uses a medium attack, sustain and release with a fast decay on the amp envelope. A low-pass or high-pass filter is used, depending on how deep or bright you want the sound to appear, and there is usually little need for a filter envelope, since gently modulating the pulse width with a sine, sawtooth or noise waveform produces all the movement required. If you decide that it does need more movement, however, a very slow triangle LFO set to a medium depth modulating the filter cut-off or resonance will usually produce enough movement to maintain some interest, but try not to get too carried away. The purpose of this pad is to sit in the background, and too much movement may push it to the front of the mix, resulting in all the instruments fighting for their own place in the mix.
The data CD contains audio examples of these timbres being programmed.
PROGRAMMING DRUMS
The majority of drum timbres used in the creation of dance music chiefly originated from four machines that are now out of production – the Roland TR909, the Roland TR808, the Simmons SDS-5 and the E-mu Drumulator. Consequently, while these machines (or, to use their proper term, drum synthesizers) are often seen as requisites for producing most genres of dance music, they're very highly sought after and demand an absurd sum of money on the second-hand market, if you can find them. Because of this, most musicians will use software or hardware emulations, and although, to the author's knowledge, there are no alternatives to the Simmons SDS-5 and the E-mu Drumulator, both the TR machines are available from numerous software and hardware manufacturers. Indeed, due to the importance of these kits when producing dance, most keyboards and tone modules today will feature the requisite TR808 and 909 kits, and there are plenty of software plug-ins and stand-alone sequencers that offer them. The most prominent of these is Propellerhead's ReBirth, which imitates both TR machines and a couple of TB303s (more on these later), but Propellerhead's Reason, Image-Line's FruityLoops and D-lusion's DrumStation all offer samples/synthesis of the original machines too.
KICK DRUMS
The Roland TR909 kick drum is the most frequently used kick in dance music, but the way it is created and edited with synth parameters and effects can determine the genre of music it suits the most. In the original machine this kick was created using a sine wave with an EG used to control its pitch. To add more of a transient to this sine wave, a pulse and a noise waveform were combined and filtered to produce an initial click. This was then combined with the sine wave to produce the typical 909 kick sound. Notably, due to the age of these machines, there was not a huge amount of physical control over the timbre they created, but by building your own kick you can play with a larger number of parameters once the basic elements are down. In fact, this applies to all drum sounds, not just the kick, and is much better than sampling them from other records or sample CDs, since with a sample you can't alter the really important parameters.
When constructing a kick in a synthesizer, the frequency of the sine wave will determine how deep the kick becomes, and while anywhere between 30 and 100 Hz will produce a good kick, it does depend on how deep you want it to be. A sine wave at 30–60 Hz can be used to create an incredibly deep, bowel-moving thud typical of hip-hop, a frequency of 50–80 Hz can provide a starting block for lo-fi and a frequency of 70–100 Hz can form the beginning of a typical club kick. An attack/decay EG can then be used to modulate the pitch of the sine wave. These envelopes are most common on drum machines but are also available on some synthesizers, such as the Roland JP8080, and a number of soft synthesizers. Similarly, this action can be recreated on any synthesizer by dropping both the sustain and release parameters to zero.
Naturally, the depth of the pitch modulation needs to be set to maximum so that the effect of the pitch envelope can be heard, and it should be set to a positive depth so that the pitch moves downwards and not upwards (as it would if the pitch envelope were negative). Additionally, the attack parameter should be set as fast as possible so that the pitch modulation begins the instant the key is struck. If your synthesizer doesn't have access to a pitch envelope, then the same effect can be produced by pushing the resonance up to maximum so that the filter begins to self-oscillate. The filter's envelope will then act in a manner similar to a pitch envelope, so the attack, sustain and release will need to be set at zero and the decay can be used to control the kick's decay.
Once this initial timbre is created it's prudent to synthesize a clicking timbre to place over the transient of the sine wave. This can help to give the kick more presence and help it to pull through a mix. Possibly the best way to accomplish this is to use a square wave pitched down, with a very fast amplifier attack and decay setting, to produce a short sharp click. The amount that this wave is pitched down will depend entirely on the sound you want to produce, so it's sensible to layer it over the top of the sine wave and then pitch it up or down until the transient of the kick sounds right for the mix. This, however, is only to acquire a basic kick sound, and it's now open to tweaking with all the parameters you have at your disposal, which incidentally are many more than the humble 909 or 808 have to offer.
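As a rough illustration of the sine-plus-pitch-envelope idea, the sketch below renders a basic kick in plain Python. The base frequency, envelope times and the size of the initial pitch sweep are illustrative starting points rather than the 909's actual values, and the transient click described above is omitted for brevity:

```python
import math

SR = 44100  # sample rate in Hz

def kick(base_hz=55.0, pitch_mult=8.0, pitch_decay=0.04,
         amp_decay=0.35, length=0.5):
    """Render a simple kick: a sine wave whose pitch falls exponentially
    from base_hz * pitch_mult down to base_hz, with a decaying amplitude
    (zero sustain and release, as described in the text)."""
    n = int(SR * length)
    out, phase = [], 0.0
    for i in range(n):
        t = i / SR
        # exponential pitch envelope: a fast drop back to the base frequency
        freq = base_hz * (1.0 + (pitch_mult - 1.0) * math.exp(-t / pitch_decay))
        phase += 2.0 * math.pi * freq / SR  # integrate phase to avoid clicks
        amp = math.exp(-t / amp_decay)      # simple amplitude decay
        out.append(amp * math.sin(phase))
    return out

samples = kick()
```

Lengthening `pitch_decay` lets the sweep linger for a 'boomier' feel, while shortening it makes the kick snappier – the same trade-off the pitch decay control offers on a synth.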
Firstly, as we’ve already touched upon the transient where we attain most
of the information about a sound, adjusting the fi lter cut-off and resonance
of the square wave will dramatically change the entire character of the kick.
A high-resonance setting will produce a kick with a more analogue character,
while increasing the amplifi er’s decay will produce a kick that sounds quite
boxy’. Increasing the fi lter’s cut-off will result in a more natural sounding
kick, while increasing the pulse width will create a more open, hollow timbre.
Additionally, the oscillator producing the sine wave, can also be affected with
the synthesis parameters on offer. Using the pitch and pitch envelope param-
eters you can adjust how the pitch reacts on the sine wave, but more impor-
tantly, you can determine how ‘boomy ’ the kick is through the pitch decay.
In this context, this will set the time that it takes to drop from the maximum
pitch change to the sine wave’s normal pitch. Thus, by increasing this, the pitch
of the sine wave doesn’t fall as quickly, permitting the timbre to continue for
longer creating a ‘boomy ’ feel. Similarly, decreasing it will shorten its length
making it appear snappier. More interestingly, though, if you can adjust the
properties of the envelope’s decay slope you can use it to produce kicks that are
fatter or have a smacking/slapping texture.
If the decay’s slope remains linear, the sound will die in a linear fashion pro-
ducing the characteristic 909 kick sound. However, if a convex slope is used
in its place, as the pitch decays it will ‘bow ’ the pitch at a number of fre-
quencies, which results in a kick that’s more ‘rounded’ and much fatter. On
the other hand, if the slope is concave, the pitch will curve ‘inwards ’ during
the decay period producing the sucking/smacking timbre similar to the E-mu
Drumulator. By increasing the length of the pitch decay further these effects
can be drawn out producing kicks that are suited towards all genres of dance.
It should be noted here, however, that not all synthesizers allow you to edit
the envelope’s slope in such a manner but some software samplers (such as
Steinberg’s HALion) will allow you to modify the slope of a sample. Thus, if
you were to sample a kick with a lengthy decay, you could then modulate the
various stages of the envelope.
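One common way to parameterise these slope shapes is to raise a linear decay to a power – an assumption of mine rather than any particular sampler's actual control, but it captures the convex/concave distinction numerically:

```python
def decay_value(t, shape=1.0):
    """Decay level at normalised time t (0.0 = start, 1.0 = end).

    shape == 1 gives a linear decay; shape < 1 bows the curve outwards
    (convex: the level stays high longer, a rounder, fatter sound);
    shape > 1 curves it inwards (concave: a fast initial drop, the
    sucking/smacking character described above)."""
    return (1.0 - t) ** shape

# midway through the decay the three slopes diverge clearly
convex = decay_value(0.5, 0.5)   # ~0.71
linear = decay_value(0.5, 1.0)   # 0.5
concave = decay_value(0.5, 2.0)  # 0.25
```

All three curves start at 1 and end at 0; only the path between differs, which is exactly what separates the 909's linear thump from the Drumulator's smack.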
Alternatively, if the synthesizer is quite substantial it may allow you to modulate not only the sine wave but certain aspects of the envelope with itself. Fundamentally this means that the pitch envelope modulates not only the sine wave but also its own parameters. Using this you can set the synthesizer's modulation destination to affect the oscillator's pitch and its own decay parameter by a negative or positive amount, which results in the decay becoming convex or concave, respectively. This effect is often referred to as 'recursive' modulation, but as mentioned, this is only possible on the more adept synthesizers.
Nevertheless, whether recursive modulation is available or not, the key at this stage is to experiment with different variations of pitch decay, the filter section on the square wave and both oscillators. For instance, replacing the sine wave with a square produces a 'clunky' sound, while replacing it with a triangle produces a sound similar to the Simmons SDS-5.
While these methods will produce a kick that can be modified to suit all genres, for hip-hop you can sometimes glean better results using a self-oscillating filter and a noise gate. If you push the resonance up until it breaks into self-oscillation it will produce a pure sine wave. This is often purer than the oscillators themselves and you can use it to produce a deep tone that's suitable for use as a kick in the track (usually 40 Hz). If you then programme a 4/4 loop in a MIDI sequencer and feed the results into a noise gate, the gate can be used as an EG. While playing the loop, lower the threshold so that only the peaks of the wave are breaching it, set the attack to zero and use the release to control the kick's decay. This kick can be modified further by adjusting the hold time, as this will allow more of the peak through before entering the release stage.
Once the basic kick element is down it will most likely benefit from some compression, but the settings to use will depend on the genre of music. Typically, house, techno and trance will benefit from the compressor using a fast attack so that the transient is crushed by the compression; this produces the emblematic club kick. Setting the compressor so that the attack misses the transient but grips the decay stage, allowing the decay to be raised in gain, produces the characteristic hip-hop, big beat or drum 'n' bass timbre.
SNARE DRUMS
The snare drum in most dance music is derived (somewhat unsurprisingly) from the TR909 or, in the case of house, the E-mu Drumulator or the Simmons SDS-5. All of these, however, were synthesized in much the same way, by using a triangle oscillator mixed in with pink or white noise that was treated to positive pitch movements. This can, of course, be emulated in any synthesizer by selecting a triangle wave for the first oscillator and using either pink or white noise for the second. The choice between pink or white noise depends on the overall effect you wish to achieve but, by and large, pink noise is used for house, lo-fi and ambient snares while white is used for drum 'n' bass, techno, garage, trance and big beat. This is simply because pink noise contains more low-frequency content and energy than white and hence produces a thicker, wider-sounding timbre.
To produce the initial snare sound much of the low-frequency content will need to be removed, so it's sensible to employ a high-pass, band-pass or notch filter, depending on the type of sound you require. Notching out the middle frequencies will create a clean snare sound that's commonly used in breakbeat, while a band-pass filter will add crispness to the timbre, making it suitable for techno. Alternatively, using a high-pass filter with a medium resonance setting will create the house 'thunk' timbre. As with the kick drum, snares need to start immediately on key press and remain fairly short, so the amp's EG will need setting to a zero attack, sustain and release, while the decay can be used to control the length of the snare itself. In some cases, if it's possible in the synthesizer, it's prudent to employ a different amp EG for the noise and the triangle wave. This way the triangle wave can be kept quite short and swift by using a fast decay, while the noise can be made to ring a little further by increasing its decay parameter. The more this is increased, the more atmospheric the snare will appear, allowing you to move from the typical techno snare, through big beat and trance, before finally arriving at ambient. What's more, this approach may also allow you to use a convex envelope on the noise to produce a smacking timbre similar to the Nine Inch Nails (NIN) 'Closer' snare.
If two amp EGs are not available, small amounts of reverb can help to lengthen the timbre, and if this is followed with a noise gate with a fast attack and short hold time, the decay can be used to control the amount of ambience in the loop. Even if artificial ambience isn't required, employing a gate can be particularly important when programming house, drum 'n' bass and hip-hop loops, as the snare is often cut short in these genres to produce a more dynamic loop. Additionally, for drum 'n' bass the snare can then be pitched further up the keyboard to produce the characteristic bright 'snap'.
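The triangle-plus-noise recipe with separate decays for the tone and the noise can be sketched in plain Python as below; the frequency, decay times and mix balance are illustrative guesses to taste, not the 909's values:

```python
import math
import random

SR = 44100  # sample rate in Hz

def snare(tone_hz=180.0, tone_decay=0.06, noise_decay=0.18, seed=1):
    """Snare sketch: a short triangle-wave 'body' mixed with longer white
    noise, each shaped by its own amplitude decay (two amp EGs)."""
    rng = random.Random(seed)
    n = int(SR * noise_decay * 3)
    out = []
    for i in range(n):
        t = i / SR
        ph = (t * tone_hz) % 1.0              # naive triangle oscillator
        tri = 4.0 * abs(ph - 0.5) - 1.0
        tone = tri * math.exp(-t / tone_decay)
        noise = rng.uniform(-1.0, 1.0) * math.exp(-t / noise_decay)
        out.append(0.5 * tone + 0.5 * noise)
    return out

samples = snare()
```

Raising `noise_decay` relative to `tone_decay` moves the sound from a tight techno snare towards the longer, airier tails described above.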
This initial snare can be further modified using a pitch envelope to modulate both oscillators, and this can be applied either positively or negatively depending on the genre of music. Most usually, small amounts of positive pitch modulation are applied to force the pitch downwards as it plays, but some house tracks will employ a negative envelope to create a snare that exhibits a 'sucking' nature, resulting in a thwacking sound. If you decide to use this technique, however, it's often worth removing the transient of the snare in a wave editor and replacing it with one that uses positive pitch modulation. The two combined then produce a sound that has a good solid strike but decays upwards in pitch at the end of the hit. If this isn't possible in the synthesizer/wave editor then a viable alternative is to sweep the pitch from low to high with a sawtooth or sine LFO (provided that the saw starts low and moves high) set to a fast rate, or to programme a series of control change (CC) messages to sweep it from the sequencer. Once this is accomplished, small amounts of compression set so that only the decay is squashed (i.e. a slow attack) will help to bring it up in volume so that it doesn't disappear into the rest of the mix.
HI-HATS
Hi-hats can be synthesized in a number of ways depending on the type of sound you require, but in the 'original' drum machines they were created with nothing more than filtered white noise. This can be accomplished in most synthesizers by selecting white noise as an oscillator and setting the filter envelope to a fast attack, sustain and release with a medium-to-short decay. Finally, set the filter to a high-pass and use it to roll off any frequencies that are too low to create a hi-hat. The length of the decay parameter will determine whether the hi-hat is open or closed (open hats have a longer decay period).
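A minimal version of this filtered-noise recipe in plain Python, using a one-pole high-pass (a far cruder filter than a real synth's, and the cut-off and decay values are guesses to taste):

```python
import math
import random

SR = 44100  # sample rate in Hz

def hihat(decay_s=0.08, cutoff_hz=8000.0, seed=1):
    """Filtered-noise hi-hat: white noise through a one-pole high-pass,
    shaped by an exponential decay. A longer decay_s gives an open hat."""
    rng = random.Random(seed)
    n = int(SR * decay_s * 4)                 # render ~4 time constants
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)    # one-pole high-pass coefficient
    alpha = rc / (rc + 1.0 / SR)
    out, prev_x, prev_y = [], 0.0, 0.0
    for i in range(n):
        x = rng.uniform(-1.0, 1.0)
        y = alpha * (prev_y + x - prev_x)     # high-pass difference equation
        prev_x, prev_y = x, y
        env = math.exp(-(i / SR) / decay_s)   # zero attack, no sustain/release
        out.append(env * y)
    return out

closed = hihat(decay_s=0.05)
open_hat = hihat(decay_s=0.25)
```

The only difference between the closed and open hat here is the decay length, mirroring the description above.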
While this is the best way to produce a typical analogue hi-hat sound, it can sound rather cheap and nasty on some synthesizers, and even if it produces the timbre, it can still appear quite dreary. As a result, a much better approach is to use either ring modulation or FM, as this produces a hi-hat that sounds sparkling and animated, helping to add some energy to the music. Ring modulation is possibly the easier solution of the two, simply consisting of modulating a high-pitched triangle wave with a lower-pitched triangle. FM consists of modulating a square or sine wave with a high-pitched triangle oscillator. The result is a high-frequency noise waveform that can then be modified with a volume envelope set to a zero attack, sustain and release with a short-to-medium decay. If FM is used and the modulator source is modified with a pitch envelope and the amount of FM is increased or reduced, the resulting waveform can take on more interesting properties, so it's worthwhile experimenting with both these parameters (if available). Once this basic timbre is constructed, shortening the decay creates a closed hi-hat while lengthening it will produce an open hat. Similarly, it's also worth experimenting by changing the decay slope to convex or concave to produce fatter or thinner sounding hats.
Notably, unlike most percussive instruments, compression should not be used on hi-hats, as all but the best compressors will reduce the higher frequencies even if the attack is set so that it skips past the transient. This obviously results in a dull-sounding top end to a mix, so any form of dynamic restriction should be avoided on high-frequency sounds, which include shakers, cymbals, cowbells, claves and claps.
SHAKERS
Shakers are constructed in a fashion similar to hi-hats. That is, they are created
from white noise with a short attack, sustain and release with a medium decay
on the amplifier and filter envelope. A high-pass filter is then used to remove
any low-end artefacts to produce a timbre consisting entirely of higher
frequencies. Once you have the basic ‘hi-hat’ timbre, the decay can be
lengthened to produce a longer sound, which is then treated to an LFO
modulating the high-pass filter to produce some movement. The waveform, rate
and depth of the LFO depend entirely on the overall sound you want to produce,
but as a general starting point, a sine wave LFO with a fast rate and medium
depth produces the typical ‘shaker’ timbre. Again, once this initial timbre has
been constructed, like all drum sounds, it’s worth experimenting by changing
the envelope’s attack and decay slope from linear to convex or concave.
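The shaker recipe can be sketched the same way; here a sine LFO sweeps a simple one-pole high-pass filter over white noise. The cutoff range, LFO rate and duration are assumptions chosen only to illustrate the idea.

```python
import math, random

SR = 44100  # assumed sample rate

def shaker(dur_ms=250, lfo_rate=8.0, seed=1):
    """White noise through a one-pole high-pass whose cutoff is swept by a
    sine LFO, then a zero-attack linear decay envelope."""
    random.seed(seed)
    n = int(SR * dur_ms / 1000)
    out, prev_in, prev_out = [], 0.0, 0.0
    for i in range(n):
        x = random.uniform(-1.0, 1.0)
        # LFO sweeps the high-pass cutoff between roughly 2 kHz and 6 kHz
        cutoff = 4000.0 + 2000.0 * math.sin(2 * math.pi * lfo_rate * i / SR)
        rc = 1.0 / (2 * math.pi * cutoff)
        a = rc / (rc + 1.0 / SR)
        y = a * (prev_out + x - prev_in)   # one-pole high-pass
        prev_in, prev_out = x, y
        out.append(y * (1.0 - i / n))      # decay envelope
    return out
```

Raising `lfo_rate` or widening the cutoff sweep exaggerates the characteristic motion.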
CYMBALS
Again, cymbals are created in a fashion similar to hi-hats and shakers as they’re
constructed from noise, but rather than use the noise from an oscillator, it’s
commonly generated from ring modulation or FM. Using two square waves
played high on the keyboard, detune them so that they’re approximately two
octaves apart and
set the amp’s EG to a fast attack with no release or sustain and a medium decay
(in fact similar to hi-hats and shakers). Once the tone is shaped, both need to
be fed into a ring or cross modulator to produce the typical analogue cymbal
noise timbre.
The attack of the cymbal is particularly important, so it may be worth
synthesizing a transient to drop over the top or using a filter envelope to
produce more of an initial crash. The filter envelope is set to a fast attack,
sustain and release with a medium-to-short decay as this produces an initial
‘hit’, but you can synthesize an additional transient using an oscillator that
produces pink noise with the same filter settings as previously mentioned. If
ring modulation is not available on the synthesizer, a similar sound can be
created using FM. This consists of modulating a high-pitched square wave with
another, lower pitched, square or a triangle wave. As with the FM used to create
hi-hats, by modifying the source with pitch modulation or increasing/decreasing
the amount of FM you can create a number of different crash cymbals.
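A hedged sketch of the ring-modulated cymbal: two naive squares roughly two octaves apart are multiplied together and shaped by a fast-attack, medium-decay envelope. The 2 ms attack and the frequencies are assumptions; a real patch would add the filter-envelope ‘hit’ described above.

```python
SR = 44100  # assumed sample rate

def square(freq, n):
    """Naive square-wave oscillator."""
    return [1.0 if (i * freq / SR) % 1.0 < 0.5 else -1.0 for i in range(n)]

def cymbal(dur_ms=400, f=2000.0):
    """Two squares about two octaves apart, ring-modulated, with a
    fast-attack/medium-decay amplitude envelope."""
    n = int(SR * dur_ms / 1000)
    a = square(f, n)
    b = square(f * 4.07, n)        # ~2 octaves up, slightly detuned
    attack = int(0.002 * SR)       # 2 ms attack (an assumption)
    out = []
    for i in range(n):
        env = i / attack if i < attack else 1.0 - (i - attack) / (n - attack)
        out.append(a[i] * b[i] * env)
    return out
```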
CLAPS
Claps are perhaps the most difficult ‘percussive’ element to synthesize because
they consist of a large number of ‘snaps’ all played in a rapid, sometimes
pitch-shifting, sequence. Although in the interests of theory we’ll look at how
they’re created, generally speaking you’re much better off recording yourself
clapping (remember to run the mic through a compressor first, though, as the
transient will often clip a recorder) and treating them to a chorus or
harmonizing effect or, simpler still, sampling some from a CD or record.
Generally, claps are created from white noise passed through a high-pass filter.
The filter and amp EGs (as should be obvious by now) are set to a fast attack
with no release or sustain and a decay set to suit the timbre you require;
midway is a good starting point. The filter cut-off, however, often benefits from
being augmented by a sawtooth LFO set to a very fast rate and maximum depth
to produce a ‘snapping’ type timbre. This produces the basic tone, but you’ll
also need to use an arpeggiator to create the successive snaps that follow. For
this, programme a series of staccato notes into a MIDI sequencer and use these
to trigger the arpeggiator set to one octave or less so that it constantly repeats
the same notes in fast succession. These can be pitched downwards, if required,
using a pitch envelope set to a positive depth on the oscillator, but it must be
set so that it doesn’t retrigger on each individual note, otherwise the timbre
turns into mushy pitch-shifted clutter. Although claps are difficult to synthesize,
it is often worth the effort, as adjusting the filter section and/or the decay
slopes of the amp and filter’s EG opens up a whole new realm of clap timbres.
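As a rough sketch of the clap idea, the arpeggiated ‘snaps’ can be approximated by a few very short noise bursts in quick succession followed by a longer decaying burst. The high-pass filter and LFO stages are left out for brevity; the snap count and timings are assumptions.

```python
import random

SR = 44100  # assumed sample rate

def clap(snaps=4, gap_ms=12, tail_ms=150, seed=7):
    """A few very short noise bursts in quick succession, then a longer
    final burst: a crude stand-in for the arpeggiated snaps."""
    random.seed(seed)
    out = []
    gap = int(SR * gap_ms / 1000)
    for _ in range(snaps - 1):
        burst = int(gap * 0.6)     # each snap is shorter than the gap
        out += [random.uniform(-1, 1) * (1 - i / burst) for i in range(burst)]
        out += [0.0] * (gap - burst)
    tail = int(SR * tail_ms / 1000)
    out += [random.uniform(-1, 1) * (1 - i / tail) for i in range(tail)]
    return out
```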
COWBELLS
Fundamentally, cowbells are quite easy to synthesize and can be constructed in
a number of ways. You can use two square oscillators or a triangle and a square
depending on the sound you require. If you want a sound with more body then
it’s best to use two square waves, but if you prefer a brighter sound then a
square mixed with a triangle will produce better results.
For a cowbell with more body, set two oscillators to a square wave and detune
them so that the first square plays at C#5 (554 Hz) while the other plays at G#5
(830 Hz). Follow this by setting the amp envelope to a fast attack with no release
or sustain and a very short decay. The resulting tone is then fed into a band-pass
filter which can be used to shape the overall colour of the sound. Alternatively, if
you want to create a cowbell that exhibits a brighter colour to sit near the top of
a mix, it’s preferable to use just one square wave mixed with a triangle wave. The
frequency of the square should be set around 550 Hz and the triangle should be
detuned so that it sits anywhere from half an octave to an octave from the square,
depending on the timbre you require. Both of these are then ring modulated and
the result is fed through a high-pass filter, allowing you to remove the lower
frequencies introduced by the ring modulation. Once these basic timbres are
created, the amp’s EG can be lengthened or shortened to suit the current rhythm.
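The two-square cowbell lends itself to a very direct sketch: squares at C#5 and G#5 mixed equally under a zero-attack, short-decay envelope. The band-pass shaping stage is omitted here for brevity.

```python
SR = 44100  # assumed sample rate

def square(freq, n):
    """Naive square-wave oscillator."""
    return [1.0 if (i * freq / SR) % 1.0 < 0.5 else -1.0 for i in range(n)]

def cowbell(decay_ms=120):
    """Squares at C#5 (554 Hz) and G#5 (831 Hz), mixed equally, with a
    zero-attack short-decay envelope. Decay length is an assumption."""
    n = int(SR * decay_ms / 1000)
    a = square(554.37, n)   # C#5
    b = square(830.61, n)   # G#5
    return [0.5 * (a[i] + b[i]) * (1.0 - i / n) for i in range(n)]
```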
CONGAS
Congas are constructed from two oscillators with a dash of FM to produce a
‘clunky’ timbre. These can be easily constructed in any synth that features FM
by setting the first oscillator to a sine wave and the second to any noise
waveform. The sine wave’s amp EG is set to a very fast attack and decay with no
release or sustain to produce a click, which is then used as FM for the noise
waveform. The noise waveform’s amp EG needs to be set to a fast attack with
no release or sustain and the decay set to taste.

There is little need to use a filter on the resulting sound, but if it seems to have
too much bottom or top end then use a low-pass or high-pass filter to remove
the upper or lower frequencies respectively. In fact, by employing a high-pass
filter and reducing it slowly you can create muted congas, while adjusting the
noise amp’s decay slope to convex or concave can produce the typical slapped
congas that are sometimes used in house.
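A simplified conga sketch: rather than true click-FM of the noise oscillator, a very short sine ‘click’ is layered over a decaying noise body, which gives a comparably clunky result. The frequency and times are assumptions.

```python
import math, random

SR = 44100  # assumed sample rate

def conga(body_ms=180, seed=3):
    """Simplified conga: a 5 ms sine click layered over a decaying noise
    body (a stand-in for the click-FM-of-noise recipe in the text)."""
    random.seed(seed)
    n = int(SR * body_ms / 1000)
    click_n = int(0.005 * SR)          # 5 ms click
    out = []
    for i in range(n):
        noise = random.uniform(-1, 1) * (1.0 - i / n) * 0.5
        click = 0.0
        if i < click_n:
            click = math.sin(2 * math.pi * 190.0 * i / SR) * (1.0 - i / click_n) * 0.5
        out.append(noise + click)
    return out
```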
TAMBOURINES
Tambourines, like claps, are difficult to synthesize as they essentially consist of
a number of hi-hats with a short decay, each occurring one after the other in
rapid succession, which is passed through a band-pass filter. This means that
they can initially be constructed by using white noise, FM or ring modulation
in the same manner as hi-hats. Once this basic timbre is down, the tone is
augmented with a sawtooth LFO set to a fast rate and full depth. After this,
you’ll need to programme a series of staccato notes to trigger the synthesizer’s
arpeggiator to create a series of successive hits. The decay parameter of the
amp’s EG can then be used to manipulate the tambourine’s character, while
the results are fed into a band-pass filter which can be used to shape the type of
tambourine being used. Typically, wide band-pass settings will recreate a
tambourine with a large tympanic membrane, and thinner settings will recreate
a tambourine with a smaller membrane.
TOMS
Toms can be synthesized in one of two ways, depending on how deep and
wide you want the tom drum to appear. Typically, they’re synthesized using
the same methods as producing a kick drum, but utilizing a higher pitch with a
longer decay on the amp’s EG and some white noise mixed in to produce
some ambience. The settings for the white noise oscillator generally remain the
same as the amp’s EG for the sine wave (zero attack, release and sustain with a
medium decay). Alternatively, they can be produced by creating a snare timbre
by mixing a triangle wave with a noise waveform but modulating the noise
waveform with pitch so that it falls while the triangle wave continues unchanged.
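The first (kick-style) tom method can be sketched as a sine wave whose pitch falls over the note, mixed with quiet white noise for the ambience. The start and end pitches and the mix ratio are assumptions.

```python
import math, random

SR = 44100  # assumed sample rate

def tom(start=300.0, end=120.0, dur_ms=350, seed=5):
    """Kick-style tom: a sine whose pitch falls from `start` to `end` Hz
    over the note, mixed with quiet white noise for ambience."""
    random.seed(seed)
    n = int(SR * dur_ms / 1000)
    out, phase = [], 0.0
    for i in range(n):
        f = end + (start - end) * (1.0 - i / n)   # falling pitch envelope
        phase += 2 * math.pi * f / SR
        env = 1.0 - i / n                          # zero-attack decay
        out.append(0.8 * math.sin(phase) * env + 0.2 * random.uniform(-1, 1) * env)
    return out
```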
PERCUSSION GUIDELINES
Although we’ve looked at the percussive instruments used throughout all dance
genres, if you decide to become more creative, or simply want to experiment
further with producing percussive hits, there are some general guidelines that
you can follow.
The main oscillator almost always consists of a sine or triangle wave with its
pitch modulated by a positive pitch envelope. This creates the initial tone of
the timbre, while the second oscillator is used to create either the subsequent
resonances of the skin after it’s been hit or, alternatively, the initial transient.
For the resonance, white or pink noise is commonly used, while to create the
transient, a square wave is often used. The amp and filter envelope of the first
oscillator is nearly always set to a zero attack, zero release and medium decay.
This is so that the sound starts immediately on key press (the drummer’s strike)
while the decay controls how ambient the surrounding room is. If the decay is
set quite long the sound will obviously take longer to decay away, producing
an effect similar to reverb on most instruments. That said, if the decay is set
too long on low-pitched percussive elements such as a kick it may result in a
‘whooping’ sound rather than a solid hit. If the second oscillator is being used
to create the subsequent resonance to the first oscillator then the amp and
filter settings are the same as the first oscillator, whereas if it’s being used to
create the transient, the same attack, release and sustain settings are used but
the decay is generally much shorter.
For more creative applications, it’s worthwhile experimenting with the slope
of the amp and filter’s EG decay and occasionally attack. Also, by experimenting
with frequency modulation and ring modulation it’s possible to create a
host of new drum timbres. For instance, if the second oscillator is producing
a noise waveform, this can be used to modulate the main oscillator to reduce
the overall tone of the sound. What’s more, by using the filters the sound can
be shaped to fit into the current loop. For example, using a high-pass filter you
can remove the ‘boom’ from a kick drum which, as a consequence, produces a
tighter and more punchy kick. The key is to experiment.
The data CD contains audio examples of these timbres being programmed.
PROGRAMMING BASS
Synthesizer basses are a difficult instrument to encapsulate in terms of
programming as we have no real expectations of how they should sound, and
they always sound different when placed into a mix anyway. As such, there are
no definitive ways to construct a bass, as pretty much anything goes provided
that it fits with the music. Of course, as always, there are some guidelines that
apply to all basses, and many genres also tend to use a similar bass timbre, so
here we’ll concentrate on how to construct these along with some of the basic
guidelines.
Generally speaking, most synthesizer bass sounds are quite simple in design
as their main function is to supply some underpinning; it’s the lead/vocals
that provide the main focal point. As a result, they are not particularly
complex to programme and you can make some astounding bass timbres using
just one or two oscillators. Indeed, the big secret to producing great basses
comes not from the oscillators but from the filters and a thoughtful
implementation of modulation to create some movement.
Whenever approaching a bass sound it’s wise to have the bass riff programmed
and playing back in your sequencer along with the kick drum and any other
priority instruments. By doing so, it’s much easier to hear if the bass is
interfering with the kick and other priority instruments while you manipulate
the parameters. For example, if the bass sound seems to disappear into the track
or has no definite starting point, making it appear ‘woolly’, then you’ll need to
work with both the amp and the filter envelopes to provide a more prominent
attack. Typically, the amplifier’s attack should be set to its shortest time so that
the note starts immediately on key press and the decay should be set so that
it acts as a release setting (sustain and release are rarely used in bass timbres).
This is also common with the filter’s envelope. Setting the filter’s attack stage
too long will result in the filter slowly fading in over the length of the note,
which can destroy the attack of the bass.
Notably, bass timbres also tend to have fairly complex attacks, so if a bass needs
a more prominent attack it’s sometimes prudent to layer an initial pluck over the
transient. This pluck can be created by synthesis in the same manner as creating
a pluck for a kick drum but, more commonly, percussion sounds such as
cowbells and wood blocks pitched down are used to add to the transient. If
this latter approach is used then it’s advisable to reduce the length of the drum
sample to less than 30 ms. This is because, as we touched upon when discussing
granular synthesis, we find it difficult to perceive individual sounds if they are
less than 30 ms in length. This can usually be accomplished by reducing the
amp’s decay (remember that drum samples have no sustain or release), with
the benefit that you can use amplitude envelopes to fade out the attack
transient oscillator as you fade in the main body of the bass, helping to keep the
two sounds from getting in each other’s way. If this is not possible then simply
reducing the volume of the drum timbre until it merges with the bass timbre
may provide sufficient results.
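The crossfade trick can be sketched directly: a pitched-down square ‘click’ lasting under 30 ms fades out while the sine body of the bass fades in over the same window. All frequencies and gains here are assumptions.

```python
import math

SR = 44100  # assumed sample rate

def layered_bass(freq=55.0, dur_ms=400, click_ms=25):
    """Crossfade a sub-30 ms percussive click (here a pitched-down square)
    into the main sine bass body so the two layers stay out of each
    other's way."""
    n = int(SR * dur_ms / 1000)
    cn = int(SR * click_ms / 1000)
    out = []
    for i in range(n):
        body_gain = min(1.0, i / cn)            # fade the body in
        body = math.sin(2 * math.pi * freq * i / SR) * body_gain
        click = 0.0
        if i < cn:                              # fade the click out
            sq = 1.0 if (i * 220.0 / SR) % 1.0 < 0.5 else -1.0
            click = sq * (1.0 - i / cn)
        out.append(0.7 * body + 0.3 * click)
    return out
```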
Another important aspect of creating a good bass is sonic movement. No matter
how energetic the bass riff may be in MIDI, if it’s playing a simple tone
with absolutely no movement our ears can get bored very quickly and we tend
to ‘turn off’ from the music. In fact, this lack of movement is one of the main
reasons why some grooves just don’t seem to groove at all. If it’s a boring
timbre, the groove will appear just as monotonous. This movement can be
implemented in a number of ways. Firstly, as touched upon in the genre
chapters, programming CC messages or using velocity commands can breathe
life into a bass provided, of course, that you programme the synthesizer to
accept these controllers. Secondly, it’s often worthwhile assigning the
modulation wheel to control the filter cut-off, resonance, LFO rate or pitch.
This way, after programming a timbre you can move the wheel to introduce
further sonic movement and record these as CC data into the sequencer. And
finally, you can apply LFO modulation to the timbre itself to introduce pitch
or filter movement.
If, on the other hand, you decide to use a real bass guitar then, unless you’re
already experienced, it isn’t recommended that you attempt to programme the
timbre in a synthesizer or record one live. Unlike synthetic timbres, we know
how a real bass guitar should sound. If these are constructed in a synthesizer
they often sound too synthetic, while recording a bass guitar reasonably well
requires plenty of experience and the right equipment. Thus, if you feel that the
track would benefit from a real bass it is much easier to invest in a sample CD
or, alternatively, Spectrasonics Trilogy, a VST instrument that contains
multisamples of a huge number of basses, real and synthetic.
DEEP HEAVY BASS
The heavy bass is typical of drum ‘n’ bass tracks (by artists such as Photek) and
is the simplest bass timbre to produce, as it’s essentially a kick drum timbre
with the amplifier’s decay and release parameter lengthened. This means that
you’ll need to use a single oscillator set to a sine wave with its pitch positively
modulated by an attack/decay envelope. On top of this it may also be worth
synthesizing a short stab to place over the transient of the sine wave. As with
the kick, the best way to accomplish this is to use a square wave pitched down
and use a very fast amplifier attack and decay with no release or sustain. Once
this initial timbre is laid down, you can increase the sine wave’s amp EG decay
and sustain until you have the sound you require.
SUB-BASS
Following on from the large bass, another alternative for drum ‘n’ bass is the
earth-shaking, speaker-melting sub-bass. Essentially, these are formed from a
single sine wave with perhaps a small ‘clunk’ positioned at the transient to help
it pull through a mix. The best results come from a self-oscillating filter using
any oscillator, but if the filter will not resonate, a sine wave from any synth
should provide good results. Obviously, the amplifier’s attack stage should be
set at zero so that the sound begins the moment the key is depressed, but the
decay setting to use will depend entirely on the sound you require and the
current bass motif (a good starting point is to use a fairly short decay, with no
sustain or release).
If the sine wave has been produced by an oscillator then the filter cut-off will
have no effect as there are no harmonics to remove, but if it’s been created with
a self-oscillating filter, reducing the filter’s cut-off will remove the high-end
artefacts that may be present. Also, it’s prudent to set the filter’s key follow, if
available, to positive so that the further up the keyboard you play the more it
will open. This will help to add some movement to the sound. If a click is
required at the beginning of the note then, as with the drums, a square wave
with a fast amplifier attack and decay and a high cut-off setting can be dropped
onto the transient of the sine wave. Alternatively, setting the filter’s envelope to
a zero attack, sustain and release along with a very quick decay and increasing
the amount of the filter’s EG until you have the transient can also provide great
results.
This will produce the basic ‘preset’ tone typical of a deep sub-bass, but it is of
course open to tweaking the initial click with filter cut-off and resonance along
with modulating the sine wave to add some movement. For example, by
modulating the sine wave’s pitch by ±2 cents with an EG set to a slow attack,
medium decay and no sustain or release, the note will bend slightly every time
it’s played. If there is no EG available, then a sine wave LFO with a slow rate
and set to restart at key press will produce much the same results. That said, you
will have to adjust the rate by ear so that the tone pitches properly. As a
variation of this sliding bass, rather than modulate the pitch, and provided that
the sine wave has been created with a self-oscillating filter, it’s worthwhile
sliding the filter. This can be accomplished by setting the filter envelope to a
fast attack with no sustain or release and a halfway setting on the decay
parameter. By then increasing the positive depth of the envelope to the filter
you can control how much the bass slides during playback.

On top of this, keep in mind that changing the attack, decay and release of the
amp and/or filter EG from linear to convex or concave will also create new
variations. For example, setting the decay to a concave slope will create a
‘plucking’ timbre while setting it to convex will produce one that’s more
rounded. Similarly, small amounts of controlled distortion or very light
flanging can also add movement. A more creative approach, though, is to use
a vocal intonation programme with experimental settings. The more extreme
these settings are, the more the pitch will be adjusted, but with some
experimentation it can introduce interesting fluctuations.
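The slight pitch bend is easy to sketch: a sine whose detune ramps from 0 to +2 cents with a slow-attack envelope. The 100 ms attack and note length are assumptions.

```python
import math

SR = 44100  # assumed sample rate

def sub_bass(freq=50.0, dur_ms=600, bend_cents=2.0):
    """Plain sine sub-bass whose pitch is nudged by a couple of cents with
    a slow-attack envelope, so each note bends very slightly."""
    n = int(SR * dur_ms / 1000)
    attack = int(0.1 * SR)                       # slow 100 ms bend-in
    out, phase = [], 0.0
    for i in range(n):
        bend = bend_cents * min(1.0, i / attack)  # cents of detune so far
        f = freq * 2 ** (bend / 1200.0)           # cents -> frequency ratio
        phase += 2 * math.pi * f / SR
        out.append(math.sin(phase) * (1.0 - i / n))
    return out
```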
MOOG BASS
The Minimoog was one of the seminal instruments for creating basses in dance
music and has been used in its various guises throughout big beat, trance,
hip-hop and house. Again, this synthesizer is out of production and, although
it’s unlikely that you’ll find one on the second-hand market as they’re highly
prized possessions, if you do they’ll demand an extraordinarily high price.
Nevertheless, this type of timbre can be constructed on most analogue-style
synthesizers (emulated or real) using either a sine or triangle wave as the
initial oscillator, depending on whether you want a clean rounded sound (sine)
or a more gritty timbre (triangle). On top of this, add a square wave, detune
it from the main oscillator by either +3 or -3, and set the amplifier’s envelope
for both oscillators to its fastest attack, a medium decay, no sustain and no
release. The square wave helps to ‘thicken’ the sound and give it a more woody
character, while the subsequent amplifier setting creates a timbre that starts on
key press. The decay setting acts as the release parameter for the timbre. The
filter envelope to use will depend entirely on how you want the sound to appear,
but a good starting point is to set the low-pass filter cut-off to medium with a
low resonance and use a fast attack with a medium decay. By then lengthening
the attack or shortening the decay of the filter’s envelope you can create a
timbre that ‘plucks’ or ‘growls’. Depending on the type of sound you require
it may also benefit from filter key follow, so the higher up the keyboard you
play the more the filter opens. If you decide not to employ this, however, it is
worth modulating the pitch of one, or both, of the oscillators with an LFO set
to a sine wave running at a slow rate, making sure that this restarts with every
key press. This will create the basic Moog bass patch, but it should be noted
that the original Moog synthesizer employed convex slopes on the envelopes,
so if you want to emulate it exactly you will have to curve the attack, decay and
release parameters.
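A hedged sketch of the basic patch: a sine plus a slightly detuned square, summed and passed through a one-pole low-pass as a crude stand-in for the Moog ladder filter. The detune amount and cutoff are assumptions.

```python
import math

SR = 44100  # assumed sample rate

def moog_style_bass(freq=65.41, dur_ms=300, cutoff=800.0):
    """Sine plus a slightly detuned square, summed and run through a
    one-pole low-pass (a crude stand-in for the Moog ladder filter)."""
    n = int(SR * dur_ms / 1000)
    dt = 1.0 / SR
    rc = 1.0 / (2 * math.pi * cutoff)
    a = dt / (rc + dt)
    out, y = [], 0.0
    for i in range(n):
        sine = math.sin(2 * math.pi * freq * i / SR)
        # square detuned upwards by roughly 7 cents (an assumption)
        sq = 1.0 if (i * freq * 1.004 / SR) % 1.0 < 0.5 else -1.0
        x = 0.6 * sine + 0.4 * sq
        y += a * (x - y)                    # one-pole low-pass
        out.append(y * (1.0 - i / n))       # zero-attack decay envelope
    return out
```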
TB303
1: Acid House Bass
The acid house bass was a popular choice during the late 1980s but has been
making something of a comeback in house, techno, drum ‘n’ bass and chill
out/ambient. Fundamentally, it was first created using the Roland TB303 Bass
Synthesizer which, like the accompanying TR909 and TR808, is now out of
production and, as such, demands a huge price on the second-hand market.
Similar to the 909 and 808, however, there are software emulations available,
the most notable being Propellerhead’s ReBirth, which includes two 303
emulations along with the TR808 and 909. As usual, though, this timbre can be
recreated in any analogue synthesizer (emulated or real), but it should be noted
that, due to the parameters offered by the original synthesizer, there are
thousands of permutations available. As a result, we’ll just concentrate on
creating the two most popular sounds and you can adjust the parameters of
these basic patches to suit your own music.
The acid house (‘donk’) bass can be created using either a square or sawtooth
oscillator, depending on whether you want it to sound ‘raspy’ (saw) or more
‘woody’ and rounded (square). As with most bass timbres, the sound should
start immediately on key press, so the amp’s attack needs to be set to its fastest
position, but the decay can be set to suit the type of sound you require (as a
starting point try a medium decay and no sustain or release). Using a low-pass
filter, set the cut-off quite low and then slowly increase the resonance so that it
sits just below self-oscillation. This will create a quite bright (‘donky’) sound
that can be further modelled using the filter’s envelope.
As with the amp settings, the filter envelope should be set so that it fits your
music but, as a general starting point, a zero attack, sustain and release with
a decay that’s slightly longer than the amp’s EG decay will produce the typical
house bass timbre. Filter key follow is often employed in these sounds to
create movement, but it’s also prudent to modulate the filter’s cut-off with
velocity so that the harder the key is struck the more it opens. Generally
speaking, by adopting this approach you’ll have to tune the subsequent
harmonics into the key of the song, but this is not always necessary. In fact,
many dance and drum ‘n’ bass artists have used this timbre but made it
deliberately dissonant to the music to make it more interesting.
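Because the donk depends on a resonant low-pass, this sketch uses a Chamberlin state-variable filter, a common digital stand-in for an analogue-style filter (not the actual 303 circuit). The sawtooth is filtered with high resonance while the cutoff decays over the note; all values are starting-point assumptions.

```python
import math

SR = 44100  # assumed sample rate

def acid_bass(freq=65.41, dur_ms=250, base_cut=300.0, res=0.9):
    """Sawtooth through a Chamberlin state-variable low-pass whose cutoff
    decays over the note; resonance sits just below self-oscillation."""
    n = int(SR * dur_ms / 1000)
    low = band = 0.0
    q = 2.0 * (1.0 - res)          # low damping q -> high resonance
    out = []
    for i in range(n):
        saw = 2.0 * ((i * freq / SR) % 1.0) - 1.0
        cutoff = base_cut + 1500.0 * (1.0 - i / n)    # filter-decay sweep
        f = 2.0 * math.sin(math.pi * cutoff / SR)
        low += f * band                               # SVF update
        high = saw - low - q * band
        band += f * high
        out.append(low * (1.0 - i / n))               # amp decay envelope
    return out
```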
2: Resonant Bass
The resonant bass is similar in some respects to the tone produced by the acid
bass but has a much more resonant character that almost squeals. This,
however, is a little more difficult to accomplish on many synths as it relies
heavily on the quality of the filters, and ideally these should be modelled
around analogue. Start with a sawtooth oscillator and set the amplifier’s EG to
zero attack and sustain with a medium release and a fast decay. This sets the
timbre to start immediately on key press and then quickly jump from a short
decay into the release portion, in effect producing a bass with a quick pluck.
Follow this by setting the filter’s envelope to a zero attack and sustain with a
short release and a short decay (both shorter than the amp’s settings), and set
the low-pass filter’s cut-off and resonance to halfway. These settings on the
filter envelope introduce resonance to the decay’s ‘pluck’ at the beginning of
the note. This creates the basic timbre, but it’s worth employing positive filter
key follow so that the filter’s action follows the pitch, helping to maintain some
interest in the timbre. On the subject of pitch, modulating the sawtooth using a
positive or negative envelope set over a ±2 semitone range can help to further
enhance the sound. Typically, if you want the timbre to ‘bow’ and drop in pitch
as it plays then it’s best to use a positive envelope, while if you want to create a
timbre that exhibits a ‘sucking’ motion with the pitch rising it’s best to use a
negative envelope.
SWEEPING BASS (TYPICAL OF UK AND SPEED GARAGE)
The sweeping bass is typical of UK garage and speed garage tracks and consists of
a tight yet deep bass that sweeps in pitch and/or frequencies. These are created
with two oscillators, one set to a sine wave to add depth to the timbre while
the other is set to a sawtooth to introduce harmonics that can be swept with a
filter. These are commonly detuned from one another, but the amount varies
depending on the type of timbre required. Hence, it’s worth experimenting by
first setting them apart by 3 cents and increasing this gradually until the sound
becomes as thick as you need for the track.
As with all bass sounds, they should start the moment the key is depressed, so
the amp EG’s attack is set to zero along with the sustain, but the release and
decay should initially be set midway. The decay setting provides the ‘pluck’
while the release can be modified to suit the motif being played from the
sequencer. The filter is set to low-pass, as you want to remove the higher
harmonics from the signal (as opposed to removing the lower frequencies first),
and the cut-off, along with the resonance, is adjusted so that they both sit
approximately halfway between fully exposed and fully closed. Ideally, the
filter should be controlled with a filter envelope using the same settings as the
amp EG, but to increase the ‘pluck’ of the sound it’s beneficial to adjust the
attack and decay so that they’re slightly longer than the amplifier’s settings.
Finally, positive filter key follow should be employed so that the filter will track
the pitch of the notes being played, which helps to add more movement.
These settings will produce the basic timbre, but it will benefit from
pitch-shifting and/or filter movements. The pitch shifting is accomplished,
somewhat unsurprisingly, by modulating both oscillators with a pitch envelope
set to a fast attack and medium decay but, if possible, the pitch bend range
should be limited to ±2 semitones to prevent it from going too wild. If you
decide to modulate the filter then it’s best to use an LFO with a sawtooth that
ramps upwards so that the filter opens, rather than decays, as the note plays.
The depth of the LFO can be set to maximum so that it’s applied fully to the
waveform and the rate should be set so that it sweeps the note quickly. What’s
more, if the notes are being played in succession it’s prudent to set the LFO to
retrigger on key press, otherwise it will only sweep properly on the first note
and any successive notes will be treated differently depending on where the
LFO is in its current cycle.
TECHNO ‘ KICK ’ BASS
Although a bass is not always used in techno, if it is, it’s usually kept short and
sharp so as not to get in the way of the various rhythms. That said, as there are
few leads employed, any bass that is used is programmed so that it’s quite deep
and powerful, since it plays a major role in the music.
To create this type of bass requires four oscillators stacked together, all using
the same waveform. Typically sawtooth waveforms are used, but square,
triangle and sine waves can also work equally well so long as the oscillators
used are all of the same waveform. One waveform is kept at its original pitch
while the other three are detuned from this, and each other, as far as possible
without sounding like individual timbres (i.e. less than 20 Hz). Obviously, the
sound needs to begin the moment the key is struck, so the resulting timbre
is sent to an amp EG with a zero attack along with a zero sustain and release
with a medium decay setting. Typically, a techno bass also exhibits a ‘whump’
at the decay stage, which can be introduced by modulating the pitch of all the
oscillators with an attack/decay envelope. Obviously this uses a fast attack so
that the pitch modulation begins at the start of the note, but the decay setting
should be set just short of the decay used on the amp EG. Usually, the pitch
modulation is positive so that the ‘whump’ is created by moving the pitch
downwards, but it’s worth experimenting by setting this to negative so that it
sweeps upwards. Additionally, if the synthesizer offers the option to adjust the
slope of the envelopes, a convex decay is used, but experimentation is the real
key with this type of bass and, in some cases, a concave envelope may produce
more acceptable results. Filter key follow is rarely used as the bass tends to
remain at one key, but if your motif moves up or down in the range, it’s
prudent to use a positive key follow to introduce some movement into the riff.
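The four-oscillator stack with its pitch ‘whump’ can be sketched as four saws, three detuned by under 20 Hz, all sharing a fast downward pitch envelope. The detune offsets and the 80 ms whump length are assumptions.

```python
SR = 44100  # assumed sample rate

def techno_bass(freq=55.0, dur_ms=300):
    """Four saws: one at pitch, three detuned by under 20 Hz, with a fast
    positive pitch envelope providing the 'whump'."""
    n = int(SR * dur_ms / 1000)
    detunes = [0.0, 7.0, -11.0, 15.0]       # Hz offsets, all under 20 Hz
    phases = [0.0] * 4
    whump_n = int(0.08 * SR)                # pitch falls over ~80 ms
    out = []
    for i in range(n):
        bend = 30.0 * max(0.0, 1.0 - i / whump_n)   # extra Hz at the start
        s = 0.0
        for k, d in enumerate(detunes):
            phases[k] += (freq + d + bend) / SR     # phase accumulation
            s += 2.0 * (phases[k] % 1.0) - 1.0      # naive sawtooth
        out.append(0.25 * s * (1.0 - i / n))        # mix and decay envelope
    return out
```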
TRANCE ‘DIGITAL’ BASS
This bass is typical of those used in many trance tracks and, while it doesn’t
exhibit a particularly powerful bottom end, it does provide enough of a bass
element without being so rich in harmonics that it interferes with the
characteristic trance lead. It requires two oscillators, both set to square waves
and detuned from each other by 3 cents to produce the basic tone. A low-pass
filter is used with the cut-off set so that it’s almost closed and the resonance
is pushed up so that it sits just below self-oscillation. The sound, as always,
needs to start immediately on key press, so the amp’s attack is set to zero along
with both the release and the sustain. The decay should be set about midway
between being fully exposed and fully closed. The filter envelope emulates
these amp settings using a zero attack, sustain and release, but the decay should
be set so that it’s slightly shorter than the amp’s decay so that it produces a
resonant pluck. Finally, filter key follow is applied so that the filter follows
the pitch across the bass motif. Once constructed, if the bass is too resonant it
can be tamed by reducing the filter’s decay to make the ‘pluck’ tighter or,
alternatively, you can lower the resonance and increase the filter’s cut-off.
‘POP’ BASS
The pop bass is commonly used in many popular music tracks, hence the
name, but it is also useful for some house and trance mixes where you need
some ‘bottom-end’ presence but at the same time don’t want it to take up too
many of the frequencies available in the mix. These are easily created in most
synthesizers by using a sawtooth and a triangle oscillator, with the triangle
transposed up by an octave from the sawtooth. The amp envelope is set to an
on/off status whereby the attack, decay and release are all set to zero with the
sustain set just below maximum. This means that the sound almost
immediately jumps into the sustain portion, which produces a constant bass
tone for as long as the key is depressed (remember that the sustain portion
controls
the volume level of the sustain and not its length!). To add a ‘pluck’ to the
sound, a low-pass filter is usually set very low to begin with, while the
resonance is increased as high as possible without forcing the filter into
self-oscillation. The filter envelope is then set to a zero attack, release and
sustain, but the decay is set quite long so that it encompasses the sustain
portion of the amp’s EG, in effect producing a resonant pluck to the bass. By
increasing the depth of the filter envelope along with the filter’s cut-off and
resonance, you can control how resonant the bass becomes. Depending on the
motif that this bass plays, it may also be worth employing some filter
key-tracking so that the filter follows the pitch of the bass.
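To make these envelope settings concrete, here is a minimal sketch of a linear ADSR generator (my own simplified model; real synthesizers use a variety of curve shapes). With zero attack, decay and release and a high sustain, the amp envelope described above behaves as an on/off gate, while the long-decay filter envelope falls away through the held note to produce the pluck:

```python
def adsr(attack, decay, sustain, release, gate_len, total_len, step=0.001):
    """Generate a linear ADSR envelope as a list of levels.
    Times are in seconds; sustain is a level from 0.0 to 1.0.
    The key is held for gate_len seconds out of total_len."""
    env = []
    t = 0.0
    while t < total_len:
        if t < gate_len:                      # key held down
            if t < attack:
                level = t / attack            # ramp up to full level
            elif t < attack + decay:
                frac = (t - attack) / decay   # fall from 1.0 to sustain
                level = 1.0 - frac * (1.0 - sustain)
            else:
                level = sustain
        else:                                 # key released
            rel_t = t - gate_len
            level = sustain * max(0.0, 1.0 - (rel_t / release
                                              if release > 0 else 1.0))
        env.append(level)
        t += step
    return env

# Pop-bass amp envelope: instant on, sustain just below maximum.
amp = adsr(0.0, 0.0, 0.9, 0.0, gate_len=0.5, total_len=0.6)
# Filter envelope: zero attack/sustain/release but a long decay,
# so it sweeps downwards through the held note to create the pluck.
filt = adsr(0.0, 0.4, 0.0, 0.0, gate_len=0.5, total_len=0.6)
```

Plotting the two lists side by side shows the amp holding steady at 0.9 while the filter envelope decays away underneath it, which is exactly the relationship that produces the resonant pluck.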
GENERIC BASS GUIDELINES
Although we’ve looked at the main properties that contribute towards the
creation of bass timbres and covered how to construct the most commonly
used basses throughout the dance genres, simply creating these sounds will
not always produce the right results. Indeed, much of the time you’ll find
that the bass is too heavy without enough upper harmonics, or too light
without the right amount of depth. In these cases it’s useful to use different
sonic elements from different synthesizers to construct a patch – a process
known as layering. Essentially, this means that you construct a bass patch in
one synthesizer, then create one in another (and possibly another and so
forth), and then layer them all together to produce a single patch.
It’s important to understand, though, that these additional layers should not
be sourced from the same synthesizer, and the filter settings for each
consecutive layer should be different. The reason is that all synthesizers
sound tonally different from one another, even if they use the same parameter
settings. For instance, if an analogue (or analogue-emulated) synthesizer is
used to create the initial patch, using an analogue emulation from another
manufacturer (or a digital synthesizer) to create another bass will produce a
timbre with an entirely different character. If these are layered on top of one
another and the respective volumes adjusted, you can create a more substantial
bass timbre. Ideally, to prevent the mixed timbres from becoming too
overpowering in the mix, it’s quite usual to also employ different filter types
on each synthesizer. For example, if the first synthesizer is producing the
low-frequency energy but lacks any top end, the second synthesizer should use
a high-pass filter. This allows you to remove the low-frequency elements from
the second synthesizer so that it’s less likely to interfere with the harmonics
from the first. This form of layering is often essential in producing a bass
with the right amount of character, and is one of the reasons why many
professional artists and studios have a number of synthesizers at their disposal.
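As a rough illustration of why the complementary filtering matters, the sketch below low-passes one layer and high-passes the other before summing, so the two occupy different regions of the spectrum. The one-pole filters here are a deliberate simplification of real synthesizer filters, and the signals are stand-ins for the rendered layers:

```python
import math

def one_pole_lowpass(signal, alpha):
    """Simple one-pole low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    out, y = [], 0.0
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

def one_pole_highpass(signal, alpha):
    """High-pass as the residue of the low-pass: x[n] - lowpass(x)[n]."""
    low = one_pole_lowpass(signal, alpha)
    return [x - l for x, l in zip(signal, low)]

sr = 8000
n = 2000
# Layer one: a low sine (the sub energy); layer two: a bright sawtooth.
low_layer = [math.sin(2 * math.pi * 55 * i / sr) for i in range(n)]
saw_layer = [2.0 * ((440 * i / sr) % 1.0) - 1.0 for i in range(n)]

# Keep only the lows of layer one and only the highs of layer two,
# then sum them into a single composite bass.
composite = [a + b for a, b in zip(one_pole_lowpass(low_layer, 0.1),
                                   one_pole_highpass(saw_layer, 0.1))]
```

Because the high-pass strips the sawtooth layer of its low-frequency content, its fundamental no longer fights the sine layer for the same part of the spectrum.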
It’s also worth bearing in mind that it’s inadvisable to apply any
stereo-widening effects to bass timbres, because the bass should sit in the
centre of the mix (for reasons we’ll touch upon in the mixing chapter). It is,
however, sometimes worthwhile applying controlled distortion to increase the
harmonic content. As some basses are constructed from sine waves that contain
no harmonics, they can become lost in the mix, but the upper harmonics
introduced by distortion can help the bass cut through. This distortion can
then be accurately controlled using filters or EQ to mould the bass to the
timbre you require. What’s more, some effects such as flangers or phasers
require additional harmonics to make any noticeable difference to the sound,
something that can be accomplished by applying distortion before the flanger
or phaser. Of course, if these effects are applied, it’s sensible to ensure
they’re applied in mono, not stereo, to prevent the bass from becoming spread
across the stereo image.
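The idea of distortion adding harmonics to a plain sine can be sketched with a soft clipper. A tanh waveshaper is used here as one common choice, not necessarily what any particular unit uses; driving a pure sine through it generates the odd harmonics that help a sub bass cut through:

```python
import math

def soft_clip(signal, drive):
    """Tanh waveshaper: higher drive pushes a sine towards a
    square-ish shape, adding odd harmonics; output stays in [-1, 1]."""
    return [math.tanh(drive * x) / math.tanh(drive) for x in signal]

def harmonic_level(signal, sr, freq):
    """Amplitude of one frequency component via a single DFT correlation."""
    re = sum(x * math.cos(2 * math.pi * freq * i / sr)
             for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * i / sr)
             for i, x in enumerate(signal))
    return 2.0 * math.hypot(re, im) / len(signal)

sr, f0, n = 8000, 100, 8000          # one second of a 100 Hz sine
sine = [math.sin(2 * math.pi * f0 * i / sr) for i in range(n)]
driven = soft_clip(sine, 4.0)

clean_h3 = harmonic_level(sine, sr, 3 * f0)     # ~0: a sine has no overtones
driven_h3 = harmonic_level(driven, sr, 3 * f0)  # clearly non-zero after clipping
```

Measuring the third harmonic before and after clipping shows the new energy that a subsequent filter or EQ can then shape, and that a flanger or phaser now has something to work on.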
Finally, as with the drum timbres, if you have access to a synthesizer that
allows you to adjust the linearity of the envelope’s attack and decay stages
you should certainly experiment with this. In fact, a convex or concave decay
on the filter and/or amp is commonly used to produce bass timbres with a
harder pluck, allowing them to pull through a mix better than a linear envelope.
The data CD contains audio examples of these timbres being programmed.
PROGRAMMING LEADS
Saying that basses are difficult to encapsulate is only the tip of the sound
design iceberg, since trying to define what makes a good lead is impossible.
Every track will use a different lead sound, ranging from Daft Punk’s distorted
or phased leads to the various plucks used by artists such as Paul Van Dyk,
through to the hundreds of variations on euphoric trance leads. Consequently,
there are no definitive methods for creating a lead timbre, but it is important
to take plenty of time producing one that sounds right. The entire track rests
on the quality of the lead, so it’s absolutely vital that it sounds precise.
However, while it’s impossible to suggest ways of creating new leads to suit
any one particular track, there are some rough generalizations that can be applied.
Firstly, most lead instruments will utilize a fast attack on the amp and filter
envelopes so that the sound starts immediately on key press, with the filter
introducing the harmonics that help it pull through a mix. The decay, sustain
and release parameters, though, will depend entirely on the type of lead and
sound you want to accomplish. For example, if the sound has a ‘pluck’
associated with it, then the sustain parameter, if used, will have to be set
quite low on both amp and filter EGs so that the decay parameter can drop down
to it to create the pluck. Additionally, the release parameter of the amp can
be used to determine whether the notes of the lead motif flow together or are
more staccato (keep in mind that staccato notes appear louder than those that
are drawn out). If the release is set so that the notes flow together, it isn’t
unusual to employ portamento on the synthesizer so that notes rise or fall into
the successive ones.
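Portamento is essentially a glide from one pitch to the next. One common implementation (an assumption here, since glide curves vary between synthesizers) interpolates in pitch space, i.e. exponentially in frequency, so each step of the slide covers an equal musical interval:

```python
def portamento(start_hz, end_hz, glide_time, sr=1000):
    """Glide from start_hz to end_hz over glide_time seconds,
    interpolating exponentially so every step covers an equal
    pitch interval rather than an equal number of hertz."""
    steps = int(glide_time * sr)
    ratio = (end_hz / start_hz) ** (1.0 / steps)
    return [start_hz * ratio ** i for i in range(steps + 1)]

# Slide from A2 up to A3 (an octave) over 100 ms -- halfway through
# the glide the pitch sits at the tritone, not at the arithmetic mean.
glide = portamento(110.0, 220.0, 0.1)
```

A linear sweep in hertz would spend too long at the bottom of the slide and rush the top, which is why the exponential shape sounds more natural.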
As the lead is the most prominent part of the track, it usually sits in the
mid-range, and is often bursting with harmonics that occupy this area. The
best approach is to build a harmonically rich sound by stacking sawtooth,
square, triangle and noise waveforms together, and to make use of the unison
feature (if the synthesizer has one). Unison stacks a number of the
synthesizer’s voices together to produce thicker and wider tones, but it also
reduces the polyphony available, so you have to keep an eye on how many voices
remain. Once a harmonically rich voice is created it can then be thinned, if
required, with the filters or EQ and modulated with the envelopes and LFOs.
These latter modulation options play a vital role in producing leads, as they
need some sonic movement to maintain interest.
Typically, these methods alone will not always provide a lead sound that is
rich or deep enough, so it’s worth employing a number of methods to make it
‘bigger’, such as layering, doubling, splitting, hocketing or residual
synthesis. We’ve already looked at some of the principles behind layering when
looking at basses, but with leads this can be stretched further, as there is
little need to keep the lead under any real control – its whole purpose is to
sit above every other element in the mix! Alongside layering the sounds in
other synthesizers, it’s often worth using different amp and/or filter
envelopes in each synthesizer. For example, one timbre could utilize a fast
attack but a slow release or decay, while the second layered sound could
utilize a slow attack and a fast decay or release. When the two are layered
together, the harmonic interaction between the two sounds produces very
complex timbres that can then be mixed together on a desk and EQ’d or filtered
externally to suit.
Doubling is similar to layering, but the two should not be confused, as
doubling does not use any synthesizer beyond the one you’re using to programme
the sounds. It involves copying the MIDI information of the lead motif to
another track in the MIDI sequencer and transposing the copy up to produce a
much richer sound. Usually transposing it up by a fifth or an octave will
produce musically harmonious results, but it is worth experimenting by
transposing it further and examining the effect this has on the sound. A
variation on this theme is to make a copy of the original lead and then
transpose only some notes of the copy rather than all of them, so that some
notes are accented.
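In MIDI terms, doubling is just a note-number offset applied to a copied track: +7 semitones for a fifth, +12 for an octave. A minimal sketch, using a hypothetical tuple representation of sequencer notes:

```python
# Each note is (midi_note, start_beat, velocity); middle C is note 60.
lead = [(57, 0.0, 100), (60, 1.0, 96), (64, 2.0, 110)]

def double(notes, semitones):
    """Return a transposed copy of the motif to layer over the original."""
    return [(note + semitones, start, vel) for note, start, vel in notes]

fifth_up = double(lead, 7)     # A3 -> E4, C4 -> G4, E4 -> B4
octave_up = double(lead, 12)   # the classic octave double

# Play the original and the copy together for the thicker sound.
combined = lead + octave_up
```

Transposing only a selection of the copied notes, as the variation above suggests, would simply mean applying the offset to a filtered subset of the list.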
Hocketing consists of sending successive notes from the same musical phrase to
different synthesizers, or to different patches within the same module, to
give the impression of a complex lead. Usually, you determine which
synthesizer receives each note through velocity, by setting one synthesizer or
patch to ignore velocity values below, say, 64, while the second synthesizer
or patch only accepts velocity values below 64. If this is not possible, then
simply copying the MIDI file onto different tracks and deleting the notes that
should not be sent to each synthesizer will produce the same results.
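The velocity split can be sketched as a simple router, again using the hypothetical (note, start, velocity) tuples of a sequencer track:

```python
def hocket(notes, threshold=64):
    """Split a phrase into two streams by velocity: notes at or above
    the threshold go to synth A, the rest to synth B, so the two
    instruments alternate through the phrase."""
    synth_a = [note for note in notes if note[2] >= threshold]
    synth_b = [note for note in notes if note[2] < threshold]
    return synth_a, synth_b

phrase = [(60, 0.0, 100), (62, 0.5, 40), (64, 1.0, 90), (65, 1.5, 30)]
loud, soft = hocket(phrase)   # accented notes to one synth, the rest to the other
```

Because every note lands in exactly one stream, the two patches interlock into a single apparent melody, which is the hocketing effect described above.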
Splitting and residual synthesis are the most difficult to implement but often
produce the best results. Fundamentally, splitting is similar in some respects
to layering, but rather than produce the same timbre on two different
synthesizers, a timbre is broken into its individual components, which are
then sent to different synthesizers. For example, you may have a sound that’s
constructed from a sine, a sawtooth and a triangle wave, but rather than have
one synthesizer produce all three, the sine may come from one synthesizer, the
triangle from another and the saw from yet another. These are all modulated in
different ways using the respective synthesis engines, but by listening
carefully to the overall sound through a mixing desk, the sound is constructed
and manipulated through the synthesizers’ parameters and the mixing desk as if
it were from one synthesizer. Residual synthesis, on the other hand, involves
creating a sound in one synthesizer and then using a band-pass or notch filter
to remove some of the central harmonics from the timbre. These are then
replaced using a different synthesizer or syn-