By this point, you should be able to find your way through the Csound manual discussions of additional unit generators and resources not covered in this tutorial, and, after some initial head-scratching, come to a working understanding of how to use these resources. From now on, our discussions will summarize major points and pitfalls, and will illustrate more by example than by verbiage. However, we will take a fairly detailed look at a few concepts, such as filter response curves, for which introductory literature is scarce.
This chapter deals primarily with filters - computer algorithms or hardware circuits that pass certain frequency bands while attenuating other bands. Among the most common types of filters are:

low pass filters, which attenuate frequencies above a cutoff point;
high pass filters, which attenuate frequencies below a cutoff point;
band pass filters, which attenuate frequencies above and below a band surrounding a center frequency; and
band reject (or "notch") filters, which attenuate frequencies within such a band.
Two important (and often related) characteristics of many filters are the cutoff frequency (usually specified as the half-power point, where the signal power is reduced by 3 dB) and the steepness of the rolloff slope (how rapidly attenuation increases beyond this point).
A low pass filter, for example, supplies increasing attenuation to frequencies above some point, until "total suppression" (generally defined as 60 dB attenuation) is reached. A very sharp low pass filter (such as the smoothing filters of high quality DACs and analog-to-digital converters) may produce total elimination of frequencies 1/3 of an octave above the cutoff point. In a low pass filter with a more gradual roll-off slope, on the other hand, the difference between the cutoff frequency and the frequency where total suppression is reached might be five octaves.
Filters also are the basic building blocks of reverberators. Filtering generally introduces a phase shift and also a time delay in the audio signal. Conversely, combining two or more delayed versions of a signal always results in some filtering (partial or complete elimination of some frequencies, and reinforcement of other frequencies). Later in this chapter we will consider the Csound reverberant filters comb and alpass, and their combination in unit generator nreverb.
[ See the discussion of TONE and ATONE in the Csound reference manual ]
tone and atone are, respectively, Csound implementations of first order digital low-pass and high-pass filters. The two arguments to each of these filters are the input audio signal and the half-power point, in hertz - the cutoff frequency at which the power of the signal is reduced by one half (3 dB).
tone progressively attenuates all frequencies above 0 hertz, while atone applies progressively greater attenuation to all frequencies below the Nyquist. At the half-power point, an input frequency component is reduced by 3 dB (half of its original power, or about 71 % of its original amplitude), and the rolloff curve is 6 dB per octave -- not very sharp, as can be seen in the following table.
tone input     atone input     dB           fraction of          output amplitude
frequency      frequency       attenuation  amplitude remaining  (input = 32000)
0 Hz           Nyquist           0          1.                   32000
H.P.           H.P.             -3          .707                 22624
2 * H.P.       1/2 * H.P.       -6          .5                   16000
3 * H.P.       1/3 * H.P.       -9          .355                 11360
4 * H.P.       1/4 * H.P.      -12          .25                   8000
8 * H.P.       1/8 * H.P.      -18          .125                  4000
16 * H.P.      1/16 * H.P.     -24          .063                  2016
32 * H.P.      1/32 * H.P.     -30          .032                  1024
Thus, if we assume an H.P. (half-power) point of 200 hertz, tone will reduce a frequency of 400 hertz (2 * H.P.) by 6 dB, or to about 50 % of its raw unfiltered amplitude level. A sine tone at 400 hertz with an original amplitude of 32000 would have an amplitude of about 16000 after being run through this filter. A frequency of 6400 hertz (32 * the 200 hertz H.P. point) would be attenuated by 30 dB, to about .032 of its original amplitude. If the input amplitude of this 6400 hertz sine tone was 32000, the output amplitude would be about 1024.
Similarly, with a 1000 hertz half power point, atone will attenuate a frequency of 333 hertz (1/3 * H.P.) by 9 dB, so that only about 36 % of the original amplitude remains. An input sine tone with an amplitude of 32000 would have an output amplitude of around 11360 after being run through this filter.
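The arithmetic behind these figures can be checked with a few lines of Python (a sketch for illustration only; the function name is ours, not Csound's):

```python
def db_to_fraction(db_attenuation):
    """Fraction of the original amplitude remaining after a given dB attenuation."""
    return 10 ** (-db_attenuation / 20)

# tone with a 200 hertz half-power point:
# 400 hertz (2 * H.P.) is attenuated by 6 dB
print(round(32000 * db_to_fraction(6)))   # close to the table's 16000
# 6400 hertz (32 * H.P.) is attenuated by 30 dB
print(round(32000 * db_to_fraction(30)))  # close to the table's 1024
```

The small discrepancies from the table come only from the table's rounding of the amplitude fractions.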
Running a signal through tone or atone will always cause some loss in amplitude. The greater the effect of the filtering, the greater the amplitude reduction.
In the following example, a time-varying low-pass filter is applied to the digitized tamtam soundfile from the /sflib/x directory :
; #############################################################
; soundfile ex5-1 : Tone                    Eastman Csound Tutorial
; #############################################################
sr=44100
kr=2205
ksmps=20
nchnls=1
; mono soundfile input:
; p4 = soundin. number
; p5 = ampfac
; p6 = skip from front of soundfile
; p7 = exponential fade in time
; p8 = exponential fade out time
; for low pass filtering of soundfiles
; p9 = 1st half-power point, p10 = 2nd h.p. point
; p11 = rate of change between p9 & p10, p12 = func. for change
instr 39
isfnum = p4
idur = p3
a1 soundin isfnum, p6
; amplitude envelope
iampfac = (p5 = 0 ? 1 : p5)
; fade-in & fade-out defaults & checks:
ip7 = (p7 = 0 ? .001 : p7)
ip8 = (p8 = 0 ? .001 : p8)
ip7 = (p7 < 0 ? abs(p7) * idur : ip7)
ip8 = (p8 < 0 ? abs(p8) * idur : ip8)
a3 expseg .01, ip7, iampfac, idur-(ip7+ip8), iampfac, ip8, .01
a1 = a1 * a3
; low pass filtering:
irate = (p11 = 0 ? 1/p3 : p11)
k1 oscili p10-p9, irate, p12
khp = k1 + p9     ; changing half-power point
a1 tone a1, khp
out a1
endin
-----------------------------------------------------------
< score11 file used to create "ex5-1" :
< functions for time-varying change in half power point:
*f1 0 65 7 0 64 1;
* f2 0 65 7 0 32 1. 32 0;
* f3 0 65 5 .01 32 1. 32 .01;
SF 0 0 4; < mono input, default mono output
p3 3.5;
du 303;
p4 5; < soundin.# : soundin.5 points to /sflib/perc/tam
p5 .9; < ampfac
p6 .5; < duration skipped from front of sf
p7 1.; < fade in time
p8 1.; < fade out time
< "tone" p-fields :
p9 nu 100/ 2000/ 2000/200; < 1st half-power point
p10 nu 3000/ 150 / 100/900; < 2nd " " "
p11 nu 0 / 0 / .75/4.; < rate of change (0 = 1/p3)
p12 nu 1 / 2 / 3 / / ; < function for change
end;
-----------------------------------------------------------
Appendix Csound score file examples : Chapter 5
Substituting atone for tone in this example would emphasize the higher frequencies of the tamtam spectrum. Since the rolloff curves of tone and atone are so gradual, these filters have the most effect on signals with a broad frequency spectrum that includes both low and high frequency components. The instrument algorithm above would have only marginal effect in reducing the brightness of crotales, or other sound sources with mostly high frequency energy.
For sharper filtering, Csound also provides the second order Butterworth low- and high-pass filters butterlp and butterhp. The arguments to butterlp and butterhp are identical to the arguments to tone and atone, but the rolloff is much sharper and the passband much flatter, so the effects of the filtering often are more pronounced. Try substituting butterlp for tone in ex5-1. The choice of whether to use a first order or second order Csound filter generally depends upon how much we wish to alter the timbral "brightness" of (or amount of high frequency energy within) a sound.
tone and atone are complementary filters. This means that in the following series of operations:

anoise rand 20000
alo tone anoise, ihp
ahi atone anoise, ihp
addem = alo + ahi

(where ihp is any shared half-power point) the resulting signal addem will be identical to the original white noise signal anoise. This can be very useful in varying the timbral "brightness" (and, often, the perceived "loudness" and "distance" as well) of sound sources:
asound diskin p4, p6, 0
alo tone asound, p7
ahi atone asound, p7
ibright = p8
abright = (ibright * ahi) + ((1. - ibright) * alo)
aout balance abright, asound
out aout

- - and p-field 8 in our score11 input file: - -

p9 nu 0 / .5 / 1.; < timbral brightness: 0, least bright, to 1., brightest

Here three soundfiles are read in by diskin (or, perhaps, the same source soundfile is read in three times). For the first soundfile, we take only the output of the low-pass filter (p9 = 0). For the second input sound, we mix an equal percentage of low- and high-pass filtered signals, equivalent to bypassing all filtering. And for the final output "note," we use only the high-pass variant. We also restore the amplitude lost in filtering with the call to unit generator balance (discussed in the next section). By choosing a value for p7 carefully (often setting it to 2 or 3 times the fundamental frequency of the sound source), and perhaps by substituting butterlp and butterhp for tone and atone, we should be able to adjust the spectral brightness of the sounds to meet our musical needs.
To introduce a crescendo or diminuendo within a single sound, we would need to substitute a time varying filter envelope, perhaps created with expseg or with an oscillator, for the constant ibright in the example above, and perhaps create a complementary amplitude envelope as well. Applying a periodic sinusoidal signal to the mixture of ahi and alo would create a tremolo effect.
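The complementarity of tone and atone can be illustrated with a simple one-pole model in Python (a sketch only: Csound's actual filter code differs in detail, and the coefficient and test signal here are arbitrary):

```python
def lowpass(sig, coef=0.3):
    """One-pole low-pass: each output moves a fraction `coef` toward the input."""
    out, y = [], 0.0
    for x in sig:
        y += coef * (x - y)
        out.append(y)
    return out

sig = [0.0, 1.0, 0.5, -0.25, -1.0, 0.75, 0.0, 0.5]
alo = lowpass(sig)                          # low-pass component
ahi = [x - l for x, l in zip(sig, alo)]     # complementary high-pass component
addem = [l + h for l, h in zip(alo, ahi)]   # the sum restores the original signal
```

Mixing alo and ahi in any other proportion, as the brightness crossfade above does, tilts the spectrum toward the low or high end without ever introducing a notch between the two bands.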
[ See the discussion of RMS, GAIN and BALANCE in the Csound reference manual ]
As in the example above, applying an envelope to the half-power point of a low- or high-pass filter often has the effect of applying the same envelope to the amplitude level of a signal. Sometimes this is desired, but often it is not. A succession of sounds with similar input amplitudes may have widely varying output amplitudes after filtering, and thus not sound "balanced," due to varying amounts of energy loss caused by the filtering. The unit generators rms, gain, and, especially, balance are useful in such situations.
rms is an envelope follower. Its output (which must be at the k-rate) is a control signal that tracks, or "follows," the root-mean-squared (average) amplitude level of some audio signal. gain modifies the root-mean-squared amplitude of an audio signal, providing either attenuation or an increase, so that the audio signal matches (roughly) the level of a control signal. balance is a combination of rms and gain. It tracks the amplitude envelope of a control (or "comparison") audio signal (specified in the second argument), then modifies the sample values of another audio signal (given in the first argument) to match the average rms level of the control (comparator) signal.
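In rough terms, balance amounts to something like the following block-based Python sketch (an approximation: the real opcode uses a smoothed, continuously updated rms measurement rather than independent blocks, and the block size here is arbitrary):

```python
import math

def rms(block):
    """Root-mean-squared (average) level of a block of samples."""
    return math.sqrt(sum(x * x for x in block) / len(block))

def balance(audio, comp, blocksize=441):
    """Rescale each block of `audio` so its rms level matches that of `comp`."""
    out = []
    for i in range(0, len(audio), blocksize):
        ablock = audio[i:i + blocksize]
        cblock = comp[i:i + blocksize]
        ra, rc = rms(ablock), rms(cblock)
        gain = rc / ra if ra > 0 else 0.0
        out.extend(x * gain for x in ablock)
    return out
```

Note that only the average level of the comparator signal is imposed; the waveform (and hence the timbre) of the first signal is preserved.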
In the following example, we have modified the end of the previous instrument to "balance" the filtered output signal level against that of the unfiltered original tamtam. The same score file is used.
; #############################################################
; soundfile ex5-2 : Tone & Balance (same score as ex5-1)
; #############################################################
instr 39
isfnum = p4
idur = p3
a1 soundin isfnum, p6
; amplitude envelope
iampfac = (p5 = 0 ? 1 : p5)
; fade-in & fade-out defaults & checks:
ip7 = (p7 = 0 ? .001 : p7)
ip8 = (p8 = 0 ? .001 : p8)
ip7 = (p7 < 0 ? abs(p7) * idur : ip7)
ip8 = (p8 < 0 ? abs(p8) * idur : ip8)
a3 expseg .01, ip7, iampfac, idur-(ip7+ip8), iampfac, ip8, .01
a1 = a1 * a3
; low pass filtering:
irate = (p11 = 0 ? 1/p3 : p11)
k1 oscili p10-p9, irate, p12
khp = k1 + p9     ; changing half-power point
afilt tone a1, khp
a1 balance afilt, a1
out a1
endin
In this particular instance, the audible difference between ex5-1 and ex5-2 is not all that striking. We still hear the filter envelope, and when most of the higher frequencies are filtered out, we perceive a psychologically "softer" sound, even though the "balanced" amplitude level does not change very much. However, there are other situations in which balance is handy in restoring lost amplitude.
balance can also be a useful signal processor in its own right. In the instrument below, it is used to impose the envelope of one soundfile onto another soundfile:
; #############################################################
; soundfile ex5-3 : Balance used as envelope follower
; #############################################################
sr=44100
kr=4410
ksmps=10
nchnls=1
; p4 = soundin.# number of audio soundfile
; p5 = amplitude multiplier
; p6 = skip from front of audio soundfile
; p7 = optional exponential fade in time
; p8 = optional exponential fade out time
; p9 = soundin.# number of control soundfile
; p10 = skip time into control soundfile
instr 1
audio soundin p4, p6    ; read in the audio soundfile
aenv soundin p9, p10    ; read in the control soundfile
; impose control soundfile envelope on audio file
audio balance audio, aenv
; vary output amplitude from note to note :
p5 = (p5 = 0 ? 1. : p5)
audio = audio * p5
; optional fade-in & fade-out
itest = p7 + p8
if itest = 0 goto done
kfades expseg .01, p7, 1., p3-(p7 + p8), 1., p8, .01
audio = audio * kfades
done:
out audio
endin
-----------------------------------------------------------
< score11 file used to create soundfile example "ex5-3"
i1 0 0 8; < mono input, default mono output
p3 nu .3 /// 3.2 / 1.5 //4/7. ;
du nu 300.153/300.179/300.25/303./ 301.18 / 301.43 / 303.3/ 307.29;
p4 nu 6 /7 /8 /5; < audio soundfiles from /sflib/perc :
< 6 = plate1, 7 = gong.ef3 , 8 = cym1 , 5 = tam
p5 nu .2 / .6 / .4 / .9 / .9 * 4; < output amplitude multiplier
p6 nu 0/1./.4/2.; < duration skipped from front of sf
p7 0; < optional added fade in time
p8 0; < optional added fade out time
< envelope follower p-fields :
p9 nu 9 / 10 / 11 / 12 / < soundin # of control soundfiles
13 / 14 / 15 / 16; < mostly from /sflib/perc
< 9 = tb1 , 10 = tb2 , 11 = wb , 12 = crt.fs6
< 13 = bongo1.roll , 14 = sleighbells , 15 = maracaroll , 16 = voicetest
p10 0 ; < skip off front of control soundfile
end;
-----------------------------------------------------------
In this example, the amplitude envelopes of various soundfiles, mostly from the sflib/perc directory and specified in p9, are imposed upon other, more sustained percussive soundfiles, specified in p4.
It is also possible, of course, to mix some of the control soundfile signal into the audio output. In soundfile example ex5-3-2, we have modified the output statement of our previous orchestra file, replacing the penultimate line
with this line :
In the score file, we have added a p-field that controls the mix balance between the audio and control soundfiles:
;####################################################################
; soundfile ex5-3-2 : Mixture of audio and control soundfiles
;####################################################################
Note: The recently introduced follow unit generator can be used in place of balance to create a control signal from the amplitude variations of an audio signal, and provides the user with a few more hooks to perform this envelope extraction cleanly.
[ See the discussion of RESON and ARESON, and of BUTTERBP and BUTTERBR in the Csound reference manual ]
reson and butterbp are both band-pass filters (butterbp is a second order Butterworth design). The passband is a bell-shaped curve, with progressively greater amounts of attenuation applied to frequencies both above and below a central resonant frequency. The required arguments are the input audio signal, the center frequency (in hertz), and the bandwidth -- the distance, in hertz, between the half-power (-3 dB) points above and below the center frequency.
The frequency response produced by reson is :

                     center frequency
                        | passband |
         -3dB  ------- bandwidth -------  -3dB
        -6dB  ------ 2 * bandwidth ------  -6dB
       -9dB  ------- 3 * bandwidth -------  -9dB
      -12dB  ------ 4 * bandwidth ------  -12dB
                      (and so on)

Thus, the smaller the bandwidth, the sharper the filtering, and the more sharply defined the resonance.
With a center frequency (Fc) of 1200 hertz and a pass band of 200 hertz:

frequencies of 1100 and 1300 hertz (Fc +- 1/2 bandwidth) are attenuated by 3 dB;
frequencies of 1000 and 1400 hertz (Fc +- 1 bandwidth) are attenuated by 6 dB;
frequencies of 900 and 1500 hertz (Fc +- 1 1/2 bandwidths) are attenuated by 9 dB;
frequencies of 800 and 1600 hertz (Fc +- 2 bandwidths) are attenuated by 12 dB.
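This idealized response reduces to one line of arithmetic, sketched here in Python (a model of the response table above, not of the actual filter equations; the function name is ours):

```python
def reson_attenuation_db(freq, center, bandwidth):
    """Attenuation, in dB, for the idealized reson response: 3 dB at the
    edges of the passband, and 3 dB more for each additional half-bandwidth
    of distance from the center frequency."""
    return 6.0 * abs(freq - center) / bandwidth

# center frequency 1200 hertz, bandwidth 200 hertz:
print(reson_attenuation_db(1100, 1200, 200))  # 3.0 dB
print(reson_attenuation_db(1000, 1200, 200))  # 6.0 dB
print(reson_attenuation_db(800, 1200, 200))   # 12.0 dB
```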
butterbp, a second order alternative to reson, creates a flatter passband (less attenuation at frequencies of 1100 and 1300 hertz in the example above) and a much sharper rolloff.
areson is a complementary band-reject ("notch") filter, which produces the sharpest filtering (roughly -60 dB) at the center frequency and progressively less attenuation on either side. The output of this filter -- a "V-shaped" notch -- is the inverse of the output of reson. butterbr is Csound's second order notch filter.
Band reject filters are used less frequently than low- , high- and band-pass filters. A few decades ago, when ground loops were a more common problem in sound reinforcement and recording applications, analog band reject filters with very narrow notch bands were sometimes used to reduce 60 hertz and 120 hertz (second harmonic) power supply hum. Today, band reject filters generally are used to notch out a portion of a complex frequency spectrum (e.g. a tam tam, or white noise), often creating a rather "hollow-sounding" output with a "hole in the middle." Note that a band pass filter with a center frequency of 0 hertz is a low-pass filter, and a band pass filter centered at the Nyquist frequency is a high-pass filter.
The optional fourth argument to reson and areson, iscl, is an amplitude scaling factor with three possible values: 0, 1 or 2. A value of 1 scales the filter so that its peak response, at the center frequency, is 1, with all other frequencies attenuated relative to this point. A value of 2 scales the overall RMS response of the filter to 1. A value of 0 applies no scaling at all, so that the raw filter gain (which can be very large) is passed on unaltered.
In general, a 1 is recommended for the iscl argument. If you want to restore lost amplitude after filtering, follow reson or areson with unit generator balance, as in ex5-6.
In the following example, both the center frequency and the bandwidth values remain constant during a note :
; #############################################################
; soundfile ex5-4 : Reson filtering white noise    Csound Tutorial
; #############################################################
sr = 44100
kr = 2205
ksmps = 20
nchnls = 1
; p4 = center frequency
; p5 = bandwidth
instr 1
kamp expseg 1, p6, 8000, p3-(p6+p7), 5000, p7, 1
anoise rand kamp                  ; white noise source audio signal
aout reson anoise, p4, p5, 1      ; (note iscl scalar argument = 1)
out aout
endin
-----------------------------------------------------------
< score for ex5-4
i1 0 0 5;
p3 rh 4;
p4 1000; < center frequency
p5 nu 15/ 100/ 500/ 1500/ 5000; < bandwidth
p6 .25; < attack time
p7 .25; < decay time
end;
-----------------------------------------------------------
In the next example, both the center frequency and the bandwidth vary within each note :
; #############################################################
; soundfile ex5-5 : center freq. & bandwidth vary during each note
; #############################################################
sr = 44100
kr = 2205
ksmps = 20
nchnls = 1
; p4 = center frequency
; p5 = bandwidth
instr 1
kamp expseg 1, p6, 8000, p3-(p6+p7), 5000, p7, 1
anoise rand kamp
kbw expseg p8, .5 * p3, p9, .5 * p3, p10   ; bandwidth envelope
p4 = (p4 < 15 ? cpspch(p4) : p4)
p5 = (p5 < 15 ? cpspch(p5) : p5)
kcf expon p4, p3, p5                        ; center frequency
a1 reson anoise, kcf, kbw * kcf, 1
out a1
endin
-----------------------------------------------------------
< score11 file for ex5-5
i1 0 0 3;
p3 4;
du .95;
p4 no a3/ d7/ c2; < center frequency 1 (beginning)
p5 no a5/ ef3/ fs7; < center frequency 2 (end)
p6 .25; < attack time
p7 .25; < decay time
< all bandwidths are multipliers for center frequency
p8 nu 5./ .2/ 8.; < bandwidth 1 (beginning)
p9 nu 2./ 9./ 1.; < bandwidth 2 (middle of note)
p10 nu .1/ 3./ 10.; < bandwidth 3 (end of note)
end;
-----------------------------------------------------------
Many acoustic sounds, including the human voice, most aerophones and chordophones, and many membranophone and other percussive sounds, produce complex frequency spectra resulting from the amplification and filtering of source vibrations by a resonator. The resonator vibrates sympathetically, greatly increasing the amplitude, but in a frequency selective manner, responding much more to some frequencies than to others. Complex resonators such as the sounding board of a piano or harp, or the wooden body of a violin or cello, have many resonances of varying strengths, and occasionally one or two antiresonances as well. The complex frequency response of such resonators may resemble the silhouette of a descending mountain range -- a combination of low pass filtering with many narrow band pass "spikes." The air inside the violin or cello body acts as a simple resonator, producing a formant (strong resonance) within a fairly narrow pass band. Similarly, the throat, mouth, tongue and nasal passages sharply filter human speech and singing, providing both individual vocal timbres (that help us distinguish the voice of Pavarotti from the voice of, say, Mick Jagger) and also creating the formants we associate with particular vowels.
By splitting an audio signal into several paths, each routed through a band-pass filter to produce a particular formant, we can simulate such complex resonant responses. The filters generally should be applied in parallel (with the input to each the unfiltered original signal), since if applied in series the filters largely would cancel each other out.
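A parallel formant network of this kind can be sketched in Python with a textbook two-pole resonator (an assumption: Csound's reson differs in its exact coefficients and scaling, and the formant values below are illustrative, not taken from the tables in this chapter):

```python
import math

def resonator(sig, cf, bw, sr=44100):
    """Textbook two-pole band-pass resonator: center frequency cf, bandwidth bw."""
    r = math.exp(-math.pi * bw / sr)             # pole radius set by the bandwidth
    b1 = 2.0 * r * math.cos(2.0 * math.pi * cf / sr)
    b2 = -r * r
    scale = 1.0 - r                              # rough gain compensation
    out, y1, y2 = [], 0.0, 0.0
    for x in sig:
        y = scale * x + b1 * y1 + b2 * y2
        out.append(y)
        y1, y2 = y, y1
    return out

def formant_network(sig, formants, sr=44100):
    """Sum of parallel resonators, each fed the same unfiltered input.
    `formants` is a list of (center frequency, relative amplitude, bandwidth)."""
    out = [0.0] * len(sig)
    for cf, amp, bw in formants:
        band = resonator(sig, cf, bw, sr)
        out = [o + amp * b for o, b in zip(out, band)]
    return out
```

Because each resonator is driven by the original signal, their responses add; chaining them in series would instead multiply the responses, and the non-overlapping passbands would largely cancel one another.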
[ See the discussions of TABLE and of function generator GEN2 in the Csound reference manual]
Rather than typing in center frequencies and bandwidths repeatedly for several bandpass filters in our scores, it is generally more efficient to create tables of these values. The numbers within such tables can then be read in to the filter arguments as needed.
Unit generator table provides this capability to read in raw values from a function table. (table is the non-interpolating sibling of tablei, which we employed in ex3-6 to read in function tables filled with soundfile samples.)
The two required arguments to table are the index (the position within the function table, counting from 0, of the value to be read) and the number of the function table to be read.
Function generator gen02 allows us to type in the exact values we wish to place within a table. If we want a table of five numbers, we need a table size of "8" (since function tables must be powers-of-two or powers-of-two plus one). One other minor problem is that by default most Csound gen routines, including gen02, normalize the values within the tables they create to a maximum value of floating point "1." Thus, the following call to gen02
would be normalized, with "1720" becoming "1." in the table and the first four values scaled proportionately. Preceding the call to gen02 with a minus sign, however :
will cause the normalization procedure to be skipped, giving us precisely the five integer values we have asked for in locations 0 through 4 of function table "90."
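The required table size can be computed mechanically -- the smallest power of two that will hold the values. A small helper of our own, for illustration:

```python
def gen_table_size(nvalues):
    """Smallest power of two large enough to hold nvalues entries.
    (Csound function table sizes must be a power of two, or a
    power of two plus one.)"""
    size = 1
    while size < nvalues:
        size *= 2
    return size

print(gen_table_size(5))   # 8
print(gen_table_size(10))  # 16
```

The unused slots at the end of the table are simply filled with zeros.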
With all of the foregoing clear (yes?), we present the following instrument algorithm and score, in which we run an alternating series of white noise, pulse train and digitized cymbal audio signals through a filter network that imposes a series of vowel-like formants upon these three sound sources.
; #############################################################
; soundfile ex5-6 : 5 band-pass filters used to create vowel-like formants
; #############################################################
sr = 44100
kr = 2205
ksmps = 20
nchnls = 1
instr 1
kamp envlpx p5, p6, p3, p7, 1, p8, .01   ; amplitude envelope
; ==== get source audio signal, determined by p9 : white noise, pulse train or cymbal
if p9 > 0 goto pulse
; if p9 is 0, use white noise for audio source signal
asource rand kamp       ; white noise source signal
goto filter
; - - - - - - - - - - -
pulse:
if p9 = 2 goto soundfile
; if p9 = 1 , generate a pulse train source signal :
ipitch = cpspch(p4)     ; pitch for buzz
if ipitch > 500 igoto hinote   ; setting number of harmonics in buzz
iharmonics = 20         ; use 20 harmonics for pitches below Bb 4
igoto makebuzz
hinote:
iharmonics = int((sr*.5)/ipitch)   ; for higher pitches, as many partials as
                                   ; possible up to the nyquist frequency
makebuzz:
asource buzz kamp, ipitch, iharmonics, 100   ; pulse train source signal
goto filter
; - - - - - - - - - - -
soundfile:
; if p9 = 2 , use soundfile for source audio signal
asource soundin p12, p13
asource = asource * (kamp/32767)   ; impose new envelope on soundfile
; - - - - - - - - - - -
filter:
; FILTER CENTER FREQUENCIES & RELATIVE AMPS. FROM FUNCTIONS 80-87, 90-94
iformant1 table 0, p10   ; 1st formant frequency
iformant2 table 1, p10   ; 2nd " "
iformant3 table 2, p10   ; 3rd " "
iformant4 table 3, p10   ; 4th " "
iformant5 table 4, p10   ; 5th " "
iamp1 table 5, p10       ; relative amplitude of 1st formant
iamp2 table 6, p10       ; " " " " 2nd
iamp3 table 7, p10       ; " " " " 3rd
iamp4 table 8, p10       ; " " " " 4th
iamp5 table 9, p10       ; " " " " 5th
; 5 BANDPASS FILTERS TO SUPPLY THE 5 FORMANTS
a2 reson iamp1*asource, iformant1, 1.2*p11 * iformant1, 1
a3 reson iamp2*asource, iformant2, 1.05*p11 * iformant2, 1
a4 reson iamp3*asource, iformant3, .9*p11 * iformant3, 1
a5 reson iamp4*asource, iformant4, .8*p11 * iformant4, 1
a6 reson iamp5*asource, iformant5, .7*p11 * iformant5, 1
aformants = a2 + a3 + a4 + a5 + a6
aout balance aformants, asource   ; restore amplitude lost in filtering
out aout
endin
-----------------------------------------------------------
< score11 file for ex5-6
* f1 0 65 7 0 40 .75 4 .70 20 1.; < envlpx rise function
* f100 0 2048 10 1.; < sine wave for "buzz"
< tables of resonances for vowels
< indices 0-4 = frequencies, indices 5-9 = relative amplitude of these frequencies
< functions 80-87 for MALE voice
* f80 0 16 -2 609 1000 2450 2700 3240 1. .25 .063 .07 .004; < A as in hot
* f81 0 16 -2 400 1700 2300 2900 3400 1. .125 .18 .07 .01; < E as in bet
* f82 0 16 -2 238 1741 2450 2900 4000 1. .01 .029 .01 .001; < IY as in beet
* f83 0 16 -2 325 700 2550 2850 3100 1. .063 .002 .007 .002; < O as in beau
* f84 0 16 -2 360 750 2400 2675 2950 1. .063 .001 .003 .001; < OO as in boot
* f85 0 16 -2 415 1400 2200 2800 3300 1. .063 .027 .015 .002; < U as in foot
* f86 0 16 -2 300 1600 2150 2700 3100 1. .037 .063 .031 .005; < ER as in bird
* f87 0 16 -2 400 1050 2200 2650 3100 1. .064 .011 .009 .001; < UH as in but
< functions 90-94 for FEMALE voice
* f90 0 16 -2 650 1100 2860 3300 4500 1. .16 .05 .063 .012; < A as in hot
* f91 0 16 -2 500 1750 2450 3350 5000 1. .125 .1 .041 .005; < E as in bet
* f92 0 16 -2 330 2000 2800 3650 5000 1. .042 .08 .11 .012; < IY as in beet
* f93 0 16 -2 400 840 2800 3250 4500 1. .063 .003 .004 .001; < O as in beau
* f94 0 16 -2 280 650 2200 3450 4500 1. .015 .001 .001 .001; < OO as in boot
i1 0 39.;
p3 rh 4;
du .95;
p4 no ef3*24/f4*15;
p5 12000;
p6 .2; < envlpx attack time
p7 .3; < decay time
p8 .7; < atss
p9 nu 0/1/2; < audio signal switch : 0 = noise, 1 = buzz, 2 = soundfile
p10 nu 80*3/81*3/82*3/83*3/84*3/85*3/ < vowel function
86*3/87*3/90*3/91*3/92*3/93*3/94*3/;
p11 .05; < bandwidth (scaled by register)
p12 8; < soundin.# number - used only when p9=2 {soundfile input}
< soundin.8 points to /sflib/perc/cym1
p13 0; < optional skip into p12 soundfile
end;
-----------------------------------------------------------
The vowel functions in the score above are available in the Eastman Csound Library file vowelfuncs for your use (or abuse). To obtain a copy, type
In ex5-6, the center frequencies and bandwidths of the vowel-like resonances both remain fixed, and the audible result is thus somewhat "flat." If, instead, we "move these resonances around" somewhat, by applying random deviation or periodic control signals to the center frequencies, bandwidths or both, the resonances may have more "life."
[ See the discussions of COMB, ALPASS and REVERB in the Csound reference manual]
comb and alpass filters send an audio signal through a delay line that includes a feedback loop. Very short delay times (less than 40 milliseconds, and often less than 10 ms.) are normally used, resulting in multiple repetitions (too fast to be heard as echoes) which fuse together to form a reverberant response. The arguments to both of these unit generators are the input audio signal, the reverberation time (the time required for the signal to die away, defined as a decay of 60 dB), and the loop (delay) time.
Comb filters tend to add strong coloration, often of a metallic quality, to an audio signal. Owing to the fixed delay and loop (feedback) time, the reiterations of some frequencies will be in phase (and thus increased in amplitude), while other frequencies will be out of phase, leading to partial or total cancellation. The resulting frequency response of the filter is an alternating series of equally-spaced, equal-strength peaks and nulls. If graphed, this response looks somewhat like the teeth of a comb, but actually more like a repeating triangle wave. The number of peaks (and of nulls) is equal to the loop time multiplied by the sampling rate, divided by 2.
Thus, with a sampling rate of 44100 and a loop time of .02, the response of the comb will include 441 peaks and 441 nulls (.02 * 44100 / 2). Each peak (and each null) will be spaced 50 hertz apart, from 0 hertz to the Nyquist frequency (here, 22050 hertz). Since these peaks are harmonically related, the output of comb often will have a pitched twang at a fixed frequency of 50 hertz (see table below). This is highly undesirable if one wants "natural-sounding" reverberation, and comb filters are a poor choice for this purpose. Rather, they are useful for particular coloristic effects, and as building-blocks in the construction of more sophisticated reverberators.
The following table indicates the frequency of the lowest peak for comb filters with delay and feedback times between 1 and 20 milliseconds. All other peaks are harmonic multiples of this frequency (2*, 3*, etc.). Note that although the number of peaks varies with different sampling rates (here, SR = 44100 and 22050), the sampling rate has no effect on the frequencies of the peaks (4th column) for any given delay-loop time.
loop-feedback    # of peaks     # of peaks     frequency of lowest peak, in hertz
time             (SR = 44100)   (SR = 22050)   (and spacing between peaks)
.001             22.05          11.025         1000
.002             44.1           22.05          500
.003             66.15          33.075         333.33
.004             88.2           44.1           250
.005             110.25         55.125         200
.006             132.3          66.15          166.67
.007             154.35         77.175         142.86
.008             176.4          88.2           125
.009             198.45         99.225         111.11
.01              220.5          110.25         100
.02              441.           220.5          50
A delay/feedback time of .0015 would produce a fundamental peak of 666.67 hertz.
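The relationships in the table reduce to two lines of arithmetic, sketched here in Python (function name ours):

```python
def comb_response(loop_time, sr):
    """Number of response peaks between 0 hertz and the Nyquist frequency,
    and the frequency (in hertz) of the lowest peak -- which is also the
    spacing between adjacent peaks."""
    npeaks = loop_time * sr / 2
    lowest = 1.0 / loop_time
    return npeaks, lowest

npeaks, lowest = comb_response(.02, 44100)
print(npeaks)         # 441.0
print(round(lowest))  # 50
```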
In the orchestra file used to create soundfile example ex5-7, we read in a soundfile (in this example, a portion of sflib/x soundfile voicetest), and then process this input with a comb filter. One problem when using any reverberant signal processor (such as Csound unit generators comb, alpass and reverb) is that the output duration must be longer than the duration of the input signal, in order to allow the concluding reverberant signal (which continues after the input has died away) to decay completely to zero amplitude. If we do not allow this extra time for the trailing reverberation, the end of each input sound may seem abrupt, and the loudspeakers may produce a click or pop.
Within the score file below, we set the output duration to .3 seconds longer than the desired audio input. Within the orchestra file, we create an amplitude envelope that allows us to control the output gain (the variable iamp), and, optionally, to apply a fade-in and/or fade-out to the input signal. At the end of this envelope, we tack on .3 seconds of silence (the final ".3 , .001" segment of the expseg line), to mute any input signal and allow the reverberation to decay:
; #############################################################
; soundfile ex5-7 : Comb filter                    Csound Tutorial
; #############################################################
sr=44100
kr=2205
ksmps=20
nchnls=1
; p4 = soundin.# sflink number
; p5 = amplitude multiplier
; p6 = skip from front of soundfile
; p7 = exponential fade in time
; p8 = exponential fade out time
; comb filter: p9 = comb reverb time , p10 = comb loop time
instr 1
inputdur = p3 - .3   ; duration of audio input (.3 seconds shorter than output duration)
audio soundin p4, p6
; output amplitude envelope: includes optional fade-in & fade-out and
; .3 seconds at the end to mute any input signal
iamp = (p5 = 0 ? 1 : p5)
ifadein = (p7 = 0 ? .001 : p7)
ifadeout = (p8 = 0 ? .001 : p8)
amp expseg .01, ifadein, iamp, inputdur-(ifadein+ifadeout), iamp, ifadeout, .01, .3, .001
audio = audio * amp
aout comb audio, p9, p10
out aout
endin
-----------------------------------------------------------
< score11 file used to create soundfile "ex5-7" :
< Score11 file used to create Eastman Csound Tutorial soundfile examples
< ex5-7 & ex5-8: comb & alpass filter examples
i39 0 0 7; < mono input, default mono output
p3 2;
du 301.8;
p4 16; < soundin.# : soundin.16 points to /sflib/x/voicetest
p5 .5; < ampfac
p6 < duration skipped from front of sf
p7 < fade in time
p8 .05; < fade out time
< User-added parameters:
p9 nu 0/.1/.25/.7/.3///; < comb or alpass reverb time
p10 nu .002////.005/.017/.033; < comb or alpass loop time
end;
<<<<end of score>>>>>>>>>>>>>
-----------------------------------------------------------
With alpass filters, the peaks and nulls, and the resulting coloration of the input sound, are less pronounced, especially when the reverberation time is low. ex5-8 was created by the same orchestra and score files as ex5-7, except that an alpass filter was substituted for the comb filter.
; #############################################################
; soundfile ex5-8 : Alpass filter of above score
; #############################################################
In both of the preceding examples, 100 % of the input soundfile was sent through the comb or alpass filter. More often, however, only a portion of the input signal (say, somewhere between 20% and 60 %) is sent to comb or alpass, and the remaining direct signal is sent straight out.
Csound unit generator reverb combines four comb filters in parallel, followed by two alpass filters in series:
                        |-----comb 1-----|
                        |-----comb 2-----|
audio input signal ---->+                +---alpass1---alpass2---> out
                        |-----comb 3-----|
                        |-----comb 4-----|
Each of the filters has a different, prime number loop time (none of which are related by simple ratios). As a result, the overall frequency response is relatively flat, without the obvious pitch coloration of individual comb filters. reverb has only two arguments: the audio signal input, and the reverberation time.[1]
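The block diagram above can be paraphrased in a few lines of Python. This is a structural sketch only -- the loop lengths below are illustrative primes of our own choosing, not the constants actually compiled into Csound's reverb opcode:

```python
# Structural sketch of the reverb topology: four parallel feedback combs
# summed, then two Schroeder alpass stages in series.

def make_delay(n):
    """A simple circular delay buffer of n samples."""
    return {"buf": [0.0] * n, "i": 0}

def comb_tick(d, x, g):
    """Feedback comb: output = input + g * (output delayed by loop time)."""
    y = x + g * d["buf"][d["i"]]
    d["buf"][d["i"]] = y
    d["i"] = (d["i"] + 1) % len(d["buf"])
    return y

def alpass_tick(d, x, g):
    """Schroeder alpass stage: flat magnitude response for any |g| < 1."""
    vd = d["buf"][d["i"]]
    v = x + g * vd
    d["buf"][d["i"]] = v
    d["i"] = (d["i"] + 1) % len(d["buf"])
    return -g * v + vd

def make_reverb():
    # loop lengths are mutually prime, so echo patterns never line up
    return {"combs": [make_delay(n) for n in (29, 37, 41, 43)],
            "alpasses": [make_delay(n) for n in (5, 7)]}

def reverb_tick(state, x, comb_g=0.8, ap_g=0.7):
    """Process one input sample through the comb/alpass network."""
    wet = sum(comb_tick(d, x, comb_g) for d in state["combs"]) / 4.0
    for d in state["alpasses"]:
        wet = alpass_tick(d, wet, ap_g)
    return wet
```

Because no two loop times share a simple ratio, the peaks of the four combs fall at unrelated frequencies and largely average each other out, which is what flattens the overall response.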
The design of opcode reverb, now about thirty years old, is not very sophisticated by today's standards. The rather cheesy-sounding reverberant signal often includes an annoying flutter or twang, and I do not recommend using this unit generator.
The more recent unit generator nreverb (introduced in Csound version 3.48) and its immediate (but no longer recommended) predecessor reverb2 incorporate six comb filters in parallel, fed through five alpass filters in series, to provide better audio quality -- although the output of these opcodes is unlikely to be confused with the ambience of, say, Carnegie Hall, or with a top-of-the-line $5000 Lexicon hardware reverberation unit. nreverb and reverb2 share the same argument syntax and similar processing operations, and produce similar reverberant qualities, although nreverb tends to sound a little "wetter" in my experience. However, nreverb is equally usable at all sampling rates, whereas reverb2 tends to introduce more coloration at many sampling rates. (Those running older versions of Csound can substitute reverb2 for nreverb in the examples that follow.)
nreverb and reverb2 include a useful additional parameter, labeled khdif in the Reference Manual, that attempts to simulate control of high frequency diffusion (how quickly high frequencies decay relative to lower frequencies). In natural (acoustic) room reverberation, high frequencies almost always decay more quickly than lower frequencies. The more sound absorbent material in a room, and the larger the size of the room, the greater the discrepancy between high and low frequency decay rates. High khdif values, between about .8 and 1. (the maximum usable value), tend to produce a "drier", "pingier" (more "staccato") reverberant ambience, like a room with high absorptive coefficients (e.g. a room with thick rugs and drapes and many "soft," porous surfaces). When khdif is set to a low value (between about .2 and 0), the reverberation is "wetter," "brighter" for sounds with high frequency spectra and "boomier" for lower pitched sounds, simulating the ambience of a room with hard, reflective surfaces (e.g. cement block walls). Think of the reverberation time (ktime) argument as determining the size of the reverberant room and the diffusion (khdif) parameter as determining the "acoustical treatment" of this room, and season to taste. khdif values between about .2 and .5 are most common.
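The Reference Manual does not document how nreverb implements khdif internally, but a standard way to make high frequencies decay faster than lows -- the behavior described above -- is to place a one-pole lowpass filter inside each comb filter's feedback loop, so that every recirculation strips away a little more high frequency energy. The Python sketch below illustrates that general technique; it is a conceptual model (an assumption about the method), not nreverb's actual code:

```python
def damped_comb(x, loop, g, damp):
    """Feedback comb with a one-pole lowpass in the feedback path.
    damp = 0: plain comb; damp -> 1: highs die quickly in the loop."""
    y = [0.0] * len(x)
    lp = 0.0                                  # one-pole lowpass state
    for n in range(len(x)):
        fb = y[n - loop] if n >= loop else 0.0
        lp = (1.0 - damp) * fb + damp * lp    # smooth the fed-back signal
        y[n] = x[n] + g * lp                  # recirculate the smoothed echo
    return y
```

With damp set to 0 this reduces to an ordinary comb; as damp rises, each trip around the loop dulls the echo a little more, so high-frequency content dies away sooner than the low-frequency reverberant tail.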
The ktime (reverberation time, or "room size") argument can be varied to good effect with control rate signals to vary the reverberant ambience over time. Since the khdif argument of nreverb also is a k-rate variable, it would seem to be possible to vary this diffusion ("room brightness") parameter as well with a control signal. Currently, however, time varying changes applied to the khdif value often produce artifacts -- typically a sustained, high-pitched buzzing, unrelated to the source audio signal. Owing to this bug, I recommend that you treat this variable as an i-rate argument.
Without too much trouble, we could modify the orchestra and score files of ex5-7 to add post-processing reverberation, rather than comb filtering, to our soundfile. We would substitute unit generator nreverb for comb within the orchestra file, and perhaps set score p-field 10 to control the percentage of "wet" (reverberated) vs. "dry" (non-reverberated) signal. We might also change the output to stereo, and add another score p-field to control the left-right pan location of each output note:
awet nreverb krevamount * audio, p9, p12 ; apply reverberation to input signal
aout = (p10 * awet) + ((1. - p10) * audio) ; set wet/dry mix
outs sqrt(p11) * aout, sqrt(1. - p11) * aout ; left-right pan location
Then, with random selection score values for p9, p10, p11 and p12 such as the following
p9 1. .1 2.5; < randomly vary reverb time between .1 and 2.5 seconds
p10 1. .1 .9; < vary "wet" signal % between 10 and 90 %
p11 1. .05 .95; < vary % of signal sent to left channel between 5 & 95 %
p12 1. .2 .6; < reverb high freq. diffusion
we could randomly place each output note in a different left-right and "close-distant" ("dry-wet") location, and also vary the reverberation time and high frequency diffusion ("room brightness") to place each "note" in a "different hall."
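A word on the sqrt() calls in the pan line above: with channel gains of sqrt(p11) and sqrt(1. - p11), the powers of the two channels always sum to the full signal power, so a note does not drop in loudness when panned toward the center. A quick Python check of that property (again, an illustration rather than Csound code):

```python
import math

def pan_gains(p):
    """Left/right gains for pan position p (0 = hard right, 1 = hard left),
    matching the sqrt(p11) / sqrt(1. - p11) scheme in the fragment above."""
    return math.sqrt(p), math.sqrt(1.0 - p)

def channel_power(gain, sig):
    """Power of a mono signal after scaling by a channel gain."""
    return sum((gain * s) ** 2 for s in sig)
```

Since gl**2 + gr**2 = p + (1 - p) = 1 at every pan position, the left and right channel powers always total the power of the unpanned signal.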
Generally, however, we do not wish to vary all such post-processing operations so radically from one note to the next, but rather wish to mix together all of the notes being produced by a given instrument, and then apply a single post-processing operation (such as reverberation) to this mix. In order to perform such "global" signal processing operations with Csound, we need to create a separate global instrument within our orchestra file.
Global instruments often do not generate any audio signal themselves, but instead are often used for post-processing, modifying audio signals that have been generated and mixed together by other instruments within the orchestra file. (Occasionally, however, global instruments are employed by advanced users instead as pre-processors, establishing certain variables that will be accessed by other instruments within an orchestra file, or even turning copies of these other instruments on and off.) There are a few unique aspects to dealing with global instruments that we have not yet encountered.
Local and Global Variables [ See the discussion of CONSTANTS AND VARIABLES in the Csound reference manual ]
All of the i-rate, k-rate and a-rate variables we have looked at so far, with the exception of the header arguments sr, kr, ksmps and nchnls, have been local variables. Local variables, such as "p4," "ipitch," "kenv," "kmart" (remember?) and "a1," are only used, and can only be accessed, by one copy (created by one note event, or I statement, within the score file) of one instrument. Several copies of an instrument can be "playing" simultaneously ("polyphonically"), each with a different ipitch and/or kenv value, without colliding. This is because the ipitch or kenv value for each copy is written to a unique ("local") RAM memory location that can only be accessed by this copy.
After Csound computes a sample for one instrument copy, it zeros out all of the a-rate local variables, creating these values from scratch on each sample pass. When it completes a control (k-rate) cycle of samples for an instrument copy, it zeros out all the local k values, updating them at the beginning of the next k-rate pass.
Global variables, by contrast, can be accessed by all copies of all instruments currently in memory. Global variables can be created and updated at the i-rate, k-rate or a-rate, and are preceded by a "g", resulting in variable names such as gi1 (a global initialization variable), gkpitch (a global k-rate variable) and gaudio (a global audio signal). The variables a1 and ga1 within an instrument are entirely distinct.
One other important distinction must be noted: global k-rate and a-rate values are not zeroed out at the end of each control or sample pass. Rather, these values are CARRIED OVER from one pass to the next. For this reason, it is necessary to initialize these variables -- to set them to some initial value (most often to zero) -- with statements such as:

ga1 init 0
All global variables that will be used in an orchestra should be initialized BEFORE the first i-block (immediately after the header and before the first instrument definition). These variables can then be accessed and operated upon by any instrument, through such operations as:

ga1 = ga1 + audio
This means: add the current value of local variable audio to the current value of global variable ga1, then assign the result as the new value of ga1.
After all processing has been completed at the end of a sample calculation, or at the end of a control (k) rate cycle of sample calculations, it often is necessary to zero out global control and audio signals to prevent feedback, like this:

ga1 = 0
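This whole add-then-clear cycle can be mimicked outside of Csound. In the Python sketch below (an analogy, not Csound code), each pass sums the currently sounding notes into a shared "bus" variable, the global instrument reads the mix, and the bus is then zeroed. Run with clear=False, the previous pass's signal is carried over and re-added -- exactly the feedback problem described above:

```python
class Bus:
    """A shared accumulator, analogous to a global a-rate variable."""
    def __init__(self):
        self.ga1 = 0.0

def run_pass(bus, note_samples, clear=True):
    """One sample pass: every sounding note adds into the bus, the global
    instrument reads the mix, and (normally) the bus is zeroed."""
    for s in note_samples:
        bus.ga1 += s          # each note instance: ga1 = ga1 + audio
    out = bus.ga1             # global instrument reads the summed mix
    if clear:
        bus.ga1 = 0.0         # ga1 = 0  -- prevents carry-over feedback
    return out
```

With clearing enabled, two identical passes produce identical output; without it, the second pass re-adds the first pass's samples and the output grows.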
The following orchestra file, which employs the Eastman Csound Library macros SETUP and MARIMBA, uses Library instrument algorithm marimba to generate a series of notes. In place of the default audio output of this algorithm, the signal is added to the global variable ga1, which is then read (and reverberated) by the global instrument instr 99.
To convert this macro file into a usable Csound orchestra file, ECMC users can type
; ###########################################
; orchestra file for examples ex5-9-0, ex5-9-1, ex5-9-2 and ex5-9-3
; ECMC Csound Library algorithm marimba with added global reverberator
; ##########################################
sr = 44100
kr = 2205
ksmps = 20
nchnls = 1
ga1 init 0 ; initialize global variable
MARIMBA([ ; send output to global reverberator
ga1 = ga1 + a1])
instr 99 ; global reverberation instrument
krevamount line p4, p3, p5 ; % signal to be reverberated
adry = (1 - krevamount) * ga1 ; direct signal -- no reverberation
irevtime = p6 ; reverberation time
; awet reverb2 krevamount * ga1, irevtime, p7
awet nreverb krevamount * ga1, irevtime, p7
out awet + adry ; output reverberated plus direct signals
ga1 = 0 ; clear global variable
endin
The global reverberation instrument above allows us to vary over time the percentage of the input signal to be reverberated (the "wet/dry" mix, controlled by p4 and p5 in the score11 input file for instr 99), but the reverberation time (controlled by p6) and the high frequency diffusion (controlled by p7) remain fixed.
We have provided four similar score11 input files, and resulting compiled soundfile examples in /sflib/x, that employ this orchestra file: ex5-9-0, ex5-9-1, ex5-9-2 and ex5-9-3. The score11 input for the marimba algorithm is identical in each of these four score generating files -- a rather didactic "melody," encompassing a wide pitch range (so that the reverberant ambience added by nreverb can be heard in various pitch registers), that sounds as though it were being performed by a rather tired or disinterested percussionist. In ex5-9-0, so called because it contains no reverberation, both the beginning and ending "wet/dry" mix values are set to zero, resulting in a 100 % dry mix (we are hearing only the output of the marimba instrument).
< ESM Csound Library Tutorial score11 input file  >> ex5-9-0 <<
< Library algorithm MARIMBA with global reverberator
< NO reverberation : 100 % dry signal output
* f100 0 1024 10 1.; < SINE WAVE
MARIMBA 0 8;
rseed 888;
rd .015;
p3 se 7. 1. .5 .33 .25 .1;
du 302;
p4 se 3 d1 c2 b2 gs3 e4 a4 ds5 bf5 g6;
p5 mx 8. 2000 6000 12000; < Amplitude
p6 mx 8 .03 .01 .005 .01; < Attack time: normal range .01-.04
p7 mo 8. .7 1. 1.5; < Attack hardness (1. ord; range .7-1.5)
p8 mo 8. .25 .6 1.5; < Brightness (1. ord; range .25-1.5)
p9 0; < microtonal detuning {not used in this example}
end;
i99 0 0 1; < global reverberation instrument
p3 9.5;
< wet/dry mix : % of signal sent to reverberator
p4 0; < beginning % sent to nreverb
p5 0; < end % sent to nreverb
< reverberation time & brightness :
p6 1.1; < reverberation time
p7 .02; < khdif high freq. diffusion; 0 - 1.
end;
<<>>>>>>>>>>>
score11 input file ex5-9-1 is identical to ex5-9-0 except that the beginning and ending wet/dry mix arguments to the reverberator are set to .33. Thus, throughout the resulting soundfile, 1/3 of the marimba signal is routed through the reverberator, and 2/3 bypasses the reverberator. The khdif argument in p7 is very low (.02). Because both the highest and lowest frequencies within the reverberant signal decay at an almost identical rate of 1.1 seconds (p6), the ambience is "bright," "brittle" and fairly "wet."
< ESM Csound Library Tutorial score11 input file  >> ex5-9-1 <<
< 1.1 second reverb time; 1/3 of signal is reverberated; low khdif value
(The marimba input is identical to that in ex5-9-0)
i99 0 0 1; < global reverberation instrument
p3 9.5;
< wet/dry mix : % of signal sent to reverberator
p4 .33; < beginning % sent to nreverb
p5 .33; < end % sent to nreverb
< reverberation time & brightness :
p6 1.1; < reverberation time
p7 .02; < khdif high freq. diffusion; 0 - 1.
end; <<>>>>>>>>>>>
score11 input file ex5-9-2 is identical to ex5-9-1 except that the khdif high frequency diffusion value is set almost to its maximum allowable value:
p7 .98; < khdif high freq. diffusion; 0 - 1.
Finally, in ex5-9-3, the wet/dry mix increases (from 1 % "wet" at the beginning of the output soundfile to 74 % "wet" at the end), and the khdif argument is set to a typical, "neutral" ("not too bright, not too dull") value of .3:
< Score11 file used to create Eastman Csound Tutorial soundfile example ex5-9-3:
< % of wet signal increases from 1 % to 74 %; medium low .3 khdif value
(The marimba input is identical to that in ex5-9-0)
i99 0 0 1; < global reverberation instrument
p3 9.5;
< wet/dry mix : % of signal sent to reverberator
p4 .01; < beginning % sent to nreverb
p5 .74; < end % sent to nreverb
< reverberation time & brightness :
p6 1.1; < reverberation time
p7 .3; < khdif high freq. diffusion; 0 - 1.
end;
<<>>>>>>>>>>>
Stereo global instruments are also possible, and a global instrument can pass all or part of its output to another global instrument for additional post-processing. To change the orchestra file for the ex5-9 examples from mono to stereo, we would need to create two nreverb units (one for each channel), each fed by its own global audio signal, and add another p-field (p10) to the marimba instrument and its score to determine the left-right spatial placement of each note. These alterations are incorporated within the orchestra file for ex5-10 below. In addition, ex5-10 adds two identical multitap delay lines, one for each stereo output channel, using the Csound multitap unit generator. In this example, four echoes are applied to the dry left and right channel audio signals, but not to the reverberant signals created by the calls to nreverb.
; #############################################################
; soundfile ex5-10 : global stereo reverberation and echo instrument
; #############################################################
sr = 44100
kr = 2205
ksmps = 20
nchnls = 2
galeft init 0 ; initialize left channel global variable
garight init 0 ; initialize right channel global variable
MARIMBA([ ; send output to global stereo reverberator
galeft = sqrt(p10) * a1 + galeft
garight = sqrt(1. - p10) * a1 + garight])
instr 99 ; global reverberation instrument
krevamount line p4, p3, p5 ; % signal to be reverberated
adryleft = (1 - krevamount) * galeft ; direct signal -- no reverberation
adryright = (1 - krevamount) * garight ; direct signal -- no reverberation
irevtime = p6 ; reverberation time
awetleft nreverb krevamount * galeft, irevtime, p7
awetright nreverb krevamount * garight, irevtime, p7
; add 4 echoes, but only to dry signals
aechosleft multitap adryleft, p8, p9, p10, p11, p12, p13, p14, p15
aechosright multitap adryright, p8, p9, p10, p11, p12, p13, p14, p15
outs awetleft + adryleft + aechosleft, awetright + adryright + aechosright
galeft = 0 ; clear global variable
garight = 0 ; clear global variable
endin
-----------------------------------------------------------
< Score11 file used to create Eastman Csound Tutorial soundfile example ex5-10:
* f100 0 1024 10 1.; < SINE WAVE
(The marimba input is identical to that in the ex5-9 examples, but p10 has been added to control stereo placement.)
MARIMBA 0 8;
rseed 888;
rd .015;
p3 se 7. 1. .5 .33 .25 .1;
du 302;
p4 se 3 d1 c2 b2 gs3 e4 a4 ds5 bf5 g6;
p5 mx 8. 2000 6000 12000; < Amplitude
p6 mx 8 .03 .01 .005 .01; < Attack time: normal range .01-.04
p7 mo 8. .7 1. 1.5; < Attack hardness (1. ord; range .7-1.5)
p8 mo 8. .25 .6 1.5; < Brightness (1. ord; range .25-1.5)
p9 0; < microtonal detuning {not used in this example}
p10 1. .05 .95; < left-right stereo spatial placement
end;
i99 0 0 1; < global reverberation instrument
p3 11.0;
< % of signal sent to reverberator :
p4 .01; < beginning % sent to reverb
p5 .74; < end % sent to reverb
< reverberation time & brightness :
p6 1.1; < reverberation time
p7 .3; < khdif high freq. diffusion; 0 - 1.
< multitap delay line : 4 echoes : delay times and gains
p8 .091; < echo 1 delay time
p9 .6; < echo 1 gain
p10 .41; < echo 2 delay time
p11 .3; < echo 2 gain
p12 1.17; < echo 3 delay time
p13 .17; < echo 3 gain
p14 1.95; < echo 4 delay time
p15 .07; < echo 4 gain
end;
<<>>>>>>>>>>>
The four echoes created by the left and right channel multitap unit generators have the following delay times and gains:

          delay          gain
echo 1 :  .091 seconds   .6  * amplitude of dry signal
echo 2 :  .41 seconds    .3  * amplitude of dry signal
echo 3 :  1.17 seconds   .17 * amplitude of dry signal
echo 4 :  1.95 seconds   .07 * amplitude of dry signal

To accommodate these delays, we have increased the duration of the "note" played by instr 99 in our score file from 9.5 seconds (as in the ex5-9 scores) to 11 seconds.
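The multitap opcode takes its arguments as alternating (delay time, gain) pairs, and its output is the sum of copies of the input, each delayed and scaled by one pair. A small Python model of that behavior (illustrative only -- the sample rate and signal here are arbitrary):

```python
def multitap_model(x, sr, *time_gain_pairs):
    """Sum of delayed, scaled copies of x: one copy per (time, gain) pair,
    mimicking the argument layout of Csound's multitap opcode."""
    taps = [(round(t * sr), g) for t, g in
            zip(time_gain_pairs[0::2], time_gain_pairs[1::2])]
    y = [0.0] * len(x)
    for n in range(len(x)):
        # each tap contributes the input from (delay) samples ago, scaled
        y[n] = sum(g * x[n - d] for d, g in taps if n >= d)
    return y
```

Note that only the delayed taps appear in the output; as in the ex5-10 orchestra, the undelayed dry signal must be added back in separately.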
Note that although the ex5-10 score input for marimba is identical to that in the four ex5-9 examples except for the inclusion of the new p10 stereo placement parameter, the "tune" played by marimba -- its rhythms, pitches and note articulations -- differs noticeably from that of the ex5-9 examples. The addition of this new p-field introduces additional pseudo-random number generation operations, which alter the note-by-note, parameter-by-parameter values generated for marimba p-fields 2 through 8.
While listening to the ex5-9 and ex5-10 examples, your ear will tell you that Csound unit generator nreverb is not an extremely high quality reverberator, but rather comparable in quality to what one typically finds in a hardware effects box costing a few hundred dollars. nreverb thus can be serviceable for certain uses if one doesn't push it too hard (i.e. if one uses fairly low reverberation time and wet/dry mix values), but on the ECMC SGI systems, better audio quality generally can be obtained by using the programs space, place, and move. Dissatisfaction with the generic reverb and nreverb unit generators has led some advanced Csound users to construct their own reverberation instrument algorithms from combinations of comb, alpass, low pass filter, delay line and other unit generators. Illustrative reverberation instrument algorithms are available within the example files bundled with the Csound distribution, and at the web sites of some Csound users. Eric Lyon, for example, has made several of his Csound reverberation instruments available at http://ringo.sfc.keio.ac.jp/~eric/csoundinst/REVERB/
Csound includes additional methods by which advanced users can add reverberant ambience to sounds, but these alternative resources generally are more complex to use. Two of these alternatives, employing convolution and head related transfer function procedures, may already be familiar to users of Tom Erbe's Macintosh SoundHack application and of some other DSP packages available for Unix, Windows and Macintosh systems.
Unit generator CONVOLVE employs Fast Fourier Transform (fft) procedures -- mathematically equivalent to, but considerably faster than, direct time-domain convolution -- to filter an input soundfile through the time varying impulse response characteristics of a particular concert hall or other acoustic space. Several steps are necessary to accomplish this operation.
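The operation underlying convolve can be stated compactly: every sample of the input launches a copy of the measured impulse response, scaled by that sample's amplitude, and all of the copies are summed. Written directly in Python (the opcode obtains the same result far faster by working in FFT blocks):

```python
def convolve_direct(x, h):
    """Direct (time-domain) convolution of input x with impulse response h."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):          # each input sample ...
        for k, hk in enumerate(h):      # ... launches a scaled copy of h
            y[n + k] += xn * hk
    return y
```

Filtering through the one-sample impulse response [1.0] returns the input unchanged, while a real room response -- tens of thousands of samples long -- makes this O(len(x) * len(h)) loop impractically slow, which is why the fft approach matters.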
Unit generator HRTFER employs a head related transfer function algorithm to create the illusion of spatially localized binaural output from monophonic input soundfiles. Here, too, a special format hrtf file must be created or obtained before this unit generator can be used. Furthermore, the aural results of hrtfer generally are more effective when heard over headphones than over loudspeakers, which limits its utility.