Eastman Csound Tutorial

Chapter 3
AMPLITUDE and FREQUENCY MODULATION; PROGRAMMING CONTROLS

By now we have begun to patch together unit generators and other mathematical operations and utilities available within Csound to form more complex, and therefore (we hope) musically more interesting instrument algorithms. In an example within the last chapter, we used a control oscillator to create a tremolo, or sub-audio amplitude modulation signal. In the following pages, we will look first at somewhat more sophisticated amplitude modulation procedures, and then at frequency modulation procedures. Some of you likely already are familiar with these concepts, others not. If you have little prior experience with modulation procedures, or erroneously surmise that they might involve moving from G minor to B-flat major, you may find it helpful to consult the tutorial introductions within Computer Music: Synthesis, Composition and Performance by Charles Dodge and Thomas Jerse (on reserve at Sibley for CMP 421-2) or within various other computer and electronic music texts, to supplement or clarify the material presented here.

3.1. Amplitude Modulation

Below is a sample amplitude modulation instrument, and a score for it to play. The amplitude modulation here occurs in the audio rather than the sub-audio frequency range. Thus, instead of producing amplitude beats (tremolo), it will alter the timbre produced by the audio oscillator, producing sum and difference tone sideband frequencies between the modulating and carrier oscillator waveforms. The frequency ratio between the modulating and carrier oscillators remains constant throughout a note (the pitches of both oscillators remain fixed). However, the depth (percentage) of the amplitude that is modulated is varied by means of a control oscillator, resulting in time varying timbral changes.

(The lines of Csound code have been numbered in this example so that we can refer quickly to particular features. Obviously, such line numbers would not be included in an actual Csound orchestra file.)
;  #############################################################
;  soundfile ex3-1  :    Orchestra file used to create this soundfile
;  #############################################################
1   ; Audio-Rate Amplitude Modulation Instrument :
2   ;  p3 = duration   p4 = pitch   p5 = amplitude  p6 = Attack time  p7 = decay time
3   ;   p8 = ratio of modulating freq. to p4
4   ;   p9 = 1st  amplitude modulation %
5   ;   p10 = 2nd  amplitude modulation %
6   ;   p11 = Function number controlling change in a. m. %
7   ;   p12 = Function number for modulating oscillator

8   sr = 44100
9   kr = 2205
10  ksmps = 20
11  nchnls = 1

12  instr 1
13  ipitch = cpspch(p4)
14  kenv  linen  p5,  p6,  p3,  p7            ; amplitude envelope
15  ; amplitude modulation :
16      kmod oscili (p10 - p9),  1/p3,  p11  ; difference between 1st & 2nd a. m. %
17         kmod = (kmod + p9) * kenv           ; % of signal to be amp. modulated
18         knomod = kenv - kmod                  ; % of signal NOT amp. modulated
19      ampmod  oscili  kmod, ipitch * p8, p12   ; modulating oscillator

20  audio  oscili  ampmod + knomod, ipitch , 1   ; carrier oscillator
21      out audio
22  endin
-------------------------------------------------------


Score for above instrument :
-------------------------------------------------------
< score for ex3-1 : audio-rate amplitude modulation
 < audio waveform functions :
*f1 0 1024 10 1.;             < sine wave
*f2 0 1024 10 0 1. .3 0 .15;  < more complex wave, harmonics 2,3,5
< control functions for change in amplitude modulation % {p12}
*f10 0 65 7  0  64  1. ;         < linear change between p9 & p10
*f11 0 65 5  .01  64  1.;        < exponential pyramid change between p9 & p10
*f12 0 65 7  0  32  1.  32  0;   < linear pyramid  {p9 to p10 to p9}
*f13 0 65 5  .01  32  1.  32  .01;  < exponential pyramid  {p9 to p10 to p9}

i1 0 0 4;           < 4 output notes
p3 4;                    < start  times
du .95;
p4 no c4;                < pitch
p5 10000;                < amplitude
p6 .2;                   < attack time
p7 .5;                   < decay time
p8 nu 1./ 1.4 / .255/ 3.3;    < ratio of modulating frequency to p4
p9 nu .05/ .05 ;            < 1st % of a.m.
p10 nu .95/ .95;            < 2nd % of a.m.
p11 nu 10/ 11 / 12/ 13 ;   < function number for change in a.m. %
p12 nu 1//  2 // ;         < function number for modulating oscillator
end;
-------------------------------------------------------


This instrument algorithm has been lavishly commented, almost to the point of fussiness and loquacity. However, we well may appreciate this documentation later, and anyone else who looks at our orchestra file will be grateful for the assistance. As your instruments get more complicated, it becomes harder to follow your own logic. What seems obvious today may cause head-scratching in a few weeks.

linen (line 14) creates a simple attack-sustain-decay amplitude envelope for each note. On lines 16-18 we split this original amplitude signal into two paths. kmod is the percent of the amplitude signal that will be modulated; knomod is the percent that will not be modulated. The greater the depth of modulation, up to a maximum of 1. (100 %), the greater the resulting strength of the sidebands, and thus the greater the timbral change.

Let's look more carefully at how the depth of modulation is varied :

15 ; amplitude modulation :
16   kmod oscili (p10 - p9), 1/p3, p11    ; difference between 1st & 2nd a. m. %
17   kmod = (kmod + p9) * kenv          ; % of signal to be amp. modulated
18   knomod = kenv - kmod                 ; % of signal NOT amp. modulated
19   ampmod oscili kmod, ipitch * p8, p12 ; modulating oscillator

The control oscillator kmod (line 16) computes the changing difference between the first and second modulation percentages. Changes in this value follow the shape of the function given in score p-field 11. The difference is then added to our initial (p9) percentage in line 17 and multiplied by the original amplitude signal. This gives us the total, time varying amplitude signal level that will be sent to the modulating oscillator. This amplitude is then subtracted from the original output of linen (line 18) to define the non-modulated amplitude signal, and the result is written to a RAM memory location we call knomod.
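The arithmetic of lines 16-18 can be sketched outside Csound. This hypothetical Python helper (not part of any ECMC file; the name and ctl parameter are ours) shows that the modulated and unmodulated shares always sum to the full envelope value:

```python
# Hypothetical Python model of orchestra lines 16-18 (not ECMC code).
# ctl is the control-oscillator output, 0..1, shaped by the p11 function.
def split_amplitude(kenv, p9, p10, ctl):
    kmod = (ctl * (p10 - p9) + p9) * kenv   # share of the envelope to be modulated
    knomod = kenv - kmod                    # share left unmodulated
    return kmod, knomod
```

With p9 = .05 and p10 = .95 (as in the score), 5 % of the envelope is modulated at the start of the control function and 95 % at its peak, with the total always equal to kenv.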

The modulating oscillator (ampmod) generates the modulating signal. Its three arguments are :

1) the amplitude to be modulated (signal kmod)
2) frequency : p8 times the (p4) frequency of the carrier oscillator
3) function : the audio waveshape function specified in score p-field 12 (a sine wave for the first two notes, a more complex waveform for the final two notes)

The actual amplitude modulation occurs within the carrier (audio) oscillator on line 20. The output of the modulator (ampmod) is added to the non-modulated amplitude (knomod) within the amplitude argument to the carrier oscillator.

Modulation techniques have been important resources in digital as well as analog sound synthesis. My friend James Dashow, for example, has composed works in which the timbre of almost every note results from amplitude modulation procedures. I suggest that you grab a copy of the orchestra and score files above, or of others in this chapter, and try adding some modifications or additions to these algorithms, or else create your own instrument algorithms and scores based upon the models provided here. In ex3-1, for example, you might replace the bare-bones linen envelope generator on line 14 with a more powerful amplitude envelope created with envlpx or expseg. (This may require adding some additional score p-fields.) You might create some other audio waveforms in addition to, or in place of, f1 and f2 for the modulating oscillator, or change the carrier oscillator waveform from the current sine wave to something more complex. Or, the pitch of the modulating and/or carrier oscillators might be varied by a control signal. A stereo version of the instrument could be created with an added p-field for spatial localization between right and left channels :

outs sqrt(p13)*a1, sqrt(1-p13)*a1
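The square-root taper in this panning law keeps the combined power of the two channels constant across the stereo field. A small illustrative Python check (p13 is the left-channel share, 0 to 1):

```python
import math

# Illustrative check of the square-root panning law: for any p13 in 0..1,
# the per-channel gains sqrt(p13) and sqrt(1 - p13) sum to unit power.
def pan(sig, p13):
    return math.sqrt(p13) * sig, math.sqrt(1 - p13) * sig
```

At p13 = .25 the left gain is .5 and the right gain about .866; the squared gains still sum to 1, so the apparent loudness does not dip in the middle of the field as it would with a straight linear crossfade.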

In soundfile example ex3-2, amplitude modulation, both at the sub-audio and audio rate, is applied to a soprano tone soundfile from the /sflib/voice directory.

;  #############################################################
;  soundfile ex3-2  :  amplitude modulation of a soundfile
;  #############################################################
; Orchestra file used to create this soundfile
-------------------------------------------------------
sr= 44100
kr = 4410
ksmps = 10
nchnls = 1

   ;  score p-fields:
   ;  p4 =  soundin.# number {source soundfile} , p5 = skip time
   ;  amplitude modulation :
   ;     p9 =  opening % of audio signal to be modulated
   ;     p10 =  closing % of signal to be modulated
   ;     p11 =  modulating frequency (can be sub-audio or audio rate)

instr 1
asig  soundin  p4, p5    ; read in a soundfile
   ; add a fade-in/fade-out amplitude envelope & multiplier
  p7 = (p7 == 0 ? .0001 : p7)  ; protect against illegal 0 values
  p8 = (p8 == 0 ? .0001 : p8)  ; protect against illegal 0 values
kamp expseg  .01, p7 , p6 , p3 - (p7 + p8) , p6 , p8 , .01
asig  =  asig * kamp

  ; apply amplitude modulation to these samples
 kmod line   p9,  p3  , p10      ;  controls % of signal to be modulated
 knomod =  (1. - kmod )        ; non-modulated
 ampmod   oscili  kmod , p11 , 1   ; amplitude modulation oscillator
asig =  (knomod * asig)  +  (ampmod * asig)
out  asig
endin
-------------------------------------------------------


; Score file used to create soundfile "ex3-2"
*f1  0 2048 10 1 ;  <  sine wave
i1 0 0 7;   < create 7 output "notes"
p3 nu 6.5 * 3 /  3. * 4;            < start times
du nu 306.36 * 3/ 302.8 * 4;    < output duration
p4 4 ;         < soundin.# number : soundin.4 points to /sflib/voice/sop1.b3
p5 nu 0 * 3/ 2. * 4;                      < skip time into soundfiles
<  fade-in/fade-out envelope and amplitude scalar
p6  .6;          < multiplier for output amplitude
p7 nu 0 * 3/ .1 * 4;             < fade-in
p8 nu 0 * 3 / .25 * 4;             < fade-out

< amplitude modulation :
p9 nu .4 / .6 / .95 *5 ;         < opening % of signal to be modulated
p10 nu 0 // / .95 * 4;    < ending % of signal to be modulated
p11 nu 6. / 15. /    < frequency of amplitude modulator oscillator
   123.47 / 164.8 / 246.9/  349.2 /  46.2;
end;
-------------------------------------------------------

Those of you familiar with the ECMC MIDI studio might note that the orchestra file algorithm used to create soundfile example ex3-2 is similar, in many respects, to the hardware architecture of the old, old, old (but still useful) 360 frequency shifter ("balanced modulator") in our MIDI studio. Both employ an "internal" sine wave oscillator to amplitude modulate an "external" audio signal. As is often the case, however, software synthesis presents us with some additional possibilities. For example, we could create alternative amplitude modulation Csound algorithms in which the modulating frequency, waveform or modulation depth varies during the course of each note.

3.2. Frequency (phase) Modulation

Frequency modulation has been a widely used resource in both MIDI and direct digital synthesis systems.[1]

[1] In Yamaha dx, tx, tz and sy series synthesizers of the 1980s and early 1990s, for example, frequency modulation was the principal means of generating various types of timbres.

FM can be employed either to produce periodic pitch fluctuations (vibrato), or else a wide range of timbres. The classic article on uses of F.M. for timbral control is "The Synthesis of Complex Audio Spectra by Means of Frequency Modulation" by John Chowning, who first explored these procedures in the 1970s.[2]

[2] This article is reprinted in Foundations of Computer Music, edited by Roads and Strawn, which is on reserve at Sibley. If you have not previously worked with F.M. techniques, or are not clear on how the procedure works, we recommend that you read this article up to the section on Implementation. Simpler, tutorial-level introductions to FM are also included within several computer music texts.

To summarize F.M. briefly :

One or more modulating oscillators are created, and the resulting control signal(s) are then added to the sampling increment (S.I.) of a carrier (audio) oscillator. This addition to the S.I. causes the carrier oscillator to skip around, back and forth, within the table, rather than marching straight through the table from beginning to end for each pitch cycle. For this reason, digital F.M. is often called phase modulation, since positive and negative values from the modulating oscillator are added to the current phase (position) within the function table. The greater the amplitude of the modulating oscillator, the greater the forward and backward jumps within the function table, and the greater the resulting change in pitch or timbre. In "classical F.M.," the waveshapes of both the modulating and carrier oscillators are sinusoids.
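As a sketch of this idea (hypothetical Python, not Csound internals; the table size, the truncating lookup, and all names are our assumptions), a phase-modulation oscillator simply offsets the carrier's table position by the modulator's output on every sample:

```python
import math

# Hypothetical phase-modulation oscillator (a sketch, not Csound internals):
# the modulator's output, scaled by the index, offsets the carrier's
# position in a shared sine table on every sample.
TABLE = [math.sin(2 * math.pi * i / 1024) for i in range(1024)]

def pm_osc(car_freq, mod_freq, index, sr=44100, dur=0.01):
    out = []
    car_phase = mod_phase = 0.0
    for _ in range(int(sr * dur)):
        mod = index * TABLE[int(mod_phase) % 1024]
        # jump forward or backward in the table by the modulator's output
        out.append(TABLE[int(car_phase + mod * 1024) % 1024])
        car_phase += car_freq * 1024 / sr
        mod_phase += mod_freq * 1024 / sr
    return out
```

With index = 0 this reduces to a plain table-lookup sine wave; raising the index widens the phase excursions and thus the sideband energy.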

There are two distinct types of F.M. : sub-audio rate and audio rate. When the frequency of the modulating oscillator is in the sub-audio range (below about 16 hertz), we can hear the individual variations in frequency. If the modulating oscillator is producing a smooth-shaped, symmetrical waveform, such as a sinusoid or triangle wave, a vibrato results. If the modulator is producing a square wave (with abrupt alterations between two values), a trill results.

When the frequency of the modulator is in the audio range, however, the resulting variations in the carrier frequency are so rapid that we can no longer hear them individually. Rather, sidebands (sum and difference tones) are created, resulting in a more complex timbre. The timbral change is usually more drastic than with amplitude modulation. Whereas a sine wave amplitude modulating another sine wave will produce a single set of sidebands (the sum and difference frequencies of the carrier and modulator), a sine wave frequency modulating another sine wave will often produce several sets of sum and difference tones:

c+m, c-m :(carrier frequency plus the modulating frequency ; carrier minus mod. frequency)
c+2m, c-2m :(carrier freq. plus double the mod. freq. ; carrier minus two times the modulating frequency)
c+3m, c-3m : (carrier freq. plus and minus three times the modulating freq.)
c+4m, c-4m : (carrier freq. plus and minus four times the modulating freq.)
etc.

The table below shows the first four upper and lower sideband frequencies that result when a modulating frequency of 200 hertz is applied to carrier frequencies of

        100 hertz (a c:m ratio of 1:2)
        101 hertz (a c:m ratio of 1.01:2, or  1:1.980198)
         74 hertz (a c:m ratio of approximately 1:2.7)

    --------------------4th order--------------------
    |     --------------3rd order--------------     |
    |     |     --------2nd order--------     |     |
    |     |     |     --1st order--     |     |     |
    |     |     |     |           |     |     |     |
    |     |     |     | (carrier) |     |     |     |
  c-4m  c-3m  c-2m   c-m     c   c+m  c+2m  c+3m  c+4m
  ----  ----  ----  ----   ---  ----  ----  ----  ----
  -700  -500  -300  -100   100   300   500   700   900
  -699  -499  -299   -99   101   301   501   701   901
  -726  -526  -326  -126    74   274   474   674   874
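The rows of this table can be generated mechanically, since the nth-order sidebands fall at c plus and minus n times m. A small illustrative Python helper (hypothetical name, not from the tutorial):

```python
# Illustrative helper (hypothetical name): nth-order sidebands lie at
# carrier +/- n * modulator, for n = 1 .. orders.
def sidebands(carrier, mod, orders=4):
    return [carrier + n * mod for n in range(-orders, orders + 1)]
```

For the 100 hz carrier this reproduces the first row of the table, from -700 up to 900.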

Now consider the following simple F.M. instrument and score :

 #############################################################
 soundfile ex3-3   :  Simple F.M. instrument   Csound Tutorial
 #############################################################
Orchestra file used to create this soundfile:
--------------------------------------------------------------
sr = 44100
kr = 2205
ksmps = 20
nchnls = 1

instr 1
   kenv expseg  1 , p6 , p5 , p3 - (p6+p7), p5, p7, 1 ; amplitude envelope
   ipitch = cpspch(p4)
   amod  oscili   p9*ipitch,  p8,  100        ; modulating oscillator
   acar  oscili  kenv,  ipitch + amod,  100   ; carrier oscillator
   out acar
endin
--------------------------------------------------------------
< score11 input file used to create  soundfile "ex.3-3" :
* f100 0 1024 10 1.;     < sine wave
i1 0 0 2;
p3 4;
p4 no a3;
p5 8000;                < amplitude
p6 .2;                  < attack time
p7 .5;                  < decay time
p8 nu 5./  440.;        < modulating frequency
p9 nu .03/  3.;          < depth of modulation (*p4 frequency)
end;
--------------------------------------------------------------

COMMENTS on ex3-3 :

First note (sub-audio F.M.) :

The amplitude of the modulating oscillator is set to p9*ipitch. (ipitch is the base frequency of the carrier oscillator, or 220 hertz for both notes in the score.) Since the modulating oscillator is reading a sine wave table (function 100), its amplitude values for the first note will vary between +6.6 (.03 * 220) and -6.6 (-.03 * 220) at a rate of 5 hertz (p8).

The output of the modulating oscillator is added to the base frequency value (ipitch, or 220 hertz) of the carrier. The resulting pitch vibrato will vary sinusoidally between 213.4 and 226.6 hertz, at a rate of 5 times per second.

Second note (audio-rate F.M.) :

The term modulation index is a measure of the depth, or degree, of frequency modulation, indicating how much "distortion" (or "non-linear waveshaping") is applied to the reading of the carrier function table. Technically, "modulation index" is defined as

the ratio of the peak frequency deviation to the modulating frequency.

In our example (the second note of ex3-3) this ratio, or index, is 1.5 :

       Peak Deviation / Modulating Frequency = Index
            660       /          440         =  1.5
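This arithmetic can be verified directly (plain Python, using the values from the second note of ex3-3):

```python
# Values from the second note of ex3-3: p9 = 3., ipitch = 220, p8 = 440.
ipitch, p9, modfreq = 220.0, 3.0, 440.0
peak_dev = p9 * ipitch        # peak frequency deviation: 660 Hz
index = peak_dev / modfreq    # modulation index
```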

3.2.1. Unit generator foscili

[ See the discussion of FOSCIL and FOSCILI in the Csound reference manual ]

Because "classical" F.M. procedures have been so common, Csound provides a unit generator, named foscili, which combines the modulating and carrier oscillators within a single line of code. The arguments to foscili are : amplitude, frequency, carrier frequency ratio, modulating frequency ratio, modulation index, and function table number.

In example ex3-3 above, the modulating and carrier oscillators

   amod  oscili   p9*ipitch,  p8,  100        ; modulating oscillator
   acar  oscili  kenv,  ipitch + amod,  100   ; carrier oscillator
could be replaced by a call to foscili to create the second note of our score. The arguments would be :

acar foscili kenv, ipitch, 1, 2. , 1.5, 100

3.2.2. Carrier to Modulator Ratios

Simple c : m ratios in which the modulating frequency is some integer multiple of the carrier (for example, 1:2, 1:3, and so on) will create pitched sounds at the carrier frequency. Similar ratios, in which the carrier is an integer multiple of the modulator (for example, 2:1 or 3:1) will also produce tones with well-defined pitch, but the perceived pitch will be that of the modulating oscillator. Simple ratios such as 2:3, 3:4, and 2:5 will produce an apparent fundamental at frequency "1". (That is, with a carrier at 200 hz. and the modulator at 300 hz., the perceived fundamental will be 100 hertz.)
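For integer carrier and modulator frequencies in a simple ratio, the apparent fundamental falls on their greatest common divisor, as a quick Python check illustrates (hypothetical helper name; a rule of thumb for simple pitched cases, not a general pitch model):

```python
from math import gcd

# Hypothetical helper: for simple integer c:m ratios, the apparent
# fundamental falls on the common divisor of the two frequencies.
def apparent_fundamental(carrier, mod):
    return gcd(carrier, mod)
```

A 200 hz carrier with a 300 hz modulator yields 100 hz, matching the 2:3 example above.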

We will obtain similar timbres by using nearly harmonic c : m ratios, such as 1:1.002 , 1:2.99, 1:.498, or 2:3.003 . However, the resulting sidebands will be very slightly "out of tune" (like the harmonics produced by most natural acoustic instruments), and thus will frequently be out of phase. That is, the "harmonics" will alternately come in phase, starting together on a new cycle, and go out of phase, starting some new cycles at different times. This will cause amplitude beats, which will be clearly audible, and can be seen on an oscilloscope. In general, nearly-harmonic c:m ratios are preferred to exactly harmonic ratios, because of the added "warmth" and "life" imparted to the tone by the amplitude beats.

Non-integer c:m ratios, such as 1:1.414 (the interval of a tritone) or 1:2.7, produce inharmonic partials. The resulting sounds are often unpitched, percussive, gong-like, or noise-like, depending upon the modulation depth.

Any change in the c:m ratio will produce a change in timbre. Thus, if we wish to introduce glissandos, vibratos, or random frequency deviations, such control signals must be added to the frequency inputs of both the modulating and carrier oscillators. foscili does this automatically. But in instances where we create our own modulating oscillator(s), we must add these control signals by hand, as in the following example. ex3-4 was created by the glissando F.M. instrument and score below:


 #############################################################
 soundfile ex3-4   :  F.M. Glissando Instrument : Csound Tutorial
 #############################################################

1   sr = 22050
    kr = 2205
    ksmps = 10
    nchnls = 1
2   instr 1
3   kenv linen p5,p6,p3,p7   ; amplitude envelope

4   ; Glissando control signal :
5   ipitch1 = cpspch(p4)        ; 1st pitch
6   ipitch2 = cpspch(p10)       ; 2nd pitch
8   p12 = (p12 < .001 ? .001 : p12)  ; for notes with no gliss
9                                        ; p12 cannot be 0 in expseg below
10  kgliss expseg ipitch1,p11,ipitch1,p12,ipitch2,p3-(p11+p12),ipitch2 ; gliss. envelope
11      display kgliss,p3            ; let's see how it looks

12  ; Scale the FM index : the lower the pitch, the higher the index
13     k3 = octcps(kgliss)    ; convert cps to oct for scalar
14     kscale = (18-k3) * .1  ; index scalar ; c4 = 1., c3=1.1, c5 = .9 etc
15  kscale = kscale * kscale  ; now c4 =1., c3 = 1.21, c5 = .81 etc

16  ; F.M.
17    amod  oscili p9*p8*kgliss, p8*kgliss, 100     ; modulating oscillator
18    amod = kscale*amod  ; now scale the fm index
19  acar  oscili kenv,  kgliss + amod , 100   ; carrier oscillator
20  out acar
21  endin
--------------------------------------------------------------

< score11 input file used to create  soundfile  "ex3-4"
* f100 0 1024 10 1.;     < sine wave
i1 0 0 3;
p3 nu 3/4/5.5;
p4 no a3//d7;
p5 8000;                  < amplitude

p6 .15;                   < attack time
p7 nu .7//  1.2;          < decay time
p8 nu 2.003//  2.005;   < ratio of modulating freq. to carrier (c:m = 1:p8)
p9 nu 1.5//1.3;         < modulation index
< glissando parameters :
p10  no a3/ c5/ ef2;      < 2nd pitch
p11 nu 3./ .5/ .75;       < p4 duration
p12 nu 0/ 1.5/ 2.5;       < gliss duration
end;
--------------------------------------------------------------

Since the F.M. algorithm is used here only for audio-rate frequency modulation, and never for vibrato, we have streamlined the F.M. p-fields to make the score values more meaningful. p8 now specifies the c:m ratio (1:p8) rather than the modulating frequency, and p9 now specifies the modulation index.

A glissando envelope is created on line 10. Line 8 merely allows us to leave p-field 12 blank in our score for notes with no glissando. In order to see what the glissando shape looks like, we have included a call to display on line 11. This line could be omitted or commented out, since it does not affect the audio signal in any way.

Tones produced in the lower registers of most instruments contain a richer spectrum -- more partials -- than tones produced in higher registers. Glissandos of a perfect fifth or more are apt to sound artificial if the modulation index (and thus the timbre) remains constant. The higher pitch will sound "bright," the lower pitch "dull." In addition, we sometimes must be careful about the index level of higher pitches. Some of the sidebands will alias if the index is too high, especially at lower sampling rates, such as 22050, as in ex3-4 above.

For these reasons we have included an F.M. index scalar (control signal kscale) on lines 12-15, and have used it as a multiplier for the index on line 18. This scalar will raise the index progressively as the pitch falls below middle C, and lower the index for pitch levels above middle C.
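The scalar computed on lines 13-15 can be checked in Python. Here octcps is approximated from its definition (middle C, about 261.626 hz, is 8.00 in oct notation, which puts C0 near 1.02197 hz); the function names mirror the Csound converters, but this is an illustrative sketch, not the library code:

```python
import math

# Sketch of orchestra lines 13-15. octcps is approximated from its
# definition: middle C (c4, ~261.626 Hz) = 8.00 in oct notation,
# so C0 is roughly 1.0219714 Hz.
def octcps(cps):
    return math.log2(cps / 1.0219714)

def index_scale(cps):
    kscale = (18 - octcps(cps)) * 0.1   # c4 -> 1.0, c3 -> 1.1, c5 -> 0.9
    return kscale * kscale              # c4 -> 1.0, c3 -> 1.21, c5 -> 0.81
```

This reproduces the values given in the orchestra comments: the index is boosted about 21 % an octave below middle C, and cut about 19 % an octave above.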

  ###################################################################
  soundfile ex3-4-2   :  The same instrument without the FM index scale
  ###################################################################

Listen to soundfile ex3-4-2, which "plays" the third note of the preceding score (ex3-4), and which has been created by the same orchestra and score files as ex3-4, except that the FM index scalar on lines 12-15 and 18 has been commented out. Without the index attenuation provided by the control signal kscale, the beginning of the high tone (d7) includes aliasing, and the low ef2 at the end of the note sounds anemic.

3.2.3. Modulation Index

So far we have used either a constant or a scaled value for the modulation index, and the resulting timbre has remained pretty much fixed, and therefore rather "dull," "artificial" or "electronic-sounding," not unlike the fixed waveform oscillator examples in chapters 1 and 2. The real power of F.M., or of any modulation procedure, however, often comes from varying the index with an envelope to produce changes in spectral "brightness" within a tone. Typically, an F.M. index envelope will resemble an amplitude envelope (though with lower values, of course), since our perception of "loudness", and of note articulations, accents and phrasing, often are psychoacoustic responses to changes in both amplitude and in spectrum (especially in high frequency energy). However, timbres often are more interesting if the amplitude and modulation envelopes do not coincide too precisely.

Generally, we construct modulation depth envelopes to simulate spectral changes (changes in the percentage of high and low frequency energy) that occur within acoustic sounds. Idiophonic and membranophonic sounds (including pizzicato and piano tones), for example, generally contain the greatest amount of high frequency energy at the very beginning of the attack (the click, scrape or pop which articulates the onset of the sound). Thus, an FM index for such a sound should have a very rapid rise, or begin at a high value, and then decrease during the rest of the tone. In bowed string, aerophone and vocal tones, by contrast, higher partials typically have a longer rise time, and a more rapid decay, than lower partials.

Here is a more sophisticated F.M. instrument, with a more flexible amplitude envelope and a second envelope that controls changes in the F.M. index:

#############################################################
soundfile ex3-5 : "Classical" FM with modulation index envelope : ECMC Csound Tutorial
#############################################################
; FM - single carrier, single modulator "classical fm" instrument
; p3 = duration ; p4 = pitch (cps or pch); p5 = amplitude
; p6 = attack time ; p7 = decay time ; p8 = atss amplitude multiplier
; frequency modulation : p9 freq. ratio of modulator to carrier
; fm index envelope values : p10 beginning; p11 after p6; p12 before p7
sr = 44100
kr = 4410
ksmps = 10
nchnls = 1

instr 38
ipitch = ( p4 > 13.0 ? p4 : cpspch(p4) )
p8 = (p8 > 9.99? p8 : p8*p5) ; p8 can be an absolute value or * p5
kamp expseg 1, p6, p5, p3-(p6+p7), p8, p7, 1 ; amplitude envelope

kfmindex expseg p10, p6, p11, p3-(p6+p7), p12, p7, .3*p12 ; fm index envelope

a1 foscili kamp, ipitch , 1, p9, kfmindex , 100
out a1
endin
--------------------------------------------------------------
< score11 input file used to create soundfile "ex3-5"
* f100 0 1024 10 1.;          < SINE WAVE
i38 0 0 4;
p3 4.;
du 304;                       < du = Duration
p4 no ef3;                    < p4 = frequency : use notes, pch or cps
< amplitude envelope
p5 nu 10000/1500/10000//;     < p5 = amplitude
p6 nu .05/ 1. / .08/ .05;     < p6 = Attack time
p7 nu 3.8 /.2 / 3.4 / 3.6;    < p7 = Decay time
p8 nu .5 /7.0 / .6 / .4 ;     < atss - amplitude before p7 < absolute or *p5 if < 10.00
< F.M. values : p9-p13
p9 nu 1.005/1.996/1.4/2.7;    < modulating frequency (*p4 or fixed if > 9.99)
p10 nu 7. / .1 / .5/8.;       < Index at beginning
p11 nu 2. / .4 /1.6/3.5;      < Index after p6
p12 nu .8 / 4.5 / .5/ .8;     < Index before p7
end;
--------------------------------------------------------------

What you are looking at here is the core of the ECMC Csound Library instrument fmod, and one of its Library example score files, fmod1. Only a few items, including subroutine control signals for "attack hardness" and a microtonal tuning option, have been stripped from the code. Note, in this example, that we can change the value of any score p-field except p2 (note start time) within an orchestra file, as in the following conditional evaluation of p8:

p8 = (p8 > 9.99? p8 : p8*p5) ; p8 can be an absolute value or * p5

By this point some of you probably have had enough of FM, thank you very much. But for those who would like to try some extensions to "classical FM" procedures, Appendix 2 provides examples of multiple carrier and multiple modulator algorithms, through which one can create more complex timbres that, unlike simple FM sounds, don't always sound as though they have a sinus infection.

3.3 if...else constructions

if ... else constructions are logical operations within an orchestra file that enable the algorithm to evaluate a variable or expression and, based upon this evaluation or comparison, to perform one of two or more operations. One might liken such decision-making operations to forks in a path. The principal types of logical operations available within Csound are conditional statements and program control (goto) statements.

3.3.1 Conditional Statements

[ See the discussion of CONDITIONAL VALUES in the Csound reference manual ]

Our instruments thus far have assumed that the frequency argument to oscillators will be given in the score either in cps or in pch format. Suppose, however, that we would like to be able to use both of these types of frequency input within an instrument, sometimes feeding it cps, but at other times a pch value. We could, of course, create two nearly identical copies of our instrument algorithm, one accepting cps input, the other pch. But this would clutter our directory with needless extra files. Besides, what if we want to use both cps and pch inputs within a single score?

The solution is to build a little "intelligence" into our algorithm, so that it can evaluate the input data we provide and make suitable decisions on how to interpret and operate upon this data. Conditional values enable instruments to make such decisions. The first example in the Csound manual :

(a > b ? v1 : v2)

is read by Csound as follows :

Is "a" greater than "b"? If so, return (use) value 1; otherwise, return value 2.

In ex3-5 we included the following conditional variable within our algorithm to enable our instrument to distinguish between cps and pch data:

               (a    >   b      ?   v1   :  v2)
      ipitch = (p4   >   13.0   ?   p4   :  cpspch(p4)  )
      asound oscili kenv , ipitch , 100

The highest pitch class C on the piano (4186 hertz) is specified as 12.00 in pch notation. It is highly unlikely that we ever would want to call for a pitch more than one octave higher than this note (13.00 in pch notation, or 8372 hertz). Likewise, we cannot hear fundamental frequencies below about 20 hertz, so it is uncommon to call for a fundamental frequency below 13.0 hertz.[3]

[3] However, using subaudio fundamentals with rich waveforms (containing higher partials that are in the audible range), can produce many intriguing types of pulsating, throbbing and motoristic sounds. (See ex4-4.)

In the example above, therefore, we define the variable ipitch as follows :

If p4 is greater than 13.00, assume that it is already in cps format, and assign this value to the variable ipitch. However, if p4 is less than 13.0, assume that it is in pch format. Convert this p4 value to cps, and assign the resulting cps value to ipitch.

Note that the whole evaluation operation to the right of the "=" (assignment) sign must be enclosed in parentheses. The parentheses tell the compiler to do nothing until the entire expression within the parentheses has been evaluated.
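The same decision can be modeled in Python (a sketch only: cpspch is approximated from the pch definition, octave.pitch-class with 8.00 = middle C and C0 near 1.02197 hz; the helper names are ours, not Csound's internals):

```python
# Sketch of the cps/pch decision from ex3-5. cpspch is approximated from
# the pch definition (octave.pitch-class, 8.00 = middle C).
def cpspch(pch):
    octave, pc = divmod(pch, 1.0)               # e.g. 8.09 -> octave 8, pc .09
    return 1.0219714 * 2 ** (octave + (pc * 100) / 12.0)

def to_cps(p4):
    # values above 13.0 are assumed to be in cps already
    return p4 if p4 > 13.0 else cpspch(p4)
```

So a score value of 440. passes through unchanged, while 8.09 (A above middle C in pch notation) converts to roughly the same 440 hz.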

In addition to the > ("greater than") sign, the other comparison symbols that can be used in such evaluations are :

               <       less than
               >=      greater than or equal to
               <=      less than or equal to
               ==      equal to
               !=      not equal to

Here's another example :

irise = (p6 > 0 ? p6 : abs(p6) * p3)
idecay = (p7 > 0 ? p7 : abs(p7) * p3)
kenv expseg 1, irise , p5 , p3 - (irise + idecay) , .5 * p5 , idecay , 1
a1 oscili kenv , ipitch , 1

Here we tell the compiler to evaluate the p6 (rise time) and the p7 (decay time) score values. If p6 is positive (greater than 0), use this value for attack time. If p6 is negative (less than 0), however, use the absolute value of p6 (ignore the minus sign) as a multiplier for (or decimal percentage of) p3. Thus, a p6 value of -.33 would return a value of .33 * p3, or an attack time lasting one-third the total duration of the note. If our score provides a p7 value of -.5, the decay would last one-half the total duration of the note. It also would be possible to mix these two types of values in our score:

p6 nu .25*3 / -.4 / .1 / -.5;
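The rise/decay evaluation above can be paraphrased in Python (a sketch for illustration only; seg_time is a name of our own invention):

```python
def seg_time(p, p3):
    """Mimic: (p > 0 ? p : abs(p) * p3)
    Positive score values are times in seconds; negative values
    are decimal percentages of the total note duration p3."""
    return p if p > 0 else abs(p) * p3

p3 = 3.0
print(seg_time(0.25, p3))            # 0.25 -- used directly as seconds
print(round(seg_time(-0.33, p3), 2)) # 0.99 -- one-third of the 3-second note
```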

3.3.2. goto statements

[ See the discussion of PROGRAM CONTROL STATEMENTS in the Csound reference manual ]

There is another way to build conditional (alternative) operations into an instrument - goto statements. Experienced programmers of structured languages such as C and Pascal may recoil in distaste at programs that bounce around like a tennis ball. Given the sometimes rather archaic syntax provided by Csound, however, such jumps can provide a serviceable, if ugly, means to implement if...else decisions and program forks. Consider the following group of statements, which define envelope values based on whether a note is "long" or "short."

   1  if p3 <= .5  goto short
   2    kenv  expseg  1,.15,12000,.10,10000,(p3-.5),6000,.25,1
   3    goto audio
   4  short:
   5    kenv  expseg 1,(.2*p3),12000,(.1*p3),10000,(.5*p3),6000,(.2*p3),1
   6  audio:  asound  oscili kenv, ipitch , 1

Here, two alternative envelopes are provided: one for notes longer than .5 seconds, the other for notes with durations shorter than or equal to .5 second.

Line 1 is interpreted as follows : If p3 is less than or equal to .5, skip ahead in the code until the label short is found. Lines 2 and 3 are ignored in this case. The envelope created on line 5 will have exponential rise and fall segments based on percentages of the total note duration, so it will work no matter how short a note might be.

By contrast, for notes longer than .5 seconds, lines 1,2,3 and 6 will be executed. Here, the envelope is generated on line 2. Line 5 must now be skipped, else it will overwrite the kenv signal already produced on line 2. Hence, the jump statement on line 3.

Note that the destination labels short: and audio: (on lines 4 and 6) must end with a colon, but this colon is not included in the goto statements on lines 1 and 3.
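The branch logic can be paraphrased in Python to verify one useful property of this design: in either branch, the envelope segment times sum to the full note duration p3 (a sketch only; kenv_segments is a name of our own invention):

```python
def kenv_segments(p3):
    """Paraphrase the goto branch above: proportional segment times
    for notes <= .5 seconds, fixed attack/decay times for longer notes."""
    if p3 <= 0.5:                               # "goto short"
        return [0.2 * p3, 0.1 * p3, 0.5 * p3, 0.2 * p3]
    return [0.15, 0.10, p3 - 0.5, 0.25]         # the "long" envelope

# In either branch the segment times sum to the note duration:
print(round(sum(kenv_segments(2.0)), 6))   # 2.0
print(round(sum(kenv_segments(0.3)), 6))   # 0.3
```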

There are four types of goto statements :

1) igoto : The jump in the program is made only during the initialization pass, when score values are filled in and i-rate variables are computed. igoto jumps are not made during "performance" passes (during the computation of samples). In the example above, if we had mistakenly used igoto instead of goto on lines 1 and 3, the jumps would have been ignored during sample passes, and we would always get the envelope on line 5.
2) kgoto : Program jumps are not made during the initialization pass, but are performed during every sample pass.
3) goto : Program jumps are observed both on the initialization pass, and on every sample pass.
4) tgoto : Used only in connection with "tied" notes in a Csound score. This is a more advanced technique beyond the scope of this tutorial.

Thus, a program control evaluation and jump may be made

1) only during the initialization pass (igoto)
2) only during performance passes (kgoto) or else
3) on all passes (goto)

Here's an alternative coding for the example above that uses an if ... else evaluation to determine whether and where jumps in the code are to be made. The result is identical to the previous example :

         if p3 <= .5 igoto short
                 i3 = .5
                 i4 = .1
                 i5 = p3 - .5
                 i6 = .25
                 igoto env
         short:  i3 = .2*p3
                 i4 = .1*p3
                 i5 = .5*p3
                 i6 = .2*p3
         env: kenv expseg 1, i3, 12000, i4, 10000, i5, 6000, i6, 1

Actually, this coding is much more efficient than, and thus preferable to, the earlier example. The reason is computation time. In the earlier example, the if...else statement on line 1 ("is p3 <= .5?") must be evaluated on every control pass. If the k-rate is 2205, and a note lasts 5 seconds, this decision must be made 11025 times, and the "answer" will always be the same! In the second example, the decision need only be made once.

3.3.3. Scaling Values By Register

When we need to specify more than two options, multiple program control ("if...else") statements become necessary. Here's an example, based on a common problem. Low pitched tones generally require more amplitude than "middle range" pitches to sound "equally loud." However, very high tones, above 1 kHz or so, also may require some boost. Thus, if we specify a constant p5 amplitude of 10000 for several notes, very low and, perhaps, very high tones are apt to sound "softer." We don't want to have to consider this every time we create a new score for our instrument. We'd like the instrument to make amplitude adjustments based on pitch, relieving us of this drudgery. Here is one way in which this could be accomplished :

  1  ipitch = (p4 < 13 ? cpspch(p4) : p4)
  2     i2 = octcps(ipitch); middle c  =8.0, c5 = 9.0, c3 = 7.0, etc.
  3     i3 = (18 - i2)*.1  ; scalar:  middle c = 1.0, c5 = .9 , c3 = 1.1, etc.
  4  if i2 > 10.0 igoto veryhi
  5  if i2 < 6.0 igoto verylo
  6  iscale = (i2 < 8.0 ? p5*i3 : p5); scale the amplitude value for 
                                     ; notes between c2 & c6
  7  igoto ready
  8  veryhi:
  9    iscale = p5/i3  ; boost amplitude progressively for highest tones
  10   igoto ready
  11 verylo:
  12   iscale = i3*i3*p5  ; extra amplitude boost for very low notes
  13 ready: kenv expon 1 , p6, iscale ,p3-p6, 1; attack & decay envelope
  14 a1 oscili kenv, ipitch, 100

On line 2 we declare a variable i2, which converts the frequency of the note (ipitch) to octave decimal notation. Line 3 then creates another variable, i3, which will act as a scalar. The lower the pitch, the higher the value of i3. If we were to multiply every p5 (amplitude) score value by i3, like this :

iscale = i3 * p5

then the lower the note, the greater the value we would get for variable iscale, which we could use for our final, scaled peak amplitude value. This might work reasonably well for lower notes, but likely will not work very well for very high notes, where we want to increase the amplitude somewhat. Furthermore, we might well find that for pitches in the lowest register of the piano, we need more gain than the scalar iscale provides. This amplitude scaling problem is turning out to be more complex than we at first envisioned!

The example above attempts to solve all of these problems.

1) Line 6 : Notes between c4 and c2 receive progressively larger scaled amplitudes, while notes between middle C and two octaves above it receive the straight p5 amplitude value, with no alteration.
2) Line 9 : Notes above c6 are boosted by using our i3 scalar as a DIVISOR rather than multiplier.
3) Line 12 : For contrabass tones we double the boost provided by our i3 scalar.
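The whole scaling routine can be mimicked in Python (a sketch for illustration only; octcps below follows the standard Csound convention that middle C, 261.626 hertz, equals 8.0):

```python
import math

def octcps(cps):
    """Convert cps to octave decimal notation (261.626 Hz -> 8.0)."""
    return math.log2(cps / 261.626) + 8.0

def iscale(cps, p5):
    """Mimic the register-based amplitude scaling branches above."""
    i2 = octcps(cps)
    i3 = (18 - i2) * 0.1        # scalar: 1.0 at middle c, 1.1 at c3, etc.
    if i2 > 10.0:               # veryhi: boost by dividing by the scalar
        return p5 / i3
    if i2 < 6.0:                # verylo: extra boost for very low notes
        return i3 * i3 * p5
    if i2 < 8.0:                # c2..c4: progressively larger amplitudes
        return p5 * i3
    return p5                   # c4..c6: straight p5, no alteration

print(round(iscale(261.626, 10000)))  # 10000 -- middle c, unchanged
print(round(iscale(130.813, 10000)))  # 11000 -- c3, boosted by i3 = 1.1
```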

This may seem like a lot of work to solve a single problem in one instrument. The amount of added Csound computation time, however, is trivial, since all of the if...igoto evaluations are made only once, during the initialization pass. If our amplitude scaling routine works well, we can create any number of scores for the instrument and never have to worry again about balance considerations for notes in different registers. And, we can copy this amplitude scaling module of code into other instrument algorithms.

Timbral scaling is often even more important than amplitude scaling in achieving a "well-modulated" instrument. In general, the lower the pitch, the greater should be the strength of the higher partials. This acoustic principle holds true for most orchestral instruments. A flute, for example, produces somewhere between eight and ten significant partials for notes around middle C, but only about four significant partials for notes a couple of octaves higher.

With some instruments, scaling by register of other parameters, such as vibrato or tremolo depth, is also helpful. One reason that some synthesizer and sampler "patches" (timbres) work well only within a restricted portion of the keyboard range (say, over an octave or two), and sound progressively more artificial, flaccid or irritating as one approaches the keyboard extremes, is that constants are used for such parameters. Since the architecture of the synthesizer is fixed - decided by engineers in Japan, Korea or California, then etched onto silicon chips - there is no way to make these values dependent on other values.

3.4 Reading in Soundfiles with tablei, phasor and gen1

The instrument algorithm in ex3-6 employs conditional values and program control jumps. The algorithm is more complicated than most of those we have presented so far, so take a deep, cleansing breath, and if the going gets tough take a break for a glass of fine French wine (or perhaps a Hershey bar). If you understand the overall logic of this algorithm and its score p-fields you will be able to use it, or to use it as a model for an algorithm of your own design, even if you do not understand the full inner workings of each line of code.

This algorithm is designed to read in monophonic soundfiles (stereo input soundfiles also could be used, but all stereo channel separation would be lost) and to provide us with the following processing (sound modification) options:

The algorithm also spares us the necessity of figuring out an accurate output duration, a chore that could become irksome when pitch transpositions are applied and only a portion of an input soundfile is used. We can supply the algorithm with dummy p3 durations in our score file. Any p3 value other than 0 will do, because it will be changed by the instrument, which will calculate the correct output duration based upon the pitch transposition and the points within the input soundfile at which we begin and end reading samples.

The key players within this instrument are unit generators tablei and phasor. tablei reads in the values from a function table loaded into RAM, and, like oscili, interpolates between consecutive input sample values to produce better audio signal resolution. phasor provides a moving pointer to locations within this table. Its output tells tablei which value within the table to use in calculating each output sample. To read the samples from an input soundfile into this RAM function table, we employ a call to function generating subroutine GEN01 (which is designed for exactly this purpose) within our score file. In ex3-6 we will employ the beloved sflib/x soundfile voicetest, a stirring spoken recitation of the immortal text:
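The division of labor between phasor and tablei can be sketched in Python (an illustration only, with a toy four-value table; real gen01 tables hold hundreds of thousands of samples):

```python
def tablei(index, table):
    """Like Csound's tablei: read a table with linear interpolation
    between adjacent values."""
    i = int(index)
    frac = index - i
    nxt = table[(i + 1) % len(table)]     # wrap at the table end
    return table[i] * (1 - frac) + nxt * frac

def phasor(freq, sr, nsamples):
    """Like Csound's phasor: a repeating 0-to-1 ramp at freq Hz."""
    phase = 0.0
    for _ in range(nsamples):
        yield phase
        phase = (phase + freq / sr) % 1.0

# phasor sweeps 0-1; scaling by the table length turns it into an index:
table = [0.0, 10.0, 20.0, 30.0]
out = [tablei(p * len(table), table) for p in phasor(1.0, 4.0, 4)]
print(out)  # [0.0, 10.0, 20.0, 30.0]
```

Reading the phasor faster or slower than sr/ilength transposes the pitch, which is exactly how p9 works in the instrument below.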


   "The only significance of this soundfile is as it is post-processed by various 
                                       .885                1.535                     2.46   
   orchestra Library instruments, so that it becomes more interesting."
                                                                        6.263  6.49   
The timings at which certain syllables occur within the input soundfile are:
     of        .885
     file     1.535
     post     2.46
     comes    6.263
     more     6.49
Compare these with the p7 and p8 values in the score for ex3-6.

On the ECMC systems the easiest way to create a gen1 function definition of this soundfile for inclusion within a score11 input file is with the local script

        mksffuncs     ("make soundfile function definitions").

Typing:     mksffuncs   voicetest
will return the gen1 function definition statement below, which we have copied into the beginning of our score11 input file:
 (f#  time     size  gen01      filcod                   skiptime format channel) 
* f1 0 524288 -1 "/sflib/x/voicetest" 0     0     0 ; < dur = 7.27

The -1 (minus 1) call to gen1 in the fourth argument above tells gen01 not to normalize the samples when loading them into the table. Had we placed a 1 instead of -1 in this argument, gen01 would have rescaled the sample values so that the highest input sample value in the table would have been maxamp (floating point 1. or integer 32767). This would require some processing time. If our input soundfile already has a fairly high peak value, there is no need for us to normalize the soundfile every time we run a Csound compile job with this score.

(Note: gen01 also is used to create function definitions for input soundfiles read by Csound unit generator loscil.)
(Note also that the largest permitted table size, 16,777,216, will hold a 44.1k mono soundfile of 380.436 seconds, or about 6 minutes and 20 seconds duration.)

ex3-6 reads in fragments from voicetest four times, sometimes forwards and sometimes backwards, with various pitch transpositions. Our orchestra file begins with comments that explain the purpose of each score p-field. We have not numbered these initial comments or the header, but have included line numbers for the instrument body for quick reference in the following discussion.


;  ########################################################
;  soundfile ex3-6   :  reading in soundfiles with tablei  
;  #######################################################
; Orchestra file used to create this soundfile example:
-----------------------------------------------------------
sr=44100
kr=2205
ksmps=20
nchnls=1


 ; p4 : gen1 function number
 ; p5 : amplitude  {.001 - 10. = multiplier ; 10.1 - 32767 = new raw amplitude}
 ; p6 = length of input soundfile, in seconds or samples
 ; p7 : time in input soundfile to BEGIN reading samples
 ; p8 : time in input soundfile to END reading samples
 ; p9 : pitch multiplier
 ; p10 : stereo pan location {for stereo only}

1   instr 1
2      ;  init values : --------------
3   isound   = p4  ; number of gen1 function table of source soundfile
4   iamp    =  (p5 == 0 ? 1. : p5) ; amplitude multiplier; 0 = no attenuation or boost
5   ilength = (p6 < 400 ? p6 * sr : p6) ; length of input soundfile, in seconds or samples

6   ; p9 specifies pitch transposition ratio of output to input pitch levels
7   itransp  = (p9 == 0 ? 1. : p9)

8   iphspeed  = sr/ilength  ; phasor speed to read soundfile at original pitch
9   isfdur   = ilength/sr ; duration in seconds of input soundfile

10   ; p7 & p8:
11   ; if positive, indicates start or end time in seconds
12   ; if negative between -.001 and -1., indicates start or end
13   ; time as % of total duration of input soundfile

14  istart = (p7 < 0 ? abs(p7) *isfdur : p7) ; time in seconds to begin
15                  ; reading in samples from the input soundfile
16  iend = (p8 < 0 ? abs(p8) *isfdur : p8) ; time in seconds to end
17                  ; reading in samples from the input soundfile

18  ioutdur = abs(iend - istart) / itransp ; compute actual output duration
19  p3 = ioutdur ; change score p3 value to computed output duration for note
20  print istart, iend, itransp, ioutdur

21  if iend < istart goto backwards
22  ; --- read soundfile FORWARD:
23       index = istart/isfdur  ; beginning index into gen 1 table
24       apointer  phasor   iphspeed*itransp   ;transpose pitch by p9
25       asound  tablei   (index + apointer)*ilength, isound  ;read the table at current phasor index 
26            goto gotsound
27  backwards: ; --- read soundfile BACKWARDS beginning at p8 time
28          index = iend/isfdur  ; beginning index into gen 1 table
29          ibacklength = p3 * itransp * sr
30          ibackspeed = sr/ibacklength
31          apointer  phasor  ibackspeed * itransp
32          apointer = 1. - apointer
33          iratio = ( ibacklength/ilength)
34          apointer = apointer * iratio
35          asound   tablei  (index + apointer) *ilength, isound 

36  gotsound:  ; choose mono or stereo out
37       ; mono out:
38  out iamp * asound
39       ; stereo out
40  ;    outs sqrt(p10) * asound, sqrt(1. - p10) * asound
41  endin

-----------------------------------------------------------
< score11 file used to create ECMC Csound Tutorial example ex3-6:
* f1 0 524288 -1 "/sflib/x/voicetest" 0 0 0 ; < dur = 7.27

i1  0 0 4;
p3  nu 2/3/4;
du  301.;
   < p4 = gen1 function number 
p4  1;

   < p5 = output amplitude ;default 0 = same amplitude as input soundfile
p5  nu .15/.4/.65/.9;

   < p6 = length of input soundfile, in seconds or in samples
p6 nu  7.269/320574;  < 7.269 seconds, 320574 samples
   < p7 = skip time into gen1 func : if positive p7  = skip time in seconds
      < if negative between -.001 and -1. p7 = % of soundfile skipped
  < p7 = stime in input soundfile to START reading samples
  < p8 = stime in input soundfile to END reading samples
  < if negative between -.001 and -1.p7 or p8  = % of total soundfile duration
p7  nu 0/1.535/2.46/-1; 
p8  nu .885/0/6.26/2.46; 

   < p9 = pitch multiplier {default 0 = no pitch change}
p9  nu 1/.9/1.122/.84;  
   < for stereo output only p10 = pan location {1. = hard left, 0 = hard right}
< p10  
end; 

p4 in our score provides the number of the function table with the input sound samples to be read by tablei. This variable becomes the init variable isound on line 3 of the instrument. p5 provides an amplitude multiplier (adjustment) for the input samples, primarily so that we can achieve the desired balance between two or more input soundfiles or output "notes."

In order to calculate the actual output duration and the phasor speed the instrument algorithm needs to know the duration of the complete input soundfile, which must be provided in p6, expressed either in seconds or else, if we prefer, in number of samples. On line 5 of the instrument code the algorithm evaluates our p6 argument:

  ilength = (p6 < 400 ? p6 * sr : p6) ; length of input soundfile, in seconds or samples
If p6 is less than 400, it is presumed to be the input duration in seconds, otherwise the input duration in samples. (Why use 400? Because 380 seconds is the longest possible table duration that can be created by gen01, and it is highly unlikely that we would ever want to read in only 400 samples from a soundfile.) In our score file, for didactic purposes, we have alternately provided the input duration of voicetest in seconds and in samples, just to show that these two values produce an identical result.
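The p6 evaluation amounts to this one-liner (a Python sketch; SR stands in for the orchestra header's sr of 44100):

```python
SR = 44100

def ilength(p6):
    """Mimic: ilength = (p6 < 400 ? p6 * sr : p6)
    Values under 400 are read as seconds and converted to samples;
    larger values are taken to be a sample count already."""
    return p6 * SR if p6 < 400 else p6

print(ilength(2.0))     # 88200.0 -- 2 seconds, converted to samples
print(ilength(88200))   # 88200   -- already a sample count, unchanged
```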

p7 and p8, which respectively become the init variables istart and iend on lines 14 and 16 in the instrument, specify the times within the input soundfile to start reading in samples (p7) at the beginning of each output "note," and the time within the input soundfile to end reading in the samples (p8). Positive values specify these start and end read times in seconds, while negative values between -.001 and -1. specify these times as percentages of the total duration of the input soundfile. For example, if we wanted to begin reading in samples exactly 25 % of the way through the input soundfile, and stop the reading of these samples exactly 82 % of the way through the input soundfile, we could use these arguments:

p7 -.25;
p8 -.82;
A p7 or p8 value of 0 specifies the beginning of the input soundfile, while a value of -1 specifies the end of the input table samples. If p8 (iend) is less than p7 (istart), the input samples will be read backwards, from the p7 point to the p8 point within the input soundfile.
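The start/end evaluation and the output-duration calculation (instrument lines 14-18) can be mimicked in Python (a sketch; the function names are ours):

```python
def read_points(p7, p8, isfdur):
    """Negative p7/p8 values (-.001 to -1.) are fractions of the
    input soundfile's duration isfdur; positive values are seconds."""
    istart = abs(p7) * isfdur if p7 < 0 else p7
    iend = abs(p8) * isfdur if p8 < 0 else p8
    return istart, iend

def outdur(istart, iend, itransp):
    """Mimic: ioutdur = abs(iend - istart) / itransp"""
    return abs(iend - istart) / itransp

# Read from 25% to 82% of a 7.27-second file, transposed up an octave:
istart, iend = read_points(-0.25, -0.82, 7.27)
print(round(istart, 4), round(iend, 4))     # 1.8175 5.9614
print(round(outdur(istart, iend, 2.0), 4))  # half the untransposed span
```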

The p7 and p8 values provided in our score:

p7 nu 0 / 1.535 / 2.46 / -1;
p8 nu .885 / 0 / 6.26 / 2.46;
produce the following results:

   note 1 : samples read forward, from the beginning of the file to .885 seconds
   note 2 : samples read backwards, from 1.535 seconds back to the beginning
   note 3 : samples read forward, from 2.46 to 6.26 seconds
   note 4 : samples read backwards, from the end of the file (-1) back to 2.46 seconds

p9 in our score specifies a pitch transposition ratio. In this case:

p9 nu 1 / .9 / 1.122 / .84;
the first note will not be transposed, the second note will be shifted down a major second (.9), the third note will be transposed up a major second (1.122) and the final note will be shifted down a minor third (.84).
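These multipliers are rounded equal-tempered interval ratios; the ratio for n semitones is 2 ** (n/12), so the score's 1.122 and .84 are close roundings, while .9 is a looser rounding of the exact .8909. A quick Python check:

```python
# Equal-tempered interval ratios: 2 ** (semitones / 12)
up_major_2nd   = 2 ** (2 / 12)    # ~1.1225 -- the score's 1.122
down_major_2nd = 2 ** (-2 / 12)   # ~0.8909 -- approximated as .9
down_minor_3rd = 2 ** (-3 / 12)   # ~0.8409 -- approximated as .84

print(round(up_major_2nd, 3))     # 1.122
print(round(down_minor_3rd, 3))   # 0.841
```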

On lines 37-40 of our instrument we have provided alternative mono and stereo outputs. To change this instrument to stereo, we need merely change the nchnls argument in the header from 1 to 2, comment out line 38 with a semicolon, remove the semicolon comment from line 40, and then remove the < comment from p10 in our score file and fill in the p10 pan values we desire.
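The commented-out outs line uses square-root panning, a common equal-power design: since sqrt(p10) squared plus sqrt(1 - p10) squared always equals 1, the total output power stays constant as a sound moves across the stereo field. A Python check (sketch only):

```python
import math

def pan(sample, p10):
    """Mimic: outs sqrt(p10)*asound, sqrt(1.-p10)*asound
    p10 = 1. is hard left, 0 is hard right, as in the score comments."""
    return math.sqrt(p10) * sample, math.sqrt(1.0 - p10) * sample

left, right = pan(1.0, 0.5)                 # centered
print(round(left ** 2 + right ** 2, 6))     # 1.0 -- power is constant
```

A simple linear crossfade (p10 and 1 - p10 without the square roots) would instead dip in loudness at the center position.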


Assignment

From this point on your work with Csound should be divided into two tracks:

1) Create short orchestra and companion score files that try out the major new resources discussed within each chapter of this tutorial. For this chapter you should explore:

2) You should also begin more concentrated work on two or three more complex instruments, which will be useful to you in your compositional work in the studio. Step by step, incorporate additional features into these algorithms to make them more powerful and flexible, and incorporate refinements discussed in subsequent chapters of this tutorial.

Tip: When you want to add a new feature to an instrument or orchestra, make a copy of the orchestra file, and edit the copy. Always keep the previous (working) version of the orchestra file on hand, in case you don't like the results of your changes.


Eastman Csound Tutorial: End of Chapter 3

NEXT CHAPTER (Chapter 4) --   Table of Contents
CHAPTER 1  --  CHAPTER 2  --  CHAPTER 3  --   CHAPTER 4  --  CHAPTER 5  --  CHAPTER 6  --  APPENDIX 1  --  APPENDIX 2