Project 7 will be an ambitious computer music composition that you create by combining techniques we have explored during the semester. Your goal is to create a composition that is technically ambitious (combining at least four major techniques in Nyquist), and is a compelling listening experience (visit Jesse Stiles’ Spotify playlist for inspiration). As always, you may optionally use DAWs such as FL Studio, Ableton, Logic, etc. to mix, edit, master, etc. — but we expect you to focus your work on Nyquist programming techniques covered in the course. Your piece may, of course, be created using Nyquist alone.
Your composition will be presented at one of the May 7 concerts via Zoom. Unlike the Media Lab which has support for up to 8 channels of surround-sound audio and video projection, Zoom only supports stereo. So please limit your piece to 2-channel audio for this semester.
Assessment will be based on:
Submit your audio file as andrewid_p7_comp.wav. (For example, acarnegi_p7_comp.wav.) It must adhere to these requirements:
If you compose a longer piece and feel that cutting it down to 120 seconds significantly reduces its quality, you may also submit an extended version, which will be taken into consideration in grading.
In andrewid_p7_slide.pdf, submit a slide, in the form of a one-page landscape PDF, to be projected while your piece is played. You should include an image in the slide that relates to your music, and include the title of your piece, your name, and any additional text you wish. The image can come from anywhere and does not need to be original work. Your complete program notes will be printed in the program, so you do not need a lot of text on the slide. Hopefully, the slide will add an engaging visual aspect to the concert, so please strive for an aesthetic, artistic presentation.
If you wish to create “live visuals” to accompany your piece rather than a slide, that would be a great addition. For example, you can have a slide sequence that auto-advances while we play your composition. If you do this, please make a note for us so we can test your slides before the concert.
Include the answers to the following questions in andrewid_p7_answers.txt:
You should also submit the following in andrewid_p7_notes.txt:
Please create a zip file named andrewid_p7.zip containing the following files (at the top level, not inside an extra directory except for the composition source files):
andrewid_p7_comp.wav
andrewid_p7_source
andrewid_p7_slide.pdf
andrewid_p7_answers.txt
andrewid_p7_notes.txt
In this project, you will explore physical models and pattern generators. Your main composition task is to generate music using physical model functions built into Nyquist. One problem you will encounter immediately is that just as a violin does not play itself, physical models require careful control to create musical sounds. The warm-up exercises are designed to explore physical models and their control.
Using clarinet-freq (accept no substitutes!), create a sequence of clarinet tones separated by 1s of silence as follows:

F2() should play a 1-second tone at pitch C4 with no vibrato and a breath (roughly the amplitude control) envelope that sounds natural to you.

F3() should play a 2-second tone at pitch D4 with breath envelope vibrato. (See Note A on vibrato at the end of this assignment.) Design a vibrato that you think sounds natural. You will find the clarinet model has a fairly sudden threshold below which there is no oscillation and above which the tone is fairly strong. If breath vibrato causes the breath envelope to cross this threshold, the clarinet tone may pulse on and off. Control vibrato so that this does not happen. (We hope it’s obvious by this time in the course that if you need the breath envelope to oscillate in a vibrato-like manner, you should add to your breath envelope a low-frequency sine, e.g. with lfo, scaled appropriately, and perhaps multiplied by another envelope if you do not want the full lfo amplitude from the beginning. Wikipedia has a fine discussion and sound examples if you need to learn about vibrato, but note that here we are “vibrating” the breath and not using frequency vibrato.)

F4() should play a 5-second tone at pitch E4 that has a slow crescendo. The sound should start with a noisy, breathy-sounding attack that barely has any pitch. Within 1 second, the pitch should be clear. The crescendo should obviously continue until at least 4 seconds. You will find the clarinet model is very sensitive to the breath envelope, and there is a very narrow range of envelope values over which a crescendo takes place. You will find that only a small amount of crescendo is possible with this model. You will need to determine good envelope values experimentally. (See Note B on RMS at the end of this assignment for more tools.)

F5() should play the 8-second sequence (F4, G4, A4, rest, F4, G4, A4, rest) where elements have IOIs of 1 second. The first 3 “notes” should be made with one call to clarinet-freq(), using the frequency control to change pitch. (Hint: to get from F4 to G4, the frequency envelope should step up by step-to-hz(G4) - step-to-hz(F4). Hint 2: For the best sound, the frequency control transitions should take around 30ms rather than jumping instantaneously from one pitch to the next.) The second 3 notes should be separate calls to clarinet-freq(). Try to get a continuous sound similar to the first 3 notes by slightly overlapping F4 to G4 to A4. (Hint: Overlapping is easy if you use a score or write expressions where you can explicitly control start times and durations, as opposed to using the seq() construct. If you want to use seq(), you can still get overlap if you set the logical stop time of sounds to be 1s but make the actual duration longer. See set-logical-stop in the Nyquist Reference Manual.)

F6() should run one instance of score-gen with pattern generators to create a score named p6score. The score should have 4 contiguous sections. In each section, 20 pitches are generated by randomly selecting a pitch (score event attribute name pitch:) from the scale C4, D4, E4, F4, G4, A4, B4. These 20 pitches are transposed by -12, 0, or 12, the amount being chosen at random. In other words, in each section there will be 20 pitches from the 3rd, 4th, or 5th octave. Each note should have a duration and ioi of 0.2 seconds, for a total duration of 0.2 * 20 * 4 = 16 seconds. Finally, the piece should never generate 2 consecutive sections in the same octave. (The :max 1 attribute can be used for each item in make-random to prevent a repeated selection. See Note C at the end of this assignment for an example.) F6() should not play p6score. You can use exec score-play(p6score) to play your score and, if you like, define F7() to do this with one mouse click in the IDE. The sound should be incorporated into p6warmup.wav as described in the next paragraph.

Put your code (for all of these tones and sequences) in p6warmup.sal. Concatenate all the sounds, with some silence separating them, in p6warmup.wav.
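As a concrete starting point for these warm-ups, here is a hedged sketch of an F2-style tone. All numeric values are invented placeholders; the clarinet model's breath threshold means you must tune them by ear.

```sal
; Sketch only -- every numeric value is a placeholder to tune by ear.
define function my-breath()
  ; fast attack, steady sustain, release ending at 1 second
  return pwl(0.05, 0.8, 0.9, 0.8, 1.0)

define function f2()
  return clarinet-freq(c4, my-breath(), const(0, 1))  ; no frequency deviation

; For F3-style breath vibrato, one option is to add a scaled low-frequency
; sine to the breath envelope, e.g.:
;   my-breath() + 0.03 * lfo(5) * pwl(0.5, 1, 1.9, 1, 2)
; keeping the sum above the clarinet's oscillation threshold at all times.
```

Then play f2() should produce a one-second C4 tone (this assumes clarinet-freq's three arguments are the pitch step, the breath envelope, and the frequency deviation in Hz).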
Use score-gen with pattern generators to algorithmically compose a piece for physical models. You can use any of the physical models in Nyquist: clarinet, sax, flute, bowed, mandolin, wg-uniform-bar, wg-tuned-bar, wg-glass-harm, wg-tibetan-bowl, modal-bar, and sitar. (Acknowledgment: these unit generators are ported from the Synthesis Tool Kit by Perry Cook and Gary Scavone.)
Your piece should have multiple pattern generators, including nested pattern generators to get variation at more than one time scale. For example, if you have a cycle of pitches, you could add an offset to the cycle on each repetition so that you hear the melodic cycle transposed each period. Alternatively, you might generate small random pitch intervals at the small time scale and have the average pitch slowly drift up or down at a larger time scale. You can use make-window and make-repeat to further manipulate streams of data.
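For instance, here is a hedged sketch of two-level nesting using the make-copier idiom; the pitch values and period lengths are invented:

```sal
; Invented example: a 4-note melodic cycle...
set melody = make-cycle({60 62 64 67})
; ...plus an octave offset that changes once per 4-note period:
set offsets = make-copier(make-cycle({0 12 -12}, for: 1), repeat: 4)
; Summing the two nests the patterns: the same melody repeats,
; transposed to a different octave on each repetition.
set pitpat = make-sum(melody, offsets)
```

Repeated calls to next(pitpat) should then walk the melody through octaves, e.g. 60 62 64 67, then 72 74 76 79, and so on.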
You should also have at least two “voices” as opposed to a single melodic line. Consider panning, reverberation, other effects, and careful mixing to make your composition musical and interesting.
Write a statement about the intention of your composition in p6comp.txt. In p6comp.txt, also describe how you used pattern generators to achieve your results and how you used nested patterns. (Your p6comp.sal should also be commented to make your algorithms understandable.)
Duration should be between 45 and 60 seconds. Hand in the following files:
p6comp.sal – the code.
p6comp.wav – the sound file.
p6comp.txt – a short statement of your intention in the composition.
To get you started on some advanced pattern generation, here are some interesting examples to study:
make-copier(make-cycle({24 36 48 60}, for: 1), repeat: 7)
make-sum(long-term-pattern-generator, short-term-pattern-generator)
The make-line({5 2}) pattern returns 5, 2, 2, 2, …, and make-accumulate sums the series to get 5, 7, 9, 11, …:
make-accumulate(make-line({5 2}))
When all else fails and you really want a specific computation, you can use make-eval() to invoke a function to compute a stream of numbers. In this example, a custom function, myfunc, does some computation and returns a value to incorporate into a pattern stream. mypat uses make-eval to call myfunc.
define function myfunc() return real-random()
set mypat = make-eval({myfunc})
Of course, myfunc() could also access and modify data from another pattern. The simple way to do this is to use a global variable, since make-eval does not have any way to accept parameters and SAL cannot construct closures.
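For example (an invented illustration), myfunc can draw its raw material from a global pattern and transform it:

```sal
; Invented example: a make-eval function reading from a global pattern.
set *base-pat* = make-cycle({60 64 67})

define function my-shifted()
  ; perturb each pitch from the global pattern by a small random detune
  return next(*base-pat*) + 0.1 * real-random()

set shifted-pat = make-eval({my-shifted})
```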
(.wav and .aif files are acceptable)
F2(), F3(), F4(), F5(), F6()
Some might say clarinets should not use vibrato. Eddie Daniels has a nice discussion and demonstration of clarinet vibrato, including breath vibrato, created mainly by variations in air pressure, and frequency or lip vibrato, which varies the fundamental frequency while keeping the amplitude and breath pressure more or less constant. See Vibrato On The Clarinet?! with Eddie Daniels.
The RMS function in Nyquist computes the “root mean square” or average power in a signal. Technically, RMS computes the answer to the following question: If instead of an irregular, oscillating signal, I wanted to substitute a single, constant amplitude value and obtain the same overall power, what value would I use? RMS is a good way to estimate an amplitude envelope from a signal.
RMS takes three parameters: (1) the signal to analyze, (2) the analysis rate, and (3) the window size. By default, the analysis rate is 100 Hz, meaning that RMS computes an amplitude value for every 10ms (1/100 s) of the signal, and the window size defaults to 10ms, meaning that each amplitude value corresponds to the average power in that 10ms region of the input signal.
In “warmup” exercise 3 (function F4), you are asked to make a crescendo. You can plot the resulting envelope of the computed signal using something like: plot rms(my-computed-signal). The clarinet-freq model has a narrow range of amplitude variation (unlike a real clarinet), so you might expect to see an RMS envelope like the following:
A very musical variation of random selection is random selection where you never repeat anything from a list of selections. With make-random, you can give weights, control repetitions, and even force repetitions. See the manual for details. Here is an example of selecting A, B, or C where there are no immediate repetitions of A or B, but C is allowed to repeat:
set abc-pat = make-random({{A :max 1} {B :max 1} C})
loop repeat 20 exec prin1(next(abc-pat)) end
OUTPUT: BACABACBABABCCACACBC
Notice in the output that AA and BB never occur.
This project includes an exercise on sampling, an exercise on spectral processing, and a composition using spectral processing.
Example code and sounds are included in this download zip file for Project 5.
Find a sound to loop from freesound.org, or record a sound, e.g. your singing voice, with Audacity or your favorite audio editor. You should look for a sustained musical tone (anything with pitch and amplitude changes will be harder to loop). Save it as p5src.wav. (Note: You may use AIFF files with extension .aiff any place we ask for .wav files.) Your goal is to find loop start and end points so that you can play a sustained tone using Nyquist’s sampler() function. Create a file named p5sampler.sal for your work.
After making sounds with the sampler function, multiply the result of calling sampler by a suitable envelope, and encapsulate your sampler code in a function named mysampler with at least the keyword parameter pitch: and no positional parameters. Remember that the envelope should not fade in, because that would impose a new onset on your sample, which should already have an onset built in. To get an envelope that starts immediately, use a breakpoint at time 0, e.g., pwl(0, 1, ..., 1).
Make a sound file with two 2-second-long tones (your loop must be much less than 2 seconds long). Your final output, named p5demo.wav, should be generated in Nyquist, using the expression:
seq(mysampler(pitch: c4), mysampler(pitch: f4)) ~ 2
Turn in your source code as p5sampler.sal. This file should only define mysampler and any other functions you wish. Do not automatically run code or write files when the code is loaded.
Hints: Recall that sampler() takes three parameters: pitch, modulation, and sample. pitch works like in osc(). modulation works like in fmosc(). Please use const(0, 1), which means “zero frequency modulation for 1 second.” (You can then stretch everything by 2.) The sample is a list: (sound pitch-step loop-start), where sound is your sample. If you read sound from a file with s-read(), you can use the dur: keyword to control when the sound ends, which is the end time of the loop. pitch-step is the nominal pitch of the sample, e.g., if you simply play the sample as a file and it has the pitch F4, then use F4. loop-start is the time from the beginning of the file (in seconds) where the loop begins. You will have to listen carefully with headphones, and possibly study the waveform visually with Audacity, to find good loop points. Your sound should not have an obvious buzz or clicks due to the loop, nor should it be obviously a loop – it should form a sustained musical note. While in principle you can loop a single cycle of the waveform, in practice you will find it is very difficult to find loop points that sound good. Therefore, it is acceptable to use a much longer sample, perhaps in the range of 0.2 to 0.5s. It may be tempting to use a loop longer than 0.5s, but for this assignment we must hear at least a few loops in the 2s notes you create, so the start time must be less than 0.5s, and the sample duration minus the start time (the loop duration) must also be less than 0.5s.
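Putting these hints together, a hedged skeleton for mysampler might look like the following. The filename, dur: value, loop-start time, nominal pitch, and envelope breakpoints are all placeholders that you must replace with values that fit your own recording.

```sal
; Sketch only -- every numeric value here is a placeholder.
define function mysampler(pitch: c4)
  begin
    with snd = s-read("p5src.wav", dur: 0.45),  ; loop ends at 0.45s
         samp = list(snd, c4, 0.2)  ; nominal pitch C4, loop starts at 0.2s
    ; const(0, 1) means zero frequency modulation lasting 1 second
    return sampler(pitch, const(0, 1), samp) *
           pwl(0, 1, 0.9, 1, 1)  ; full level at time 0 -- no fade-in
  end
```

With a definition along these lines, the required seq(mysampler(pitch: c4), mysampler(pitch: f4)) ~ 2 expression should produce the two 2-second tones.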
In this part, you will be experimenting with spectral processing. Spectral processing means manipulating data in the spectral domain, where sound is represented as a sequence of overlapping grains, and each grain is represented as a weighted sum of sines and cosines of different frequency components (using the short-time discrete Fourier transform and inverse transform for each “grain” or analysis window).
We have provided the spectral-process.sal and spectral-process.lsp files to help you get started. See those files for documentation, and run the examples of spectral processing. Example 4 in spectral-process.sal gives an example of spectral cross-synthesis. The idea here is to multiply the amplitude spectrum of one sound by the complex spectrum of the other. When one sound is a voice, vocal qualities are imparted to the other sound. Your task is to find a voice sample (this could be you) and a non-voice sample and combine them with cross-synthesis. You may use the example 4 code as a starting point, but you should experiment with parameters to get the best effect. In particular, the len parameter controls the FFT size (it should be a power of 2). Modify at least one parameter to achieve an interesting effect. Larger values give more frequency resolution and sonic detail, but smaller values, by failing to resolve individual partials, sometimes work better to represent the overall spectral shape of vowel sounds. Furthermore, it matters which signal is used for phase and which is reduced to only amplitude information. Speech is more intelligible with phase information, but you might prefer less intelligibility. You will find several sound examples on which to impose speech in example 4 (look in there for commented options), but you should find your own sound to modulate. Generally, noisy signals, rich chords, or low, bright tones are best – a simple tone doesn’t have enough frequencies to modify with a complex vocal spectrum. Also, signals that do not change rapidly will be less confusing, e.g., a sustained orchestra chord is better than a piano passage with lots of notes. Turn in your code as p5cross.sal and your two input sounds as p5cross1.wav and p5cross2.wav.
Create between 30 and 60 seconds of music using spectral processing. You can use any techniques you wish, and you may use an audio editor for finishing touches, but your piece should clearly feature spectral processing. You may use cross-synthesis from the previous section, but you may also use other techniques, including spectral inversion, any of the examples in spectral-process.sal, or your own spectral manipulations. While you may use example code, you should strive to find unique sounds and unique parameter settings to avoid sounding like you merely added sounds to existing code and turned the crank. For example, you might combine time-stretching in example 2 with other examples, and you can experiment with FFT sizes and other parameters rather than simply reusing the parameters in the examples. Hint: While experimenting, process small bits of sound, e.g., 5 seconds, until you find some good techniques and parameters. Doing signal processing in SAL outside of unit generators (e.g., spectral processing) is very slow. With longer sounds, remember that after the sound is computed, the Replay button can be used to play the saved sound from a file, so even if the computation is not real-time, you can still go back and replay it without stops and starts. Turn in your code in p5comp.sal and your final output composition as p5comp.wav. In p5comp.txt, describe the spectral processing in your piece and your motivation or inspiration behind your composition. Finally, include your source sounds in p5comp/.
Please hand in a zip file containing the following files (in the top level, not inside an extra directory):
p5src.wav
p5demo.wav
p5sampler.sal
p5cross.sal
p5cross1.wav (input, 3-6s)
p5cross2.wav (input, 3-6s)
p5cross.wav (output, 3-6s)
p5comp.wav
p5comp.sal
p5comp.txt
p5comp/
Due Monday, March 22 by 11:59PM Eastern
Peer Grading Due Monday March 29 by 11:59PM Eastern
IMPORTANT HANDIN NOTE: We are making a small change to project 4, in that we are asking students to include a .wav for each section of part1(), part2(), part3(), and part4() in their submission. This is to make the peer grading process easier for everyone (and to avoid safety issues in running other students’ code directly). Please resubmit if you’ve already submitted, to include the .wavs for part1() through part4().
Granular synthesis is an extremely versatile technique to create sounds closely associated with the world of electronic and computer music.
In this project, you will experiment with some of the possibilities by creating 4 directed examples and a short but open-ended composition.
We will introduce two techniques: Grains from sound files, and synthesized grains.
p4.zip contains everything you need to get started. You might begin by loading the proj4base.sal file, which should process the included sample.wav. See the .sal code for documentation on how it works. As is, proj4base.sal uses file-grain to produce individual grains and make-granulate-score to make a Nyquist score consisting of many grains.

Copy proj4base.sal to proj4.sal, which you will hand in.

Modify proj4.sal to granulate your sound. Note how the code references sample.wav as "./sample.wav". The "./" means “in the current directory”; otherwise, Nyquist will look in the default sound file directory, so be sure to use the "./" prefix to reference your sound file, and put your sound file in the same directory as proj4.sal.

In proj4.sal define the function part1() that will produce a granular synthesis sound (taking grains from your chosen sound file) with the following parameters:
You can use proj4base.sal’s make-granulate-score function, but you might also like to hear a less scrambled time-stretch effect by setting randomness to 0.

play part1() should play about 3s of sound.

The sound behavior makes a sound stretchable and shiftable. Grain is already in an environment with a stretch factor, and the stretch factor has already been applied to compute grain, so if we just wrote sound(grain) or sound(grain) ~ 1.5, the current environment stretch factor would be applied again – not good. Instead, we use “absolute stretch” (~~) to replace the stretch factor in the environment with an absolute value while evaluating stretch(). The result is that the grain is resampled to make it longer or shorter, depending on the value of the stretch factor. Find the expression sound(grain) ~~ (1.0 / speed) in the function file-grain to see how this works in the code.

In proj4.sal
modify make-granulate-score to accept a speed: keyword parameter (default value 1) that passes the speed value through the score to calls on file-grain, so that you can control the pitch shifting of grains.

In proj4.sal define the function part2() that will produce a granular synthesis sound (taking grains from your chosen sound file) with the following parameters:

You can use proj4base.sal’s make-granulate-score function, but you might also like to hear a less scrambled time-stretch effect by setting randomness to 0.

play part2() should play about 3s of sound. The sound should of course be an octave higher (but have the same duration) as in Part 1.

The make-granulate-score function includes a density parameter that gives the probability (from 0 to 1) of appending any given grain to the score as it is constructed.

In proj4.sal define the function part3() that will produce a granular synthesis sound (taking grains from your chosen sound file) with the following parameters:

play part3() should play about 3s of sound. The sound should be about the original pitch of your sound file, but the sound should be stuttering or pulsing irregularly with short grains of sound.

In proj4.sal, modify make-granulate-score to accept another parameter to select synthesized grains. E.g., you can add a boolean parameter sinegrain (default #f) to enable calling sine-grain instead of file-grain. Alternatively, you could provide the name of the grain function to use in a parameter to make-granulate-score, e.g. grainfunc (default quote(file-grain)).

The score constructed by make-granulate-score should be different when synthesized grains are selected. The function sine-grain is already (partially) implemented, and has two keyword parameters: low and high. In make-granulate-score, you can put these parameters into the score, or you can just use the default values.

Complete sine-grain to randomly choose the pitch to synthesize, based on the keyword parameters low and high.

In proj4.sal define the function part4() that will produce a granular synthesis sound using synthesized grains with the following parameters (which are otherwise the same as Part 1):

play part4() should play about 3s of sound. The sound should consist of clean, rapid, bubbly sinusoidal sounds with a diversity of random pitches.

At this point, since you have modified proj4.sal for each part of the project, you should go back and test part1(), part2(), part3(), and part4() to make sure they all still work as expected. Merely loading proj4.sal should not play sounds automatically.
Copy your proj4.sal file to proj4comp.sal so that you do not “break” anything in Parts 1-4.
Using all the tools that you have built, create a 20- to 40-second composition. In your piece, you should at least include:
You may wish to alter your code further to incorporate amplitude and pitch control. Pattern generators are an excellent way to get some control over variations in parameters. You should also experiment with other parameters. In fact, ALL parameters should be considered fair game for experimentation. Shorter and longer grains have a very different sound. Very high stretch factors, very small ioi, and other changes can be very interesting, so please spend some time to find amazing sounds.
You are also free to make many granular synthesis sounds and mix them in Nyquist or in an audio editor such as Audacity.
Create the plain text file proj4comp.txt to explain:
Grading will be based on meeting the technical specifications of the assignment:
part1, part2, part3, and part4 functions in your proj4.sal.
Your proj4.sal code should not immediately compute sounds.
Please hand in a zip file containing the following anonymous files (in the top level, not inside an extra directory). Note that the following must be strictly adhered to for the convenience of machine and human graders (including you as a peer grader):
proj4.sal
proj4comp.sal – Note: Code should be clearly commented in case we want to run it.
proj4comp.wav (or .aiff)
Your own sound file, so that part1(), part2(), and part3() will run. Do not include sample.wav from p4.zip! It should not be needed to run your code, and we do not need another copy from every student, thank you.
proj4comp.txt
Use im instead of imod (edits made below).
In FM synthesis, the depth of modulation (D) and the frequency of modulation (M) together control the number of significant sideband pairs generated, using the following formula: I = D / M, where I is the Index of Modulation (a unitless number). We measure the depth of modulation in Hz: it is the maximum deviation of the carrier frequency. When D = 0, no sidebands are generated (see FM Synthesis (course notes) for more detail).
Create an FM instrument, a function named fminstr, in the file proj3.sal, that takes the following keyword parameters:
pitch: the carrier frequency in steps (a number)
im: a time-varying index of modulation (of type SOUND)
You may use additional parameters to control loudness (vel:), vibrato, C:M ratio, envelope parameters, etc. The stretch factor should allow you to control the duration of the instrument sound. The default behavior should be a tone with a fundamental frequency controlled by pitch and where higher im produces more and stronger harmonics.
Be sure that your instrument uses an envelope to control overall amplitude. If you run play osc(g4) in Nyquist, you will hear an example of what your instrument should not sound like!
An example that plays the instrument is:
play fminstr(pitch: g4, im: const(0, 1)) ~ 2
In this example, the im parameter is the constant zero with a duration of 1, so this is expected to play a boring sine tone at pitch G4.
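To make the parameter plumbing concrete, here is a hedged sketch of one possible fminstr with the C:M ratio fixed at 1:1. The envelope values are invented, and hzosc is used to build the modulator at the carrier frequency; treat every constant as a starting point, not a specification.

```sal
; Sketch only -- C:M fixed at 1:1, envelope values invented.
; im is a SOUND giving the time-varying index of modulation.
define function fminstr(pitch: c4, im: const(0, 1), vel: 100)
  begin
    with hz = step-to-hz(pitch)
    ; deviation D = I * M, so the modulator is scaled by im * hz
    return fmosc(pitch, im * hz * hzosc(hz)) *
           env(0.02, 0.05, 0.1, 1.0, 0.9, 0.8) *
           vel-to-linear(vel)
  end
```

With im: const(0, 1), the deviation D is 0, so a sketch like this should reduce to the enveloped sine tone described above.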
Create a function named part1 with no parameters, in the file proj3.sal. Running this should instantiate a single instance of your fminstr and return the sound, which should last about as long as the stretch factor. So the command play part1() ~ 3 should play about 3s and will demonstrate that you completed Part 1. Your part1 function can pass in and demonstrate optional keyword parameters you have added.
Hint: Be sure to test part1 with different durations (stretch factors).
Demonstrate the instrument by using PWL to create an interesting Index of Modulation. In other words, replace const(0, 1) in the previous example with an interesting, time-varying Index of Modulation. You can use the envelope editor in the Nyquist IDE to draw an envelope if you like, and you can apply scale factors and offsets to the PWL function to fine tune your sonic result.
Keep in mind that if you put in a long PWL envelope (or any other envelope), your fminstr() code will not magically deduce that you want all of the other components to stretch to the duration of the envelope.
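For example (all values invented), a time-varying index contour that rises, dips, and fades might be:

```sal
; Invented index-of-modulation contour: up to 5 at 0.5s, down to 2,
; peak of 6 at 3s, then fading to 0 at 4s.
set my-im = pwl(0.5, 5, 1.5, 2, 3, 6, 4)
```

You could audition it with your own fminstr, e.g. play fminstr(pitch: g4, im: my-im) ~ 4, adjusting the times and values until the brightness changes are obvious.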
Create a function named part2 with no parameters, in the file proj3.sal. Running this should instantiate a single instance of your fminstr with your PWL modulation and return the sound, which should be 3 to 5s long and contain obvious changes in brightness due to a time-varying im. So the command play part2() will demonstrate that you completed Part 2.
For a (hopefully) more interesting composition, we are going to analyze a real sound, extract the time-varying spectral centroid, and use that to control the Index of Modulation of one or more FM sounds.
Read the documentation on spectral centroids in the accompanying files (look in the downloaded directory), and try out the project3-demos.sal that is provided. To summarize, you call spectral-centroid() on an input sound (or filename), and it returns an envelope that tracks the spectral centroid of the input sound.
The idea here is that when the input sound gets brighter, the spectral centroid goes up. If we use the spectral centroid to control the Index of Modulation, an increasing spectral centroid should cause the Index of Modulation to increase, and therefore the FM sound should get brighter. Thus, there should be a connection between the spectral variation of the source sound analyzed with spectral-centroid() and the spectral variation of your synthesized sound.
Your technical task is to make this connection between input sound and output sound. This is a rare case where we are going to suggest assigning the spectral centroid (sound) to a global variable. If you do that, then any note can reference the spectral centroid. For example:
set *sc* = 0.003 * spectral-centroid(...)
play seq(fminstr(pitch: c4, im: *sc*),
fminstr(pitch: c4, im: *sc*) ~ 2)
This plays two notes. The first runs nominally from 0s to 1s, and it will use the first second of the spectral centroid sound to control its Index of Modulation. The second note runs nominally from 1 to 3s (the duration is 2 because of the stretch operator ~), and this note will use the spectral centroid from 1 to 3s. It is important to note that the second note does not begin “reading” the *sc* variable from the beginning. This is consistent with the idea that, in Nyquist, sounds have an absolute start time.
You can make a note “read” *sc* from its beginning by replacing *sc* with cue(*sc*). The cue behavior shifts its sound parameter to start at the time specified in the environment. However, we think it will be more interesting to let the IM evolve throughout your piece and let each note access the current state of the spectral centroid as the piece is playing.

Note that spectral-centroid is measured in Hz and will probably range into the thousands, but reasonable values for the Index of Modulation are probably going to be in the single digits. Is 0.003 the right scale factor? No, it is just a rough guess, and you should adjust it.

PROGRAMMING TASK: Your musical task is to create something interesting, with a duration of 30 to 60 seconds. Store your code for part 3 in proj3comp.sal. We do not want to box you in to a specific procedure, but we are looking for a result that shows interesting spectral variation driven by the spectral centroid of a source sound. Some possibilities include:
Smoothing the control function: lowpass8(*sc*, 10) will remove most frequencies in *sc* above 10 Hz, leaving you with a smoother control function. (See the project3-demo.sal file.)
Adding panning (using the PAN function), or other effects.
Grading will be based on meeting the technical specifications of the assignment:
fminstr and part1 and part2 functions in your proj3.sal.
Your proj3.sal code should not immediately compute spectral centroids or FM tones.
Note that the following must be strictly adhered to for the convenience of machine and human graders (including you as a peer grader):
proj3.sal
proj3comp.sal – Note: Code should be clearly commented in case we want to run it.
proj3part2.wav
proj3comp.wav
proj3input.wav
proj3comp.txt (If you used an audio editor to modify the output of part 3, describe clearly how the sound was modified so we can determine how to relate your part 3 results with your audio results.)
All of these files must be in the top level of the zip file, not in a subfolder within the zip file. Remember to keep your submission anonymous for peer grading.
Envelopes are very important in sound synthesis. Envelopes are the primary way to “shape” sounds from various sound sources so that the sounds do not suddenly pop on and off or blast at a constant level.
You should have learned to use both “pwl” and “env” for envelopes in the online lessons. Hint: look up “pwl” and “env” in the Nyquist Reference Manual and pay attention to their behaviors under different stretch environments.
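For a quick comparison (a sketch you can paste into the IDE; the particular times and levels are arbitrary choices of ours), try stretching each kind of envelope:

```sal
; pwl stretches its breakpoints: under ~ 4, this 0.1s attack becomes 0.4s
play pwl(0.1, 1, 0.9, 1, 1) * osc(c4) ~ 4

; env keeps its attack/decay/release times fixed and stretches only the
; sustain: under ~ 4, the 0.05s attack is still 0.05s
play env(0.05, 0.1, 0.2, 1, 0.8, 0.6) * osc(c4) ~ 4
```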
The following code creates a short score and plays it when you click the “F2” button in the Nyquist IDE (maybe your keyboard F2 key will work too):
variable *ascore* = {{0 1 {note pitch: c2 vel: 50}}
                     {1 0.3 {note pitch: d2 vel: 70}}
                     {1.2 0.3 {note pitch: f2 vel: 90}}
                     {1.4 0.3 {note pitch: g2 vel: 110}}
                     {1.6 0.3 {note pitch: a2 vel: 127}}
                     {1.8 2 {note pitch: g2 vel: 100}}
                     {3.5 2 {note pitch: d2 vel: 30}}
                     {3.7 2 {note pitch: d4 vel: 70}}
                     {3.9 2 {note pitch: d5 vel: 70}}
                     {4.1 2 {note pitch: d6 vel: 50}}
                     {4.9 0.5 {note pitch: g4 vel: 40}}
                     {5.2 2.5 {note pitch: a4 vel: 30}}}

function f2() play timed-seq(*ascore*)

variable *bscore* ; to be initialized from *ascore*

function f3() play timed-seq(*bscore*)
Create a file named instr.sal
, paste this code in, and try it.
Your first task is to define a new instrument, also in the file instr.sal, that multiplies the output of a table-lookup oscillator by an envelope. The instrument should consist of:
- a waveform created with the build-harmonic function and containing at least 8 harmonics,
- a table-lookup oscillator (osc), and
- an envelope.

The instrument should be encapsulated in a function named simple-note that at least has the two keyword parameters seen in *ascore* (pitch: and vel:). The vel: parameter represents velocity, a loudness control based on MIDI key velocity, which ranges from 1 to 127. Convert MIDI key velocity to amplitude using the vel-to-linear function (see the Nyquist Reference Manual). Your simple-note instrument function should satisfy these requirements:
- simple-note(pitch: c2, vel: 100) ~ 2.6 should have a duration of 2.6s,
- its amplitude should be scaled by vel-to-linear(vel), where vel is the value of the vel: keyword parameter,
- its pitch should be controlled by the pitch: keyword parameter, which gives pitch in “steps”, e.g. C4 produces middle-C.

Play *ascore* with your instrument

To show off your instrument, transform *ascore* using one of the score-* functions to use your instrument rather than note. (It is your task to become familiar with the score-* functions and find the one that replaces function names.) Assign the transformed score to *bscore* at the top level of instr.sal; for example, you might have something like the following in your code:
set *bscore* = function-to-transform-a-score(*ascore*, perhaps-other-parameters)
You may add one or more additional functions to instr.sal
as needed.
Now, F3 (as defined above) should play the altered score, and it should use your simple-note
instead of the built-in piano note function note
. The resulting sound should demonstrate your instrument playing different pitches, durations, and velocities.
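As a starting point, a minimal sketch of such an instrument might look like the following. The table contents, envelope parameters, and harmonic amplitudes are illustrative choices of ours, and only four harmonics are shown; your instrument needs at least 8 and must meet the requirements stated above.

```sal
; Sketch: a wavetable with decreasing-amplitude harmonics (extend to
; at least 8), packaged in Nyquist's (sound pitch-step periodic) form
variable *mytable* = sim(0.5 * build-harmonic(1, 2048),
                         0.25 * build-harmonic(2, 2048),
                         0.125 * build-harmonic(3, 2048),
                         0.0625 * build-harmonic(4, 2048))
set *mytable* = list(*mytable*, hz-to-step(1), #t)

function simple-note(pitch: c4, vel: 100)
  ; osc's default duration of 1 stretches with ~, so
  ; simple-note(...) ~ 2.6 lasts 2.6s; env shapes the result
  return osc(pitch, 1, *mytable*) * vel-to-linear(vel) *
         env(0.02, 0.05, 0.1, 1, 0.9, 0.7)
```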
Record a short period of rhythmic tapping sound (<10 sec) and save as tapping.wav
. This can be your own tapping or some other source.
Download the P2.zip
file, which contains code, a test sound, and some documentation.
You will now create a new file named tapcomp.sal
, which will use the tapping.wav
data to create a composition. In tapcomp.sal
, load the given code tap-detect.sal
to detect a list of tap times and tap amplitudes of tapping.wav
. You must use load tap-detect.sal
, i.e., do not specify any path such as load P2/tap-detect.sal
. Refer to the HTML documentation for tap-detect
and inspect the tap-detect
code to get an idea of how you can generate a score based on this code.
Add your own functions to tapcomp.sal
to map the list of tap times and amplitudes into a Nyquist score named *tapscore*
that plays your instrument from instr.sal
(so of course, you should have the statement load "instr.sal"
to define your instrument). When you are done, loading tapcomp.sal
should immediately run your code and produce *tapscore*
, which should be a direct 1-to-1 translation of the taps to a score.
NOTE 1: Do NOT use score-gen
. Instead, you must use loop and list primitives to construct a score. You may use the linear-to-vel
function to convert tap strength to a velocity parameter.
NOTE 2: Note duration is up to you. It may be fixed, random, or related to the tap inter-onset times.
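One way such a translation can be structured is sketched below. This is a sketch, not a prescription: taps-to-score, the times and amps arguments, the fixed 0.2s duration, and the fixed pitch are all hypothetical placeholders, and intern is used here to build the keyword symbols that appear as pitch: and vel: in a score.

```sal
; Sketch: build a score with loop and list primitives (no score-gen).
; times and amps are hypothetical lists of tap times and amplitudes.
function taps-to-score(times, amps)
  begin
    with score = nil
    loop
      for tm in times
      for amp in amps
      ; each event is {time dur {simple-note pitch: ... vel: ...}}
      set score @= list(tm, 0.2,
                        list(quote(simple-note),
                             intern(":PITCH"), c3,
                             intern(":VEL"), linear-to-vel(amp)))
    end
    return reverse(score)
  end
```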
Create between 20 and 40 seconds of music using your code. You can create new scores based on your *tapscore* (but please do not modify *tapscore* itself), mix multiple sounds or “tracks,” and create sounds by other means (as long as at least one “track” is derived according to the recipe above); you can also manipulate your results in an audio editor (again, as long as the tap-driven instrument is clearly heard). Keep all the code for your piece in tapcomp.sal
, but do not compute your piece immediately when the code is loaded (at least not in the version you hand in). It may be convenient to use F4
and other buttons in the IDE to run different pieces of your code. E.g. you could add something like this to tapcomp.sal
:
function F4() play compute-my-tap-piece(*tapscore*)
Do NOT write any play
command or commands to compute your piece at the top-level. E.g. do NOT write something like
play compute-my-tap-piece(*tapscore*)
unless it is inside a function (like F4 above), and the function does not get called when the file is loaded.
Name your composition sound file p2comp.wav. (AIFF format is acceptable too.)
Create a simple ASCII text file p2comp_README.txt
or pdf file p2comp_README.pdf
with a sentence or two describing your goals or musical intentions in creating p2comp.wav
.
Submit files in a zip file to ATutor. The following files should be at the top level in your zip file (you may use .aiff sound files instead of .wav sound files):
instr.sal: A SAL program implementing your table-lookup instrument,
tapping.wav: The recorded taps used as input to tap-detect.sal
,
tapcomp.sal: A SAL program to convert tap times to a Nyquist score.
p2comp.wav: Your project 2 composition (at least 20 seconds).
p2comp_README.txt: Optional additional info on your composition.
Read “The Bell Scale”, “The Decibel Scale (dB),” and “Decibels for Comparing Amplitude Levels” in Loudness Concepts and Panning Laws, but to summarize some important points:
Remember that 6 dB represents a factor of 2 in amplitude!
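You can verify this in Nyquist itself with the built-in conversion function:

```sal
; linear-to-db converts an amplitude ratio to decibels:
; a ratio of 2 is about 6.02 dB
print linear-to-db(2.0)
```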
Here are some things you might want to check before your final submission. Some criteria will be checked automatically. Some will be the basis of peer grading:
- simple-note is as directed, accepts 2 keyword parameters, and has proper pitch, amplitude, time and duration control.
- There is a transformation from *ascore* to *bscore* that calls simple-note when instr.sal is loaded.
- tapping.wav contains at least 10 taps.
- *tapscore*, generated by loading tapcomp.sal, should be a valid Nyquist score. (You can use score-print to see it nicely formatted.)

Fade In, Fade Out, Cross-Fade
You should be familiar with the terms fade in, fade out, and cross-fade. As a reminder, here’s a schematic to illustrate these terms. Remember that, when incorporating sounds into compositions, any sound without a “natural” beginning and ending should be edited with a fade in and fade out to avoid clicks and pops at the beginning and ending. When joining sounds together, one often overlaps the fade out of one sound with the fade in of the next sound, creating a cross-fade.
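In Nyquist, the same idea can be sketched with pwl envelopes and the @ (time shift) operator. This is a sketch of ours, assuming s1 and s2 are both 2-second sounds and a 0.5s cross-fade; adjust the breakpoints to your material.

```sal
; Sketch: fade s1 out over its last 0.5s, fade s2 in over its first
; 0.5s, and overlap the two by starting s2 at time 1.5
function cross-fade(s1, s2)
  return sim(s1 * pwl(0.01, 1, 1.5, 1, 2),
             (s2 * pwl(0.5, 1, 1.99, 1, 2)) @ 1.5)
```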
Project Instructions
We recommend that you create a directory/folder on your computer for Project 1 and store all of your files and subdirectories/subfolders in this location. You will then zip the project folder when you are ready to submit. Do not name the folder in a way that will identify yourself.
Grading is based on the following points:
Peer-grading will be assigned after the due date and will be due Monday, February 15 by 11:59PM Eastern.
Note: Although this project does not count toward your final grade, you must do this setup project in order to prepare for all subsequent projects.
play pluck(c4)
Look carefully at the output window and you will see that Nyquist saves the computed sound. Find that sound on your computer, move it to a Project0 folder, and rename it to p0.wav.

Create a text file named p0.txt in your Project0 folder. You can put any decent text in it; just be sure not to identify yourself.

Put p0.wav and p0.txt in a zip file named p0.zip and submit this to [CORRECTION:] http://www.music.cs.cmu.edu/icm-s21. Submission instructions are here.

Grading: You will receive full autograde credit if p0.wav is correctly computed and handed in with p0.txt. If you fail to get Nyquist or Audacity running, report this on Piazza in a private post (but do not post anonymously or we won’t know who you are). A teaching assistant will reach out to you as soon as possible.
Peer Grading: After the grading deadline, you will be asked to grade 3 submissions from your peers. This should be very easy since there was little to do, but we want you to experience what peer grading will look like for future projects. If there are any problems, get help on Piazza so Project 1 will go more smoothly. Peer Grading Due Monday, February 8, 2021 by 11:59PM Eastern.