Sampling and Spectral Processing
Due Monday, March 27, 2023 by 11:59PM Eastern.
Peer Grading due on Monday, April 3 by 11:59PM Eastern.
Preliminaries
This project includes an exercise on sampling, an exercise on spectral processing, and a composition using spectral processing.
Example code and sounds are included in the Project 5 zip file download.
Sampling Exercise
Find a sound to loop from freesound.org or record a sound, e.g., your singing voice, with Audacity or your favorite audio editor. You should look for a sustained musical tone (anything with pitch and amplitude changes will be harder to loop). Save it as p5src.wav. (Note: You may use AIFF files with extension .aiff any place we ask for .wav files.) Your goal is to find loop start and end points so that you can play a sustained tone using Nyquist’s sampler() function. Create a file named p5sampler.sal for your work.
After making sounds with the sampler function, multiply the result of calling sampler by a suitable envelope, and encapsulate your sampler code in a function named mysampler with at least the keyword parameter pitch: and no positional parameters. Remember that the envelope should not fade in, because that would impose a new onset on your sample, which should already have an onset built in. To get an envelope that starts immediately, use a breakpoint at time 0, e.g., pwl(0, 1, ..., 1).
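For example, an envelope with a breakpoint at time 0 could be written as follows (a sketch; the 0.9 release point is just a placeholder to adjust for your sound):

pwl(0, 1, 0.9, 1, 1)

This holds level 1 from time 0 to 0.9 and then ramps to 0 at time 1, so when the whole expression is stretched by 2 (see below) it becomes a 2-second envelope with a 0.2-second release and no fade-in.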
Make a sound file with two 2-second-long tones (your loop must be much less than 2 seconds long). Your final output, named p5demo.wav, should be generated in Nyquist using the expression:
seq(mysampler(pitch: c4), mysampler(pitch: f4)) ~ 2
Turn in your source code as p5sampler.sal. This file should only define mysampler and any other functions you wish; do not automatically run code or write files when the code is loaded.
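One way to generate p5demo.wav is to evaluate something like the following interactively at the SAL prompt after loading your file (a sketch, not required code; 1000000000 is just a generous value for s-save's maxlen sample-count argument):

exec s-save(seq(mysampler(pitch: c4), mysampler(pitch: f4)) ~ 2, 1000000000, "p5demo.wav")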
Hints: Recall that sampler() takes three parameters: pitch, modulation, and sample. pitch works as in osc(), and modulation works as in fmosc(); please use const(0, 1), which means “zero frequency modulation for 1 second.” (You can then stretch everything by 2.) The sample is a list, (sound pitch-step loop-start), where sound is your sample. If you read sound from a file with s-read(), you can use the dur: keyword to control when the sound ends, which is the end time of the loop. pitch-step is the nominal pitch of the sample, e.g., if you simply play the sample as a file and it has the pitch F4, then use F4. loop-start is the time from the beginning of the file (in seconds) where the loop begins.

You will have to listen carefully with headphones and possibly study the waveform visually in Audacity to find good loop points. Your sound should not have an obvious buzz or clicks due to the loop, nor should it obviously sound like a loop; it should form a sustained musical note. While in principle you can loop a single cycle of the waveform, in practice you will find it very difficult to find single-cycle loop points that sound good, so it is acceptable to use a much longer loop, perhaps in the range of 0.2 to 0.5s. It may be tempting to use a loop longer than 0.5s, but for this assignment we must hear at least a few loops in the 2-second notes you create, so the loop start time must be less than 0.5s, and the sample duration minus the loop start time (i.e., the loop duration) must also be less than 0.5s.
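Putting these hints together, a sketch of mysampler might look like the following. The loop end (dur: 0.6), loop start (0.25), nominal pitch (f4), and envelope breakpoints are all placeholder assumptions to replace with values that suit your own p5src.wav:

; Sketch only: loop points, nominal pitch, and envelope times are placeholders.
define function mysampler(pitch: c4)
  begin
    with my-snd = s-read("p5src.wav", dur: 0.6)  ; read sample; loop ends 0.6s into the file
    ; sample list is (sound pitch-step loop-start): nominal pitch f4, loop starts at 0.25s
    return sampler(pitch, const(0, 1), list(my-snd, f4, 0.25)) * pwl(0, 1, 0.9, 1, 1)
  end

The pwl factor keeps the envelope at 1 from the very start (no fade-in) and releases at the end; both the const(0, 1) modulation and the envelope have a nominal duration of 1, so stretching by 2 as in the expression above yields 2-second notes.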
Warmup Spectral Processing Exercise: Cross-Synthesis
In this part, you will be experimenting with spectral processing. Spectral processing means manipulating data in the spectral domain, where sound is represented as a sequence of overlapping grains, and each grain is represented as a weighted sum of sines and cosines of different frequency components (using the short-time discrete Fourier transform and inverse transform for each “grain” or analysis window).
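In other words, each length-N analysis window x[0], ..., x[N-1] is mapped by the short-time DFT to complex coefficients

X[k] = sum over n from 0 to N-1 of x[n] * (cos(2*pi*k*n/N) - i*sin(2*pi*k*n/N)), for k = 0, ..., N-1,

and spectral processing modifies these X[k] before the inverse transform and the overlapping of grains reconstruct a time-domain signal. (This is just the standard DFT written out for reference; the provided code performs the transforms for you.)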
We have provided you with the spectral-process.sal and spectral-process.lsp files to help you get started. See spectral-process.sal for documentation, and run its examples of spectral processing. Example 4 in spectral-process.sal demonstrates spectral cross-synthesis. The idea is to multiply the amplitude spectrum of one sound by the complex spectrum of the other; when one sound is a voice, vocal qualities are imparted to the other sound. Your task is to find a voice sample (this could be you) and a non-voice sample and combine them with cross-synthesis.

You may use the example 4 code as a starting point, but you should experiment with parameters to get the best effect. In particular, the len parameter controls the FFT size (it should be a power of 2); modify at least one parameter to achieve an interesting effect. Larger values give more frequency resolution and sonic detail, but smaller values, by failing to resolve individual partials, sometimes work better at representing the overall spectral shape of vowel sounds. It also matters which signal is used for phase and which is reduced to only amplitude information: speech is more intelligible with phase information, but you might prefer less intelligibility. You will find several sound examples on which to impose speech in example 4 (look in there for commented-out options), but you should find your own sound to modulate. Generally, noisy signals, rich chords, or low, bright tones work best; a simple tone does not have enough frequencies to modify with a complex vocal spectrum. Also, signals that do not change rapidly will be less confusing, e.g., a sustained orchestra chord is better than a piano passage with lots of notes. Turn in your code as p5cross.sal, your two input sounds as p5cross1.wav and p5cross2.wav, and your cross-synthesis output as p5cross.wav (each 3-6 seconds; see the submission list below).
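Concretely, if A[k] and B[k] are the complex spectra of the two input sounds for a given analysis frame, example 4's cross-synthesis computes roughly Y[k] = |A[k]| * B[k]: the magnitude of one sound scales the complex (magnitude-and-phase) coefficients of the other, and the processed frames are then inverse-transformed back to audio. (This is only a sketch of the idea; see the example 4 code for the exact computation and any scaling it applies.)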
Composition with Spectral Processing
Create between 30 and 60 seconds of music using spectral processing. You can use any techniques you wish, and you may use an audio editor for finishing touches, but your piece should clearly feature spectral processing. You may use cross-synthesis from the previous section, but you may also use other techniques, including spectral inversion, any of the examples in spectral-process.sal, or your own spectral manipulations. While you may use example code, you should strive to find unique sounds and unique parameter settings to avoid sounding like you merely added sounds to existing code and turned the crank. For example, you might combine the time-stretching in example 2 with other examples, and you can experiment with FFT sizes and other parameters rather than simply reusing the parameters in the examples.

Hint: While experimenting, process small bits of sound, e.g., 5 seconds (see the sketch below), until you find some good techniques and parameters; doing signal processing in SAL outside of unit generators (as spectral processing does) is very slow. With longer sounds, remember that after a sound is computed, the Replay button can be used to play the saved sound from a file, so even if the computation is not real-time, you can still go back and replay it without stops and starts.

Turn in your code as p5comp.sal and your final output composition as p5comp.wav. In p5comp.txt, describe the spectral processing in your piece and your motivation or inspiration behind your composition. Finally, include your source sounds in the p5comp/ directory.
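As mentioned in the hint above, one way to experiment on short excerpts is to limit how much of a source file you read using s-read's dur: keyword (a sketch; the file name is just a placeholder):

set test-snd = s-read("mysource.wav", dur: 5.0)  ; read only the first 5 seconds

Process test-snd until you are happy with the technique and parameters, then substitute the full sound.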
Submission
Please hand in a zip file containing the following files (in the top level, not inside an extra directory):
- sample source: p5src.wav
- sampler demo, about 4 seconds long: p5demo.wav
- sampler code: p5sampler.sal
- cross-synthesis exercise code: p5cross.sal
- cross-synthesis source sound 1: p5cross1.wav (input, 3-6s)
- cross-synthesis source sound 2: p5cross2.wav (input, 3-6s)
- cross-synthesis output sound: p5cross.wav (output, 3-6s)
- 30-60s composition with spectral processing: p5comp.wav
- composition code: p5comp.sal
- composition descriptive text: p5comp.txt
- source sound files used in your composition: in the p5comp/ directory