Due Oct 9, Peer Grading Due Oct 16
In FM Synthesis, the depth of modulation (D) and the frequency of modulation (M) together control the number of significant sideband pairs generated, according to the formula IM = D / M. Depth of modulation is measured in Hz: it is the amount by which the carrier frequency deviates. Since M is also in Hz, D / M is a unit-less number called the Index of Modulation, or IM. When D = 0, no sidebands are generated (see FM Synthesis (course notes) for more detail).
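As a quick worked example (numbers chosen only for illustration): if the modulator frequency is M = 100 Hz and it drives the carrier to deviate by D = 400 Hz, then IM = 400 / 100 = 4; a common rule of thumb in FM synthesis is that roughly IM + 1 significant sideband pairs appear around the carrier.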
Part 1: Create an FM Instrument
Create an FM instrument, a function named fminstr, in the file proj3.sal, that takes the following keyword parameters:
- pitch: the carrier frequency in steps (a number)
- im: a time-varying index of modulation (of type SOUND)
You may use additional parameters to control loudness (vel:), vibrato, C:M ratio, envelope parameters, etc. The stretch factor should allow you to control the duration of the instrument sound. The default behavior should be a tone with a fundamental frequency controlled by pitch, where a higher im produces more and stronger harmonics.
Be sure that your instrument uses an envelope to control overall amplitude. If you run play osc(g4) in Nyquist, you will hear an example of what your instrument should not sound like!
An example that plays the instrument is:
play fminstr(pitch: g4, im: const(0, 1)) ~ 2
In this example, the IM parameter is the constant zero with a duration of 1, so this is expected to play a boring sine tone at pitch G4.
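A minimal sketch of such an instrument is shown below. The vel: parameter, the env() settings, and the fixed 1:1 C:M ratio are illustrative assumptions, not requirements of the spec:

```sal
; sketch only: vel:, the env() values, and the 1:1 C:M ratio are our choices
define function fminstr(pitch: c4, im: const(0, 1), vel: 100)
  begin
    with hz = step-to-hz(pitch),      ; carrier frequency in Hz
         ; D = IM * M; here the modulator frequency M equals the carrier (1:1)
         modulator = im * hz * hzosc(hz)
    return fmosc(pitch, modulator) *
           env(0.05, 0.1, 0.3, 1.0, 0.8, 0.7) *  ; overall amplitude envelope
           (vel / 127.0)
  end
```

Because env and the im sound both stretch with the environment, a stretch factor applied to a call of fminstr scales the whole tone's duration.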
Create a function named part1 with no parameters. Running it should instantiate a single instance of your fminstr and return the sound, which should last about as long as the stretch factor. So the command play part1() ~ 3 should play for about 3 s and will demonstrate that you completed Part 1. Your part1 function can pass in and demonstrate any optional keyword parameters you have added.
Hint: Be sure to test part1 with different durations (stretch factors).
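For example, a part1 along these lines (the pwl contour is an arbitrary choice):

```sal
define function part1()
  ; ramp IM up to 4, hold, then back to 0; pwl stretches with the
  ; environment, so the tone tracks any stretch applied to part1()
  return fminstr(pitch: g4, im: pwl(0.2, 4, 0.8, 4, 1))
```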
Part 2: Time-Varying Index of Modulation
Demonstrate the instrument by using PWL to create an interesting Index of Modulation. In other words, replace const(0, 1) in the previous example with an interesting, time-varying IM. You can use the envelope editor in the Nyquist IDE to draw an envelope if you like, and you can apply scale factors and offsets to the PWL function to fine tune your sonic result.
Keep in mind that if you put in a long PWL envelope (or any other envelope), your fminstr() code will not magically deduce that you want all of the other components to stretch to the duration of the envelope.
Create a function named part2 with no parameters. Running it should instantiate a single instance of your fminstr with your PWL modulation and return the sound, which should be 3 to 5 s long and contain obvious changes in brightness due to the time-varying im. So the command play part2() will demonstrate that you completed Part 2.
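One way to satisfy this (the breakpoint values are invented; draw your own contour):

```sal
define function part2()
  ; a 4-second tone whose brightness rises, dips, then peaks
  return fminstr(pitch: c4,
                 im: pwl(0.1, 6, 0.4, 2, 0.7, 10, 1)) ~ 4
```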
Spectral Centroid Analysis
For a (hopefully) more interesting composition, we are going to analyze a real sound, extract the time-varying spectral centroid, and use that to control the Index of Modulation of one or more FM sounds.
Read the documentation on spectral centroids in the accompanying files (look in the downloaded directory), and try out the provided project3-demos.sal. To summarize, you call spectral-centroid() on an input sound (or filename), and it returns an envelope that tracks the spectral centroid of the input sound.
project3-demos.sal uses the reverb function, which was broken in Nyquist v3.11. Please update to v3.12 to get reverb (!), the project 3 demos, and, for Mac OS X 10.12 (Sierra) users, a possible fix for the Help:Manual and other documentation links. (Right- or Ctrl-click on command-completion words to see their manual entries.)
The idea here is that when the input sound gets brighter, the spectral centroid goes up. If we use the spectral centroid to control the Index of Modulation, an increasing spectral centroid should cause the Index of Modulation to increase, and therefore the FM sound should get brighter. Thus, there should be a connection between the spectral variation of the source sound analyzed with spectral-centroid() and the spectral variation of your synthesized sound.
Your technical task is to make this connection between input sound and output sound. This is a rare case where we are going to suggest assigning the spectral centroid (sound) to a global variable. If you do that, then any note can reference the spectral centroid. For example:
set *sc* = 0.003 * spectral-centroid(...)
play seq(fminstr(pitch: c4, im: *sc*), fminstr(pitch: c4, im: *sc*) ~ 2)
This plays two notes. The first runs nominally from 0 to 1 s, and it will use the first second of the spectral centroid sound to control its IM. The second note runs nominally from 1 to 3 s (the duration is 2 because of the stretch operator ~), and this note will use the spectral centroid from 1 to 3 s. It is important to note that the second note does not begin “reading” the *sc* variable from the beginning. This is consistent with the idea that, in Nyquist, sounds have an absolute start time.
Note 1: You could align the beginning of the spectral centroid with the beginning of each note by replacing *sc* with cue(*sc*). The cue behavior shifts its sound parameter to start at the time specified in the environment. However, we think it will be more interesting to let the IM evolve throughout your piece and let each note access the current state of the spectral centroid as the piece is being played.
Note 2: Why did we multiply spectral-centroid(…) by 0.003? The point is that the spectral centroid is measured in Hz and will probably range into the thousands, but reasonable values of IM are probably going to be in the single digits. Is 0.003 the right scale factor? No, it is just a rough guess, and you should adjust it.
Note 3: You can use a score here rather than SEQ.
Part 3: Composition
Your musical task is to create something interesting, with a duration of 30 to 60 seconds. We do not want to box you in to a specific procedure, but we are looking for a result that shows interesting spectral variation driven by the spectral centroid of a source sound. Some possibilities include:
1. Make a melody and use the spectral centroid to control modulation.
2. Use your voice to produce a sound for analysis by spectral centroid.
3. If the spectral centroid is not as smoothly varying as you want it to be, consider using a low-pass filter: e.g., lowpass8(*sc*, 10) will remove most frequencies in *sc* above 10 Hz, leaving you with a smoother control function.
4. Rather than a melody, use long, overlapping tones to create a thick texture.
5. Use a mix of centroid-modulated FM tones and other tones that are not modulated. Maybe the modulated tones are the foreground, and unmodulated tones form a sustained background. Maybe the modulated tones are a sustained “chorus” and an unmodulated melody or contrasting sound is in the foreground.
6. The FM depth of modulation can be the product of a local envelope (to give some characteristic shape to every tone) multiplied by the global spectral centroid (to provide some longer-term evolution and variety).
7. Add some reverb, stereo panning (see the pan function), or other effects.
8. Experiment with different source sounds. Rock music is likely to have a fairly constant high centroid. Random sounds like traffic noise will probably have low centroids punctuated by high moments. Piano music will likely have centroid peaks every time a note is struck. Choose sources and, if needed, modify the result with a smoothing lowpass filter (see 3 above) to get the most interesting results.
9. You may use an audio editor for extending/processing/mixing your piece, but we must be able to clearly hear your instrument, score, and spectral-centroid-based modulation.
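As a concrete starting point for the "local envelope times global centroid" idea above, here is a sketch; the filename input.wav, the 0.002 scale factor, the helper name fmnote, and the pwl shape are all placeholders to adjust:

```sal
; global centroid control, smoothed and scaled (values are rough guesses)
set *sc* = 0.002 * lowpass8(spectral-centroid("input.wav"), 10)

define function fmnote(pitch: c4)
  ; per-note IM = local attack/decay shape times the global centroid,
  ; so every tone has its own contour but follows the source's brightness
  return fminstr(pitch: pitch, im: pwl(0.1, 1, 0.8, 1, 1) * *sc*)
```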
Grading will be based on meeting the technical specifications of the assignment:
- the code should work, it should be readable, and it should have well-chosen parameters that indicate you understand what all the parameters mean.
- be sure to define part1 and part2 functions in your proj3.sal file.
- loading your proj3.sal code should not immediately compute spectral centroids or FM tones.
- submissions are anonymous.
- Regurgitating minor variations of the sample code does not indicate understanding and may result in large grading penalties.
- It’s best to envision what you want to do, write it down in comments, then implement your design. You can refer to the sample code, but should avoid using that as a template for your project — make this your own creation.
- Your composition should be 30 to 60 seconds long.
- the FM index of modulation control should be clearly audible and clearly related to the sound that you analyzed.
- Your piece should show musical results, effective FM instrument design, novel control and composition strategies, mixing quality, and general effort. Bonus points may also be awarded for exceptional work.
Turning In Your Work
Note that the following must be strictly adhered to for the convenience of machine and human graders (including you as a peer grader):
- Code for parts 1 and 2, named proj3.sal
- Code for part 3, named proj3comp.sal. Note: code should be clearly commented in case we want to run it.
- Audio for part 2, named
- Audio for part 3, named
- Input sound analyzed for part 3, named
- A short statement of your intention of the composition, named