Tom Cortina – 15-322/622 Spring 2021 – Intro to Computer Music
https://courses.ideate.cmu.edu/15-322/s2021

Project 7
https://courses.ideate.cmu.edu/15-322/s2021/project-7/ (posted Tue, 06 Apr 2021)
Due Monday, May 3 by 11:59 PM EDT
NO LATE DAYS ALLOWED – HANDIN VIA CANVAS

Important dates

  • Monday, May 3, 11:59 PM Eastern: Deadline for music submission (source files, wave file, program notes, and answers to the questions below). HAND IN VIA CANVAS.
  • Friday, May 7, times TBA: Concerts, online via Zoom.

Description

Project 7 will be an ambitious computer music composition that you create by combining techniques we have explored during the semester. Your goal is to create a composition that is technically ambitious (combining at least four major techniques in Nyquist), and is a compelling listening experience (visit Jesse Stiles’ Spotify playlist for inspiration). As always, you may optionally use DAWs such as FL Studio, Ableton, Logic, etc. to mix, edit, master, etc. — but we expect you to focus your work on Nyquist programming techniques covered in the course. Your piece may, of course, be created using Nyquist alone.

Your composition will be presented at one of the May 7 concerts via Zoom. Unlike the Media Lab which has support for up to 8 channels of surround-sound audio and video projection, Zoom only supports stereo. So please limit your piece to 2-channel audio for this semester.

Assessment will be based on:

  1. Completeness – Audio requirements are fulfilled; code, slides, and text are included according to the specifications below.
  2. Code – Code is included and compiles.
  3. Composition quality – Your piece should demonstrate effort exploring musical and technical possibilities and should present an interesting listening experience.
  4. Mixing – The various sounds created for your piece should be mixed with intentionality. You should control the amplitude of all sounds over time so that they are balanced and the listeners’ attention is focused on the aspects of the music you wish to highlight. Your piece should contain no common mixing errors such as clipping or clicks. If you wish to include these kinds of sounds intentionally for artistic effect, please say so in your program notes and explain your reasoning.
  5. Significant use of Nyquist – Your piece must include at least four major techniques explored in the course, for example: granular synthesis, FM synthesis, spectral processing, pattern generation, physical models, sampling/looping, etc. In your report, you will document where each technique is used and how it is used.

Audio Requirements

Submit your audio file as andrewid_p7_comp.wav (for example, acarnegi_p7_comp.wav). It must adhere to these requirements:

  • Stereo file
  • WAV format
  • 16 bits, 44.1 kHz
  • No clipping
  • Must be normalized
  • Must not contain long periods of silence
  • 60-120 seconds long

If you compose a longer piece and feel that cutting it down to 120 seconds significantly reduces its quality, you may also submit an extended version, which will be taken into consideration in grading.

Slide

In andrewid_p7_slide.pdf, submit a slide, in the form of a one-page landscape PDF, to be projected while your piece is played. You should include an image in the slide that relates to your music and include the title of your piece, your name, and any additional text you wish. The image can come from anywhere, and does not need to be original work. Your complete program notes will be printed in the program, so you do not need a lot of text on the slide. Hopefully, the slide will add an engaging visual aspect to the concert, so please strive for an aesthetic, artistic presentation.

If you wish to create “live visuals” to accompany your piece rather than a slide, that would be a great addition. For example, you can have a slide sequence that auto-advances while we play your composition. If you do this, please make a note for us so we can test your slides before the concert.

Questions

Include the answers to the following questions in andrewid_p7_answers.txt:

  1. What is your motivation in this work? Give a short summary.
  2. What special efforts did you make in composing this piece?
  3. What mixing techniques did you use in this work? Try to be concise.
  4. What Nyquist programming techniques did you use in this work? For each technique, briefly describe how it works in your program code.
  5. Do you have any additional comments for the graders?
  6. Do you give permission for your piece to be made available online?

Program notes

You should also submit the following in andrewid_p7_notes.txt:

  1. On the first line: the title of your composition.
  2. On the second line: the name by which you would like to appear in the program.
  3. After that: your program notes. You may use LaTeX syntax if you want to include any formatting. Keep your notes to a maximum of 200 words. 

Submission

Please create a zip file named andrewid_p7.zip containing the following files (at the top level, not inside an extra directory except for the composition source files):

  1. Your composition sound file: andrewid_p7_comp.wav
  2. Your composition source files: in the folder andrewid_p7_source
  3. Your slide: andrewid_p7_slide.pdf
  4. Answers to the above questions: andrewid_p7_answers.txt
  5. Program notes: andrewid_p7_notes.txt

SUBMISSION VIA CANVAS, NOT ATUTOR 
LOG IN TO CANVAS AND YOU SHOULD SEE A PROJECT 7 ASSIGNMENT

 

Project 6
https://courses.ideate.cmu.edu/15-322/s2021/project-6/ (posted Tue, 06 Apr 2021)
Due TUESDAY, April 20 by 11:59PM Eastern NOTE DAY CHANGE (CARNIVAL EXTENSION)
Peer Grading Due By Monday, April 26 by 11:59PM Eastern (NO CHANGE)

Physical Models and Patterns

In this project, you will explore physical models and pattern generators. Your main composition task is to generate music using physical model functions built into Nyquist. One problem you will encounter immediately is that just as a violin does not play itself, physical models require careful control to create musical sounds. The warm-up exercises are designed to explore physical models and their control.

1. Warm-Up Exercises

Using clarinet-freq (accept no substitutes!), create a sequence of clarinet tones separated by 1s of silence as follows:

  1. Executing F2() should play a 1-second tone at pitch C4 with no vibrato and a breath (roughly the amplitude control) envelope that sounds natural to you.
  2. Executing F3() should play a 2-second tone at pitch D4 with breath envelope vibrato. (See Note A on vibrato at the end of this assignment.) Design a vibrato that you think sounds natural. You will find the clarinet model has a fairly sudden threshold below which there is no oscillation and above which the tone is fairly strong. If breath vibrato causes the breath envelope to cross this threshold, the clarinet tone may pulse on and off. Control vibrato so that this does not happen. (We hope it’s obvious by this time in the course that if you need the breath envelope to oscillate in a vibrato-like manner, you should add to your breath envelope a low-frequency sine, e.g. with lfo, scaled appropriately, and perhaps multiplied by another envelope if you do not want the full lfo amplitude from the beginning. Wikipedia has a fine discussion and sound examples if you need to learn about vibrato, but note that here we are “vibrating” the breath and not using frequency vibrato.)
  3. Executing F4() should play a 5-second tone at pitch E4 that has a slow crescendo. The sound should start with a noisy breathy sounding attack that barely has any pitch. Within 1 second, the pitch should be clear. The crescendo should obviously continue until at least 4 seconds. You will find the clarinet model is very sensitive to the breath envelope and there is a very narrow range of envelope values over which a crescendo takes place. You will find that only a small amount of crescendo is possible with this model. You will need to determine good envelope values experimentally. (See Note B on RMS at the end of this assignment for more tools.)
  4. Executing F5() should play the 8-second sequence (F4, G4, A4, rest, F4, G4, A4, rest) where elements have IOIs of 1 second. The first 3 “notes” should be made with one call to clarinet-freq(), using the frequency control to change pitch. (Hint: to get from F4 to G4, the frequency envelope should step up by step-to-hz(G4) - step-to-hz(F4). Hint 2: For the best sound, the frequency control transitions should take around 30ms rather than jumping instantaneously from one pitch to the next.) The second 3 notes should be separate calls to clarinet-freq(). Try to get a continuous sound similar to the first 3 notes by slightly overlapping F4 to G4 to A4. (Hint: Overlapping is easy if you use a score or write expressions where you can explicitly control start times and durations as opposed to using the seq() construct. If you want to use seq(), you can still get overlap if you set the logical stop time of sounds to be 1s but make the actual duration longer. See set-logical-stop in the Nyquist Reference Manual.)
  5. Executing F6() should run one instance of score-gen with pattern generators to create a score named p6score. The score should have 4 contiguous sections. In each section, 20 pitches are generated by randomly selecting a pitch (score event attribute name pitch:) from the scale C4, D4, E4, F4, G4, A4, B4. These 20 pitches are transposed by -12, 0, or 12, the amount being chosen at random. In other words, in each section there will be 20 pitches from the 3rd, 4th or 5th octave. Each note should have a duration and ioi of 0.2 seconds, for a total duration of 0.2 * 20 * 4 = 16 seconds. Finally, the piece should never generate 2 consecutive sections in the same octave. (The :max 1 attribute can be used for each item in make-random to prevent a repeated selection. See Note C at the end of this assignment for an example.) F6() should not play p6score: You can use exec score-play(p6score) to play your score, and if you like, define F7() to do this with one mouse click in the IDE. The sound should be incorporated into p6warmup.wav as described in the next paragraph.
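
As an illustration of breath vibrato for an F3-style tone, here is a minimal sketch; all values (breath level 0.8, vibrato rate 5 Hz, depth 0.05) are guesses you would tune by ear, and the vibrato is faded in with its own envelope so the breath never crosses the oscillation threshold:

define function F3()
  play clarinet-freq(d4,
                     pwl(0.1, 0.8, 1.8, 0.8, 2) +                        ; breath: fast attack, 2s total
                       0.05 * lfo(5, 2) * pwl(0.5, 0, 1, 1, 1.8, 1, 2),  ; delayed 5 Hz wobble
                     const(0, 2))                                        ; no frequency deviation

Here the vibrato term is simply an lfo scaled well below the breath level, multiplied by an envelope so the full lfo amplitude is not present from the beginning.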

Put your code (for all of these tones and sequences) in p6warmup.sal. Concatenate all the sounds, with some silence separating them, into p6warmup.wav.
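
The frequency-envelope approach for the first three notes of F5 might be sketched like this (a hedged sketch: the breath values are placeholders, and pwlv is used so the frequency deviation holds its final value instead of falling back to zero):

define function first-three-notes()
  begin
    with g-up = step-to-hz(g4) - step-to-hz(f4),
         a-up = step-to-hz(a4) - step-to-hz(f4)
    play clarinet-freq(f4,
                       pwl(0.05, 0.9, 2.9, 0.9, 3),  ; breath for the 3-second group
                       pwlv(0, 1, 0, 1.03, g-up, 2, g-up, 2.03, a-up, 3, a-up))
  end

Each pitch change is a 30ms ramp (e.g., from time 1 to 1.03) rather than an instantaneous jump, as suggested in the hint.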

2. Composition

Use score-gen with pattern generators to algorithmically compose a piece for physical models. You can use any of the physical models in Nyquist: clarinet, sax, flute, bowed, mandolin, wg-uniform-bar, wg-tuned-bar, wg-glass-harm, wg-tibetan-bowl, modal-bar, and sitar. (Acknowledgment: these unit generators are ported from Synthesis Tool Kit by Perry Cook and Gary Scavone.)

Your piece should have multiple pattern generators, including nested pattern generators to get variation at more than one time scale. For example, if you have a cycle of pitches, you could add an offset to the cycle on each repetition so that you hear the melodic cycle transposed each period. Alternatively, you might generate small random pitch intervals at the small time scale and have the average pitch slowly drift up or down at a larger time scale. You can use make-window and make-repeat to further manipulate streams of data.
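
The transposed-cycle idea could be sketched like this (a hedged sketch using the make-copier idiom shown in the Pattern Examples section below; the pitch names and score parameters are arbitrary):

set octave-pat = make-copier(make-cycle({0 12 -12}, for: 1), repeat: 4)
set pitch-pat = make-sum(make-cycle({c4 e4 g4 a4}), octave-pat)
set p6demo = score-gen(score-len: 24, pitch: next(pitch-pat), ioi: 0.2, dur: 0.2)

Each pass through the four-note cycle is transposed by the next octave offset, giving variation at two time scales.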

You should also have at least two “voices” as opposed to a single melodic line. Consider panning, reverberation, other effects, and careful mixing to make your composition musical and interesting.

Write a statement about the intention of your composition in p6comp.txt. In p6comp.txt, also describe how you used pattern generators to achieve your results and how you used nested patterns. (Your p6comp.sal should also be commented to make your algorithms understandable.)

Duration should be between 45 and 60 seconds. Hand in the following files:

p6comp.sal – the code.
p6comp.wav – the sound file.
p6comp.txt – a short statement of your intention in the composition.

3. Pattern Examples

To get you started on some advanced pattern generation, here are some interesting examples to study:

  • This example repeats each value in the cycle 7 times, expanding the cycle to a longer time scale. Without the for: 1 parameter, the whole cycle (one full period) would repeat 7 times (but cycle repeats the data anyway, so what would be the point?). With for: 1, the generated period length is 1, so a period is completed after each number is output, and each number is therefore repeated 7 times.

make-copier(make-cycle({24 36 48 60}, for: 1), repeat: 7)

  • This example adds two streams. This is one way to get behavior at different time scales:

make-sum(long-term-pattern-generator, short-term-pattern-generator)

  • This example makes a series starting at 5 and increasing by 2 each time. The make-line({5 2}) pattern returns 5, 2, 2, 2, …, and make-accumulate sums the series to get 5, 7, 9, 11, …

make-accumulate(make-line({5 2}))

When all else fails and you really want a specific computation, you can use make-eval() to invoke a function to compute a stream of numbers. In this example, a custom function, myfunc, does some computation and returns a value to incorporate into a pattern stream. mypat uses make-eval to call myfunc.

define myfunc() return real-random()
set mypat = make-eval({myfunc})

Of course, myfunc() could also access and modify data from another pattern. The simple way to do this is using a global variable since make-eval does not have any way to accept parameters and SAL cannot construct closures.
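
A hedged sketch of that idea, where base-pat is a made-up global pattern that myfunc reads and transforms:

set base-pat = make-cycle({1 2 3})
define function myfunc() return 2 * next(base-pat)
set mypat = make-eval({myfunc})
; successive calls to next(mypat) yield 2, 4, 6, 2, 4, 6, ...

Because make-eval re-evaluates {myfunc} for each output, every call advances base-pat by one step.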

Hand-In Summary/Check-list

(.wav and .aif files are acceptable)

  • p6warmup.sal — code for Warm-Up Exercises, defines F2(), F3(), F4(), F5(), F6()
  • p6warmup.wav — 5 sounds from Warm-Up Exercises with a bit of silence separating them
  • p6comp.sal — code used for your composition
  • p6comp.wav — composition, between 45 and 60 seconds duration
  • p6comp.txt — a short statement of your intention for the composition

Note A: Clarinets and Vibrato

Some might say clarinets should not use vibrato. Eddie Daniels has a nice discussion and demonstration of clarinet vibrato, including breath vibrato, created mainly by variations in air pressure, and frequency or lip vibrato, which varies the fundamental frequency while keeping the amplitude and breath pressure more or less constant. See Vibrato On The Clarinet?! with Eddie Daniels.

Note B: Using RMS

The RMS function in Nyquist computes the “root mean square” or average power in a signal. Technically, RMS computes the answer to the following question: If instead of an irregular, oscillating signal, I wanted to substitute a single, constant amplitude value and obtain the same overall power, what value would I use? RMS is a good way to estimate an amplitude envelope from a signal.

RMS takes three parameters: (1) the signal to analyze, (2) the analysis rate, and (3) the window size. By default, the analysis rate is 100Hz, meaning that RMS computes an amplitude value for every 10ms (1/100 s) of the signal, and the window size defaults to 10ms, meaning that each amplitude value corresponds to the average power in that 10ms region of the input signal.

In “warmup” exercise 3 (function F4), you are asked to make a crescendo. You can plot the envelope of the computed signal using something like plot rms(my-computed-signal). The clarinet-freq model has a narrow range of amplitude variation (unlike a real clarinet), so expect the RMS envelope to rise only slightly over the course of the tone.

Note C: Random selection without repetition

A very musical variation of random selection is random selection where you never repeat anything from a list of selections. With make-random, you can give weights, control repetitions and even force repetitions. See the manual for details. Here is an example of selecting A, B, or C where there are no immediate repetitions of A or B, but C is allowed to repeat:
set abc-pat = make-random({{A :max 1} {B :max 1} C})
loop repeat 20 exec prin1(next(abc-pat)) end
OUTPUT: BACABACBABABCCACACBC

Notice in the output that AA and BB never occur.

Project 5
https://courses.ideate.cmu.edu/15-322/s2021/project-5/ (posted Tue, 23 Mar 2021)
Due TUESDAY, April 6 by 11:59PM Eastern – SPECIAL DATE
Peer Grading originally due Monday, April 12 by 11:59PM Eastern (EXTENDED TO WED, APRIL 14 by 11:59PM Eastern)

Sampling and Spectral Processing

Preliminaries

This project includes an exercise on sampling, an exercise on spectral processing, and a composition using spectral processing.

Example code and sounds are included in this download zip file for Project 5.

Sampling Exercise

Find a sound to loop from freesound.org or record a sound, e.g. your singing voice with Audacity or your favorite audio editor. You should look for a sustained musical tone (anything with pitch and amplitude changes will be harder to loop). Save it as p5src.wav. (Note: You may use AIFF files with extension .aiff any place we ask for .wav files.) Your goal is to find loop start and end points so that you can play a sustained tone using Nyquist’s sampler() function. Create a file named p5sampler.sal for your work.

After making sounds with the sampler function, multiply the result of calling sampler by a suitable envelope, and encapsulate your sampler code in a function named mysampler with at least the keyword parameter pitch: and no positional parameters. Remember that the envelope should not fade in because that would impose a new onset to your sample, which should already have an onset built-in. To get an envelope that starts immediately, use a breakpoint at time 0, e.g., pwl(0, 1, ..., 1).

Make a sound file with two 2-second-long tones (your loop must be much less than 2 seconds long). Your final output, named p5demo.wav, should be generated in Nyquist, using the expression:

seq(mysampler(pitch: c4), mysampler(pitch: f4)) ~ 2

Turn in your source code as p5sampler.sal. This file should only define mysampler and any other functions you wish. Do not automatically run code or write files when the code is loaded.

Hints: Recall that sampler() takes three parameters: pitch, modulation, and sample. pitch works as in osc(). modulation works as in fmosc(); please use const(0, 1), which means “zero frequency modulation for 1 second.” (You can then stretch everything by 2.) The sample is a list, (sound pitch-step loop-start), where sound is your sample. If you read the sound from a file with s-read(), you can use the dur: keyword to control when the sound ends, which is the end time of the loop. pitch-step is the nominal pitch of the sample, e.g., if you simply play the sample as a file and it has the pitch F4, then use F4. loop-start is the time from the beginning of the file (in seconds) where the loop begins.

You will have to listen carefully with headphones and possibly study the waveform visually with Audacity to find good loop points. Your sound should not have an obvious buzz or clicks due to the loop, nor should it be obviously a loop – it should form a sustained musical note. While in principle you can loop a single cycle of the waveform, in practice you will find it very difficult to find single-cycle loop points that sound good, so it is acceptable to use a much longer loop, perhaps in the range of 0.2 to 0.5s. It may be tempting to use a loop longer than 0.5s, but for this assignment we must hear at least a few loop repetitions in the 2s notes you create, so the loop start time must be less than 0.5s, and the sample duration minus the loop start time (the loop duration) must also be less than 0.5s.
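
Pulling the hints together, mysampler might look roughly like the sketch below; the filename, nominal pitch F4, loop start of 0.3s, and loop end of 0.7s are all assumptions you would replace with values found by listening:

define function mysampler(pitch: c4)
  begin
    with samp = s-read("./p5src.wav", dur: 0.7)  ; dur: sets the loop end time
    return sampler(pitch, const(0, 1), list(samp, f4, 0.3)) *
           pwl(0, 1, 0.9, 1, 1)  ; starts at full level (no fade-in), short release
  end

Note that the envelope's first breakpoint is at time 0, so the sample's own onset is preserved.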

Warmup Spectral Processing Exercise: Cross-Synthesis

In this part, you will be experimenting with spectral processing. Spectral processing means manipulating data in the spectral domain, where sound is represented as a sequence of overlapping grains, and each grain is represented as a weighted sum of sines and cosines of different frequency components (using the short-time discrete Fourier transform and inverse transform for each “grain” or analysis window).

We have provided you with the spectral-process.sal and spectral-process.lsp files to help you get started. See those files for documentation, and run the examples of spectral processing. Example 4 in spectral-process.sal gives an example of spectral cross-synthesis. The idea here is to multiply the amplitude spectrum of one sound by the complex spectrum of the other. When one sound is a voice, vocal qualities are imparted to the other sound. Your task is to find a voice sample (this could be you) and a non-voice sample and combine them with cross-synthesis.

You may use the example 4 code as a starting point, but you should experiment with parameters to get the best effect. In particular, the len parameter controls the FFT size (it should be a power of 2); modify at least one parameter to achieve an interesting effect. Larger values give more frequency resolution and sonic detail, but smaller values, by failing to resolve individual partials, sometimes work better to represent the overall spectral shape of vowel sounds. It also matters which signal is used for phase and which is reduced to amplitude information only: speech is more intelligible with phase information, but you might prefer less intelligibility.

You will find several sound examples on which to impose speech in example 4 (look in there for commented options), but you should find your own sound to modulate. Generally, noisy signals, rich chords, or low, bright tones are best – a simple tone doesn’t have enough frequencies to modify with a complex vocal spectrum. Also, signals that do not change rapidly will be less confusing, e.g., a sustained orchestra chord is better than a piano passage with lots of notes. Turn in your code as p5cross.sal, your two input sounds as p5cross1.wav and p5cross2.wav, and your output as p5cross.wav.

Composition with Spectral Processing

Create between 30 and 60 seconds of music using spectral processing. You can use any techniques you wish, and you may use an audio editor for finishing touches, but your piece should clearly feature spectral processing. You may use cross-synthesis from the previous section, but you may also use other techniques, including spectral inversion, any of the examples in spectral-process.sal, or your own spectral manipulations. While you may use example code, you should strive to find unique sounds and unique parameter settings to avoid sounding like you merely added sounds to existing code and turned the crank. For example, you might combine the time-stretching in example 2 with other examples, and you can experiment with FFT sizes and other parameters rather than simply reusing the parameters in the examples.

Hint: While experimenting, process small bits of sound, e.g., 5 seconds, until you find some good techniques and parameters. Doing signal processing in SAL outside of unit generators (e.g., spectral processing) is very slow. With longer sounds, remember that after the sound is computed, the Replay button can be used to play the saved sound from a file, so even if the computation is not real-time, you can still go back and replay it without stops and starts.

Turn in your code in p5comp.sal and your final output composition as p5comp.wav. In p5comp.txt, describe the spectral processing in your piece and your motivation or inspiration behind your composition. Finally, include your source sounds in p5comp/.

Submission

Please hand in a zip file containing the following files (in the top level, not inside an extra directory):

  1. sample source p5src.wav
  2. sampler demo, about 4 seconds long: p5demo.wav
  3. sampler code: p5sampler.sal
  4. cross-synthesis exercise code: p5cross.sal
  5. cross-synthesis source sound 1: p5cross1.wav (input, 3-6s)
  6. cross-synthesis source sound 2: p5cross2.wav (input, 3-6s)
  7. cross-synthesis output sound: p5cross.wav (output, 3-6s)
  8. 30-60s composition with spectral processing: p5comp.wav
  9. composition code: p5comp.sal
  10. composition descriptive text: p5comp.txt
  11. source sound files used in your composition: in directory p5comp/
Project 4
https://courses.ideate.cmu.edu/15-322/s2021/project-4/ (posted Tue, 09 Mar 2021)
Granular Synthesis

Due Monday, March 22 by 11:59PM Eastern
Peer Grading Due Monday March 29 by 11:59PM Eastern

IMPORTANT HANDIN NOTE: We are making a small change to project 4: we are asking students to include a .wav for each of part1(), part2(), part3(), and part4() in their submission. This is to make the peer grading process easier for everyone (and to avoid safety issues in running other students’ code directly). If you have already submitted, please resubmit to include the .wav files for part1() through part4().

Granular synthesis is an extremely versatile technique to create sounds closely associated with the world of electronic and computer music.

In this project, you will experiment with some of the possibilities by creating 4 directed examples and a short but open-ended composition.

We will introduce two techniques: Grains from sound files, and synthesized grains.

Grains from Sound Files

  1. The file p4.zip contains everything you need to get started. You might begin by loading the proj4base.sal file, which should process the included sample.wav. See the .sal code for documentation on how it works. As is, proj4base.sal uses file-grain to produce individual grains and make-granulate-score to make a Nyquist score consisting of many grains.
  2. Copy proj4base.sal to proj4.sal, which you will hand in.
  3. Find some sounds of your own choosing to granulate. Often, non-music sources are the most interesting. Sometimes, music works well if it is highly stretched (see below). Use a mono (1-channel) sound file as input. If you have a stereo file, convert it to mono before you continue.
  4. Try it out: Modify proj4.sal to granulate your sound. Note how the code references sample.wav as "./sample.wav". The "./" means “in the current directory”; otherwise, Nyquist will look in the default sound file directory, so be sure to use the "./" prefix to reference your sound file, and put your sound file in the same directory as proj4.sal.
  5. Edit your sound to have a length of at most 10s (that’s all you need, and it will conserve our server disk space).

Part 1 – Simple Granular Stretch

  1. In proj4.sal define the function part1() that will produce a granular synthesis sound (taking grains from your chosen sound file) with the following parameters:
    • Grain duration = 100ms
    • Grain IOI = 50ms
    • Total resulting sound duration = (approximately) 3s
    • Stretch factor = 3 (thus the grains are taken from a roughly 1s interval of the sound file, which is “stretched” by a factor of 3 to produce about 3s of sound).
    • Do not change the playback speed of grains and include all the grains in the sound (density is 1.0).
    • The use of randomization of the file time of each grain is optional. You can use the default randomness value of 1s (from proj4base.sal’s make-granulate-score function), but you might also like to hear a less scrambled time-stretch effect by setting randomness to 0.
  2. When you are finished, executing play part1() should produce about 3s of sound.

Part 2 – Pitch Shifted Grains by Changing Sample Rates

  1. Changing Pitch: Rather than playing the grains back at the original speed, grain pitch can be changed by resampling them so that they play faster or slower. Recall that sounds are immutable, so how do you play a sound faster or slower? Answer: the sound behavior makes a sound stretchable and shiftable. The grain is already in an environment with a stretch factor, and that stretch factor has already been applied to compute the grain, so if we just wrote sound(grain) or sound(grain) ~ 1.5, the current environment stretch factor would be applied again – not good. Instead, we use “absolute stretch” (~~) to replace the stretch factor in the environment with an absolute value while evaluating sound(grain). The result is that the grain is resampled to make it longer or shorter, depending on the value of the stretch factor. Find the expression sound(grain) ~~ (1.0 / speed) in the function file-grain to see how this works in the code.
  2. In proj4.sal modify make-granulate-score to accept a speed: keyword parameter (default value 1) that passes the speed value through the score to calls on file-grain so that you can control the pitch shifting of grains.
  3. In proj4.sal define the function part2() that will produce a granular synthesis sound (taking grains from your chosen sound file) with the following parameters:
    • Grains are transposed up by one octave (a factor of 2)
    • [The remaining parameters are the same as in Part 1.]
    • Grain duration = 100ms
    • Grain IOI = 50ms
    • Total resulting sound duration = (approximately) 3s
    • Stretch factor = 3 (thus the grains are taken from a roughly 1s interval of the sound file, which is “stretched” by a factor of 3 to produce about 3s of sound).
    • Include all the grains in the sound (density is 1.0).
    • The use of randomization of the file time of each grain is optional. You can use the default randomness value of 1s (from proj4base.sal’s make-granulate-score function), but you might also like to hear a less scrambled time-stretch effect by setting randomness to 0.
  4. When you are finished, executing play part2() should produce about 3s of sound. The sound should, of course, be an octave higher than in Part 1 (but the same duration).
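
The absolute-stretch idea from step 1 can be seen in isolation with a tiny hedged fragment (sample.wav and the 0.1s grain duration are just placeholders):

set grain = s-read("./sample.wav", dur: 0.1)
set shifted = sound(grain) ~~ (1.0 / 2.0)  ; speed 2: half the duration, up one octave

Inside file-grain the same expression runs with the environment's stretch factor in effect, which is exactly why absolute stretch (~~) is needed there instead of ~.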

Part 3 – Lower Density, Sparse Grains

  1. Rather than playing all the grains, creating a dense overlapped texture, you can randomly drop grains to produce a sparser texture. The make-granulate-score function includes a density parameter that gives the probability (from 0 to 1) of appending any given grain to the score as it is constructed.
  2. In proj4.sal define the function part3() that will produce a granular synthesis sound (taking grains from your chosen sound file) with the following parameters:
    • Density is 10% (0.1)
    • [Note that many of the remaining parameters are also different from Parts 1 and 2.]
    • Grain duration = 20ms
    • Grain IOI = 10ms
    • Total resulting sound duration = (approximately) 3s
    • Stretch factor = 0.3 (thus the grains are taken from a roughly 10s interval of the sound file, which is “stretched” by a factor of 0.3 to produce about 3s of sound). If the sound you chose to granulate is shorter than 10s, then adjust the parameters to granulate all of your sound and produce about 3s of output.
    • The use of randomization of the file time of each grain is optional.
  3. When you are finished, executing play part3() should produce about 3s of sound. The sound should be about the original pitch of your sound file, but it should stutter or pulse irregularly with short grains of sound.

Part 4 – Synthesized Grains

  1. Rather than pulling grains from a sound file, you can synthesize grains. It is common to use simple techniques such as sine tones, table-lookup oscillators or FM, and granular synthesis gives many additional parameters including grain density, grain duration, pitch distributions, etc.
  2. In proj4.sal, modify make-granulate-score to accept another parameter to select synthesized grains. E.g. you can add a boolean parameter sinegrain (default #f) to enable calling sine-grain instead of file-grain. Alternatively, you could provide the name of the grain function to use in a parameter to make-granulate-score, e.g. grainfunc (default quote(file-grain)).
  3. In addition, the grains in the score generated by make-granulate-score should be different when synthesized grains are selected. The function sine-grain is already (partially) implemented, and has two keyword parameters: low and high. In make-granulate-score, you can put these parameters into the score, or you can just use the default values.
  4. You will need to complete the implementation of sine-grain to randomly choose the pitch to synthesize, based on the keyword parameters low and high.
  5. In proj4.sal define the function part4() that will produce a granular synthesis sound using synthesized grains with the following parameters (which are otherwise the same as Part 1):
    • Grain duration = 100ms
    • Grain IOI = 50ms
    • Total resulting sound duration = (approximately) 3s
  6. When you are finished, play part4() should play about 3s of sound. The sound should consist of clean, rapid, bubbly sinusoidal sounds with a diversity of random pitches.
  7. Notice how different this sounds from Part 1, even though the grain parameters are about the same and only the content of the short (100ms) grains is different.
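
As a starting point, one possible completion of sine-grain picks a uniformly random pitch between the two keyword parameters and wraps the tone in a short envelope to avoid clicks (a sketch only; your details may differ):

define function sine-grain(low: c4, high: c6)
  begin
    with pitch = real-random(low, high)  ; random pitch in [low, high)
    ; triangular grain envelope: ramp up to 1 at the midpoint, back to 0
    return pwl(0.5, 1, 1) * osc(pitch)
  end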

At this point, since you have modified proj4.sal for each part of the project, you should go back and test part1(), part2(), part3() and part4() to make sure they all still work as expected. Merely loading proj4.sal should not play sounds automatically.

Composition With Granular Synthesis

Copy your proj4.sal file to proj4comp.sal so that you do not “break” anything in Parts 1-4.

Using all the tools that you have built, create a 20- to 40-second composition. In your piece, you should at least include:

  • Substantial use of granular synthesis, with
  • Dynamic amplitude, and
  • Dynamic pitch control

You may wish to alter your code further to incorporate amplitude and pitch control. Pattern generators are an excellent way to get some control over variations in parameters. You should also experiment with other parameters. In fact, ALL parameters should be considered fair game for experimentation. Shorter and longer grains have a very different sound. Very high stretch factors, very small ioi, and other changes can be very interesting, so please spend some time to find amazing sounds.
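
For example, Nyquist pattern generators such as make-cycle can supply a stream of parameter values, where each call to next returns the next item:

define variable *pitch-pat* = make-cycle(list(c3, g3, c4, e4))
; next(*pitch-pat*) yields the step numbers 48, 55, 60, 64, then repeats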

You are also free to make many granular synthesis sounds and mix them in Nyquist or in an audio editor such as Audacity.

Create the plain text file proj4comp.txt to explain:

  1. How you vary the amplitude and pitch.
  2. Any additional effects that you added.
  3. Your motivation or inspiration behind your composition.

Grading Criteria

Grading will be based on meeting the technical specifications of the assignment:

  • The code should work, it should be readable, and it should have well-chosen parameters that indicate you understand what all the parameters mean.
    • Be sure to define part1, part2, part3 and part4 functions in your proj4.sal.
    • Loading your proj4.sal code should not immediately compute sounds.
  • Submissions must be anonymous.
  • It’s best to envision what you want to do, write it down in comments, then implement your design. You can use the sample code, but should show understanding of what you are doing — make this your own creation.
  • Your composition should be 20 to 40 seconds long.
  • The use of granular synthesis should be clearly audible and clearly related to your source sound.
  • Your piece should show musical results, effective granular synthesis, mixing quality, and general effort. Bonus points may also be awarded for exceptional work.

Turning In Your Work

Please hand in a zip file containing the following anonymous files (in the top level, not inside an extra directory). Note that the following must be strictly adhered to for the convenience of machine and human graders (including you as a peer grader):

  • Code for parts 1-4, named proj4.sal
  • Code created for your composition, named proj4comp.sal. Note: Code should be clearly commented in case we want to run it.
  • 4 sound files (.wav), one each for Parts 1-4, named part1.wav, part2.wav, part3.wav, and part4.wav
  • Your composition audio, named proj4comp.wav (or .aiff).
  • Input sound for parts 1-4, named whatever you want, but it should be the filename assumed in parts 1, 2 and 3 so that part1(), part2() and part3() will run.
  • Please DO NOT include or submit sample.wav from p4.zip! It should not be needed to run your code, and we do not need another copy from every student, thank you.
  • A short statement about your composition (be sure to include the 3 points mentioned above), named proj4comp.txt
Project 3 https://courses.ideate.cmu.edu/15-322/s2021/project-3/ Tue, 23 Feb 2021 21:10:00 +0000 https://courses.ideate.cmu.edu/15-322/s2021/?p=169 Deadlines:
Due Monday, March 8, 2021 by 11:59PM Eastern.
Peer Grading Due Monday, March 15, 2021 by 11:59PM Eastern.

CORRECTION(S)

Use im instead of imod (edits made below).

Download

p3.zip

Introduction

In FM synthesis, the depth of modulation (D) and the frequency of modulation (M) together control the number of significant sideband pairs generated, using the following formula: I = D / M, where I is the Index of Modulation (a unit-less number). We measure depth of modulation in Hz: it is the number of Hz by which the carrier frequency deviates. When D = 0, no sidebands are generated (see FM Synthesis (course notes) for more detail.)

Part 1: Create an FM Instrument

Create an FM instrument, a function named fminstr, in the file proj3.sal, that takes the following keyword parameters:

  • pitch: the carrier frequency in steps (a number)
  • im: a time-varying index of modulation (of type SOUND)

You may use additional parameters to control loudness (vel:), vibrato, C:M ratio, envelope parameters, etc. The stretch factor should allow you to control the duration of the instrument sound. The default behavior should be a tone with a fundamental frequency controlled by pitch and where higher im produces more and stronger harmonics.
Be sure that your instrument uses an envelope to control overall amplitude. If you run play osc(g4) in Nyquist, you will hear an example of what your instrument should not sound like!

An example that plays the instrument is:

play fminstr(pitch: g4, im: const(0, 1)) ~ 2

In this example, the im parameter is the constant zero with a duration of 1, so this is expected to play a boring sine tone at pitch G4.

Create a function named part1 with no parameters, in the file proj3.sal. Running this should instantiate a single instance of your fminstr and return the sound, which should last about as long as the stretch factor. So the command play part1() ~ 3 should play about 3s and will demonstrate that you completed Part 1. Your part1 function can pass in and demonstrate optional keyword parameters you have added.

Hint: Be sure to test part1 with different durations (stretch factors).
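
As a rough sketch only, assuming a 1:1 C:M ratio (so M equals the carrier frequency and the depth D = I × M), an FM instrument might be structured as below; treat the envelope values as placeholders to tune by ear:

define function fminstr(pitch: c4, im: const(0, 1), vel: 100)
  begin
    with hz = step-to-hz(pitch),          ; carrier = modulator freq (C:M = 1:1)
         modulator = im * hz * hzosc(hz)  ; frequency deviation in Hz is I * M
    return fmosc(pitch, modulator) *
           env(0.05, 0.1, 0.2, 1, 0.8, 0.6) * vel-to-linear(vel)
  end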

Part 2: Time-Varying Index of Modulation

Demonstrate the instrument by using PWL to create an interesting Index of Modulation. In other words, replace const(0, 1) in the previous example with an interesting, time-varying Index of Modulation. You can use the envelope editor in the Nyquist IDE to draw an envelope if you like, and you can apply scale factors and offsets to the PWL function to fine tune your sonic result.

Keep in mind that if you put in a long PWL envelope (or any other envelope), your fminstr() code will not magically deduce you want all of the other components to stretch to the duration of the envelope.

Create a function named part2 with no parameters, in the file proj3.sal. Running this should instantiate a single instance of your fminstr with your PWL modulation and return the sound, which should be 3 to 5s long and contain obvious changes in brightness due to a time-varying im. So the command play part2() will demonstrate that you completed Part 2.

Part 3: Composition using Spectral Centroid Analysis

For a (hopefully) more interesting composition, we are going to analyze a real sound, extract the time-varying spectral centroid, and use that to control the Index of Modulation of one or more FM sounds.

Read the documentation on spectral centroids in the accompanying files (look in the downloaded directory), and try out the project3-demos.sal that is provided. To summarize, you call spectral-centroid() on an input sound (or filename), and it returns an envelope that tracks the spectral centroid of the input sound.

The idea here is that when the input sound gets brighter, the spectral centroid goes up. If we use spectral centroid to control the Index of Modulation, an increasing spectral centroid should cause the Index of Modulation to increase, and therefore the FM sound should get brighter. Thus, there should be a connection between spectral variation of the source sound analyzed with spectral-centroid() and the spectral variation of your synthesized sound.

Your technical task is to make this connection between input sound and output sound. This is a rare case where we are going to suggest assigning the spectral centroid (sound) to a global variable. If you do that, then any note can reference the spectral centroid. For example:

set *sc* = 0.003 * spectral-centroid(...)
play seq(fminstr(pitch: c4, im: *sc*),
         fminstr(pitch: c4, im: *sc*) ~ 2)

This plays two notes. The first runs nominally from 0s to 1s, and it will use the first second of the spectral centroid sound to control its Index of Modulation. The second note runs nominally from 1 to 3s (the duration is 2 because of the stretch operator ~), and this note will use the spectral centroid from 1 to 3s. It is important to note that the second note does not begin “reading” the *sc* variable from the beginning. This is consistent with the idea that, in Nyquist, sounds have an absolute start time.

  • Note 1: You could align the beginning of the spectral centroid with the beginning of the note by replacing *sc* with cue(*sc*). The cue behavior shifts its sound parameter to start at the time specified in the environment. However, we think it will be more interesting to let the IM evolve throughout your piece and let each note access the current state of the spectral centroid as the piece plays.
  • Note 2: Why did we multiply spectral-centroid(…) by 0.003? The point is that spectral-centroid is measured in Hz and will probably range into the thousands, but reasonable values for the Index of Modulation are probably going to be in the single digits. Is 0.003 the right scale factor? No, it is just a rough guess, and you should adjust it.
  • Note 3: You can use a score here rather than SEQ.

PROGRAMMING TASK: Your musical task is to create something interesting, with a duration of 30 to 60 seconds. Store your code for part 3 in proj3comp.sal. We do not want to box you in to a specific procedure, but we are looking for a result that shows interesting spectral variation driven by the spectral centroid of a source sound.  Some possibilities include:

  • Make a melody and use the spectral centroid to control modulation.
  • Use your voice to produce a sound for analysis by spectral centroid.
  • If the spectral centroid is not as smoothly varying as you want it to be, consider using a low-pass filter, e.g. lowpass8(*sc*, 10) will remove most frequencies in *sc* above 10 Hz, leaving you with a smoother control function. (See the project3-demo.sal file.)
  • Rather than a melody, use long, overlapping tones to create a thick texture.
  • Use a mix of centroid-modulated FM tones and other tones that are not modulated. Maybe the modulated tones are the foreground, and unmodulated tones form a sustained background. Maybe the modulated tones are a sustained “chorus” and an unmodulated melody or contrasting sound is in the foreground.
  • The FM depth of modulation can be the product of a local envelope (to give some characteristic shape to every tone) multiplied by the global spectral centroid (to provide some longer-term evolution and variety).
  • Add some reverb, stereo panning (see the PAN function), or other effects.
  • Experiment with different source sounds. Rock music is likely to have a fairly constant high centroid. Random sounds like traffic noise will probably have low centroids punctuated by high moments. Piano music will likely have centroid peaks every time a note is struck. Choose sources and modify the result with a smoothing low-pass filter (see the lowpass8 suggestion above) if needed to get the most interesting results.
  • You may use an audio editor for extending/processing/mixing your piece, but we must be able to hear clearly your instrument, score, and spectral-centroid-based modulation.
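
For instance, the product-of-envelopes idea above (a local per-note shape multiplied by the global spectral centroid) can be written directly as the im argument of a note; the pwl breakpoints and scale factors here are guesses to adjust by ear:

play fminstr(pitch: c4,
             im: pwl(0.2, 4, 0.8, 4, 1) * *sc*)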

Grading Criteria

Grading will be based on meeting the technical specifications of the assignment:

  • The code should work, it should be readable, and it should have well-chosen parameters that indicate you understand what all the parameters mean.
    • Be sure to define fminstr and part1 and part2 functions in your proj3.sal.
    • Loading your proj3.sal code should not immediately compute spectral centroids or FM tones.
  • Your submissions must be anonymous.
  • Regurgitating minor variations of the sample code does not indicate understanding and may result in large grading penalties.
  • It’s best to envision what you want to do, write it down in comments, then implement your design. You can refer to the sample code, but should avoid using that as a template for your project — make this your own creation.
  • Your composition should be 30 to 60 seconds long.
  • The FM index of modulation control should be clearly audible and clearly related to the sound that you analyzed.
  • Your piece should show musical results, effective FM instrument design, novel control and composition strategies, mixing quality, and general effort. Bonus points may also be awarded for exceptional work.

Turning In Your Work

Note that the following must be strictly adhered to for the convenience of machine and human graders (including you as a peer grader):

  • Code for part 1 and 2, named proj3.sal
  • Code for part 3, named proj3comp.sal. Note: Code should be clearly commented in case we want to run it.
  • Audio results for part 2, named proj3part2.wav
  • Audio results for part 3, named proj3comp.wav
  • Input sound analyzed for part 3, named proj3input.wav
  • A short statement of your intention of the composition, named proj3comp.txt (If you used an audio editor to modify the output of part 3, describe how the sound was modified clearly so we can determine how to relate your part 3 results with your audio results.)

All of these files must be in the top level of the zip file, not in a subfolder within the zip file. Remember to keep your submission anonymous for peer grading.

Project 2 https://courses.ideate.cmu.edu/15-322/s2021/project-2/ Tue, 09 Feb 2021 16:02:00 +0000 https://courses.ideate.cmu.edu/15-322/s2021/?p=118 Deadlines:
Due 11:59 pm Eastern on February 22, 2021.
Peer grading due 11:59 pm Eastern on March 1, 2021.

Download

P2.zip

Envelopes

Envelopes are very important in sound synthesis. Envelopes are the primary way to “shape” sounds from various sound sources so that the sounds do not suddenly pop on and off or blast at a constant level.

You should have learned to use both “pwl” and “env” for envelopes in the on-line lessons. Hint: Look up “pwl” and “env” in the Nyquist Reference Manual and pay attention to their behaviors under different stretch environments.
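
For instance, both of the following shape a roughly one-second tone, but they behave differently when stretched (try each with and without ~ 3): pwl breakpoint times scale with the stretch factor, while env keeps its attack and release times fixed and stretches only the sustain.

play pwl(0.1, 1, 0.9, 1, 1) * osc(c4)
play env(0.05, 0.1, 0.3, 1, 0.8, 0.6) * osc(c4)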

Composition Programming

The following code creates a short score and plays it when you click the “F2” button in the Nyquist IDE (maybe your keyboard F2 key will work too):

variable *ascore* =
      {{0 1 {note pitch: c2 vel: 50}}
       {1 0.3 {note pitch: d2 vel: 70}}
       {1.2 0.3 {note pitch: f2 vel: 90}}
       {1.4 0.3 {note pitch: g2 vel: 110}}
       {1.6 0.3 {note pitch: a2 vel: 127}}
       {1.8 2 {note pitch: g2 vel: 100}}
       {3.5 2 {note pitch: d2 vel: 30}}
       {3.7 2 {note pitch: d4 vel: 70}}
       {3.9 2 {note pitch: d5 vel: 70}}
       {4.1 2 {note pitch: d6 vel: 50}}
       {4.9 0.5 {note pitch: g4 vel: 40}}
       {5.2 2.5 {note pitch: a4 vel: 30}}}
    
function f2() play timed-seq(*ascore*)

variable *bscore* ; to be initialized from *ascore*

function f3() play timed-seq(*bscore*)

Create a file named instr.sal, paste this code in, and try it.

Define an Instrument

Your first task is to define a new instrument, also in the file instr.sal, that multiplies the output of a table-lookup oscillator by an envelope. The instrument should consist of

  1. a table created as shown in the lectures using the build-harmonic function and containing at least 8 harmonics,
  2. an oscillator (use osc), and
  3. an envelope.

The instrument should be encapsulated in a function named simple-note that at least has the two keyword parameters seen in *ascore* (pitch: and vel:). The vel: parameter represents velocity, a loudness control based on MIDI key velocity which ranges from 1 to 127. Convert MIDI key velocity to amplitude using the vel-to-linear function (see the Nyquist Reference Manual). Your simple-note instrument function should satisfy these requirements:

  • starting time is of course the default start time from the environment,
  • duration is determined by the stretch factor in the environment, e.g. simple-note(pitch: c2, vel: 100) ~ 2.6 should have a duration of 2.6s,
  • maximum amplitude is proportional to vel-to-linear(vel), where vel is the value of the vel keyword parameter,
  • pitch is controlled by the pitch keyword parameter, which gives pitch in “steps”, e.g. C4 produces middle-C.
  • Any “instrument” function should return a sound (and not play the sound).
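
A skeleton meeting these requirements might look like the following sketch; the harmonic amplitudes and envelope values are arbitrary placeholders (choose your own, with at least 8 harmonics):

define variable *mytable* =
  sim(1.0 * build-harmonic(1, 2048), 0.5 * build-harmonic(2, 2048),
      0.3 * build-harmonic(3, 2048), 0.25 * build-harmonic(4, 2048),
      0.2 * build-harmonic(5, 2048), 0.15 * build-harmonic(6, 2048),
      0.1 * build-harmonic(7, 2048), 0.05 * build-harmonic(8, 2048))
set *mytable* = list(*mytable*, hz-to-step(1), #t)  ; table spec for osc

define function simple-note(pitch: c4, vel: 100)
  return osc(pitch, 1, *mytable*) *
         env(0.02, 0.1, 0.2, 1, 0.8, 0.5) * vel-to-linear(vel)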

Play *ascore* with your instrument

To show off your instrument, transform *ascore* using one of the score-* functions to use your instrument rather than note. (It is your task to become familiar with the score-* functions and find the one that replaces function names.) Assign the transformed score to *bscore* at the top level of instr.sal; for example, you might have something like the following in your code:

set *bscore* = function-to-transform-a-score(*ascore*, perhaps-other-parameters)

You may add one or more additional functions to instr.sal as needed.

Now, F3 (as defined above) should play the altered score, and it should use your simple-note instead of the built-in piano note function note. The resulting sound should demonstrate your instrument playing different pitches, durations, and velocities.

Record Tapping

Record a short period of rhythmic tapping sound (<10 sec) and save as tapping.wav. This can be your own tapping or some other source.

Tap Detection

Download the P2.zip file, which contains code, a test sound, and some documentation.

You will now create a new file named tapcomp.sal, which will use the tapping.wav data to create a composition. In tapcomp.sal, load the given code tap-detect.sal to detect a list of tap times and tap amplitudes from tapping.wav. You must use load "tap-detect.sal" with no path prefix, i.e. do not write something like load "P2/tap-detect.sal". Refer to the HTML documentation for tap-detect and inspect the tap-detect code to get an idea of how you can generate a score based on this code.

Convert Taps to Score

Add your own functions to tapcomp.sal to map the list of tap times and amplitudes into a Nyquist score named *tapscore* that plays your instrument from instr.sal (so of course, you should have the statement load "instr.sal" to define your instrument). When you are done, loading tapcomp.sal should immediately run your code and produce *tapscore*, which should be a direct 1-to-1 translation of the taps to a score.

NOTE 1: Do NOT use score-gen. Instead, you must use loop and list primitives to construct a score. You may use the linear-to-vel function to convert tap strength to a velocity parameter.

NOTE 2: Note duration is up to you. It can be fixed, random, or related to the tap inter-onset times.

Make Music

Create between 20 and 40 seconds of music using your code. You can create new scores based on your *tapscore* (but please do not modify *tapscore*), mix multiple sounds or “tracks,” and create sounds by other means (as long as at least one “track” is derived according to the recipe above); you can also manipulate your results in an audio editor (again, as long as the tap-driven instrument is clearly heard). Keep all the code for your piece in tapcomp.sal, but do not compute your piece immediately when the code is loaded (at least not in the version you hand in). It may be convenient to use F4 and other buttons in the IDE to run different pieces of your code. E.g. you could add something like this to tapcomp.sal:

function F4() play compute-my-tap-piece(*tapscore*)

Do NOT write any play command or commands to compute your piece at the top-level. E.g. do NOT write something like

play compute-my-tap-piece(*tapscore*)

unless it is inside a function (like F4 above), and the function does not get called when the file is loaded.

Name your composition sound file p2comp.wav (AIFF format is acceptable too.)

Describe Your Intentions

Create a simple ASCII text file p2comp_README.txt or pdf file p2comp_README.pdf with a sentence or two describing your goals or musical intentions in creating p2comp.wav.

Submission

Submit files in a zip file to ATutor. The following files should be at the top level in your zip file (you may use .aiff sound files instead of .wav sound files):
instr.sal: A SAL program implementing your table-lookup instrument,
tapping.wav: The recorded taps used as input to tap-detect.sal,
tapcomp.sal: A SAL program to convert tap times to a Nyquist score.
p2comp.wav: Your project 2 composition (at least 20 seconds).
p2comp_README.txt: Optional additional info on your composition.

Decibels

Read “The Bell Scale”, “The Decibel Scale (dB),” and “Decibels for Comparing Amplitude Levels” in Loudness Concepts and Panning Laws, but to summarize some important points:

  • Decibels are a unit-less measure of the ratio of power. Each Bel represents a factor of 10.
  • Since power increases with the square of amplitude, a factor of 10 change in amplitude gives a factor of 100 in power, which is 2 Bels (log10(100) = 2), which is 20dB.
  • A very useful number is: A factor of 2 in amplitude is very nearly 6dB.

Remember that 6 dB represents a factor of 2 in amplitude!
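
A quick worked example: halving the amplitude gives a power ratio of (1/2)^2 = 1/4, so

    dB = 10 * log10(1/4) ≈ -6.02 dB

which is the "factor of 2 in amplitude ≈ 6 dB" rule of thumb. Equivalently, for amplitudes, dB = 20 * log10(amplitude ratio).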

Grading Criteria

Here are some things you might want to check before your final submission. Some criteria will be checked automatically. Some will be the basis of peer grading:

  • Your simple-note is as directed, accepts 2 keyword parameters, and has proper pitch, amplitude, time and duration control.
  • You define and convert *ascore* to *bscore* that calls simple-note when instr.sal is loaded.
  • tapping.wav contains at least 10 taps.
  • *tapscore*, generated by loading tapcomp.sal, should be a valid Nyquist score. (You can use score-print to see it nicely formatted.)
  • Your submission should be complete and anonymous.
  • The audio should have acceptable quality (no clipping or unintended distortion, no bad splices, adequate overall level, etc.)
Project 1 https://courses.ideate.cmu.edu/15-322/s2021/project-1/ Mon, 01 Feb 2021 15:00:00 +0000 https://courses.ideate.cmu.edu/15-322/s2021/?p=90 Project 1 due Monday, February 8, 2021 by 11:59PM Eastern
Peer grading due by Monday, February 15, 2021 by 11:59PM

Fade In, Fade Out, Cross-Fade
You should be familiar with the terms fade in, fade out, and cross-fade. As a reminder, here’s a schematic to illustrate these terms. Remember that, when incorporating sounds into compositions, any sound without a “natural” beginning and ending should be edited with a fade in and fade out to avoid clicks and pops at the beginning and ending. When joining sounds together, one often overlaps the fade out of one sound with the fade in of the next sound, creating a cross-fade.

Project Instructions 

We recommend that you create a directory/folder on your computer for Project 1 and store all of your files and subdirectories/subfolders in this location. You will then zip the project folder when you are ready to submit. Do not name the folder in a way that will identify yourself.

  1. Obtain one or more sounds from freesound.org. (You may also record sounds of your own, including voice.) Store this sound as sound0.wav in your project folder.
  2. Using a sound between 3 and 5 seconds long, make a 1-second-long cross-fade from the sound to a copy of itself. This means the first copy of the sound will fade out over 1 second while simultaneously the second copy fades in over 1 second, making the two sounds overlap during this 1 second period. Then, operating on the combined (spliced) sound, fade out over 2 seconds. Store this sound as sound1.wav in your project folder.
  3. Using the same original sound source, make a 20-ms-long cross-fade from the sound to a copy of itself. Then, operating on the combined (spliced) sound, fade out over 100 ms. Store this sound as sound2.wav in your project folder.
  4. Use Audacity to manipulate your sounds using splicing, mixing, and any effects you wish. The final mix should have a duration of 30 to 60 seconds. Your mix should either be a soundscape — layers of sound with different amplitude, panning, and effects — or a “beat” — a rhythmic sequence that loops. In either case, you should construct something original rather than using a ready-made sound. Use the envelope controls in Audacity to shape the loudness contour. E.g. you can fade in and out, ramp up to a climax, or create rhythm with amplitude change. Some form of loudness control should be obvious in your result. Your composition may be mono or stereo. Store your final mix as comp.wav in your project folder, and store the original sounds you used to create the mix in a subfolder called origin. (NOTE: If you have more than 10MB or 1 minute of sound, include representative sounds, keeping the total data in the origin subfolder under 10MB.)
  5. Write answers to the following questions in the plain text file answers.txt stored in your project folder at its top level. Again, do not write anything that may identify yourself; keep your submission anonymous for peer grading.
    1. State the duration of your sound from part 4. To help our autograder, type a string exactly in the form: DURATION=37S (Note: all caps including “S”, no spaces, state the decimal integer number of seconds – rounded from the true duration – followed by capital S. The string “DURATION=” should appear only once in the file.)
    2. Compute the size of the sound file assuming you save the file as a single-channel .wav file with 16-bit samples and a 44kHz sample rate. Does the number match the true size? To help our autograder, type a string exactly in the form: SIZE=180MB. (Again, all caps, no spaces, state the rounded-to-integer decimal number of megabytes – 1,000,000 bytes = 1MB – and “SIZE=” should appear only once with your answer.)
    3. What was your intention in the design/composition of your sound?
  6. Submit your project as a zip file. The following files should be at the top level in your zip file (you may use .aiff sound files instead of .wav sound files):
    sound0.wav: original sound used in Part 1 and Part 2
    sound1.wav: cross-fade/fade-in/fade-out sound, 1s cross-fade, 2s fade
    sound2.wav: cross-fade/fade-in/fade-out sound, 20ms cross-fade, 100ms fade
    origin/: a folder that contains all the original sound files for your composition; if you have more than 10MB or 1 minute of sound, include representative sounds, keeping the total data in origin/ under 10MB.
    comp.wav: your composition
    answers.txt: short answers
    comp_README.txt: optional additional info on your composition
    origin/README.txt: optional credits and info on your source material
  7. Getting the right files in the right place is crucial. Unzip your zip file into a fresh location and check the resulting directory for each of the files and directories above. For example, your submission could contain the following:
    sound0.aiff
    sound1.aiff
    sound2.aiff
    origin/sound.aiff
    comp.aiff
    comp_README.txt
    answers.txt
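
To sanity-check your answer to question 5.2: the size of uncompressed audio is roughly sample rate × bytes per sample × channels × duration (plus a small WAV header of about 44 bytes). For a hypothetical 30-second mono file at 44,100 samples per second with 16-bit (2-byte) samples:

    44,100 * 2 * 1 * 30 = 2,646,000 bytes ≈ 2.6 MB

(If you read “44kHz” literally as 44,000 Hz, the estimate changes only slightly.)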

Grading is based on the following points:

    • Your submission should be anonymous.
    • Your cross-faded sounds should be correctly constructed.
    • Your answers for duration and size should be correct.
    • Your composition should not have any clipping.
    • Your composition should use the Audacity envelope tool or equivalent to make obvious loudness contrasts.
    • Your composition should show other editing or processing techniques.
    • Your composition should engage the listener.

Peer-grading will be assigned after the due date and will be due Monday, February 15 by 11:59PM Eastern.

Project 0 https://courses.ideate.cmu.edu/15-322/s2021/project-0/ Sun, 31 Jan 2021 15:00:00 +0000 https://courses.ideate.cmu.edu/15-322/s2021/?p=47 Project 0 Due Wednesday, February 3, 2021 by 11:59PM Eastern

Note: Although this project does not count toward your final grade, you must do this setup project in order to prepare for all subsequent projects.

  1. Go to Canvas and complete the Academic Integrity Policy Quiz. It will require you to carefully read the course policy on our home page and the CMU university policy and then attest that you will abide by these guidelines for the course.
  2. Install Nyquist on your computer. This requires installing the Java Development Kit (JDK). It will install a folder for Nyquist and a Nyquist IDE. Using the Nyquist IDE, play some sounds with the Sound Browser (find the Browse button or menu item). Report any problems on Piazza!
  3. Install Audacity on your computer.
    Record and view a sound. Try the spectral view. Report any problems on Piazza!
  4. Make a sound using Nyquist by executing the following command (type in the input window, upper left):
    play pluck(c4)
    Look carefully at the output window and you will see that Nyquist saves the computed sound. Find that sound on your computer, move it to a Project0 folder and rename it to p0.wav.
  5. Create a plain text file named p0.txt in your Project0 folder. You can put any decent text in it, just be sure not to identify yourself.
  6. Combine p0.wav and p0.txt in a zip file named p0.zip and submit this to [CORRECTION:] http://www.music.cs.cmu.edu/icm-s21. Submission instructions are here.

Grading: You will receive full autograde credit if p0.wav is correctly computed and handed in with p0.txt. If you fail to get Nyquist or Audacity running, report this on Piazza in a private post (but do not post anonymously or we won’t know who you are). A teaching assistant will reach out to you as soon as possible.

Peer Grading: After the grading deadline, you will be asked to grade 3 submissions from your peers. This should be very easy since there was little to do, but we want you to experience what peer grading will look like for future projects. If there are any problems, get help on Piazza so Project 1 will go more smoothly. Peer Grading Due Monday, February 8, 2021 by 11:59PM Eastern.
