Every 15-622 student is required to complete a term project (in addition to the composition project that 15-322 students also complete). Students may form teams, but it is essential that each team member take responsibility and credit for some aspect of the project.
An Example Project
An example is the best way I can describe the features of a good project. This project was done by Chris Yealy and is now part of Nyquist, but I will describe it as if it has not been done: The latest Nyquist IDE (based on a project from a previous class) is great for editing text and displaying sounds (at least short ones), but there is no support for entering functions of time. This project adds a new view to the IDE allowing the user to edit piece-wise linear functions of time. Each function has a name, and multiple functions can be stored in one file. The functions are implemented by defining a Lisp function containing a call to PWL. If the user creates a time function named foo in the IDE, then the user can write foo() in Nyquist code to obtain the function.

When the user opens one of these files in the IDE, the IDE parses the code and can then display the functions graphically. The user can add, move, and delete breakpoints, change function names, copy functions to a new name, etc. It would be great to be able to display multiple functions in different colors so they can be edited in concert. Also, it should be possible to snap to user-controlled grid lines, zoom in or out, change the vertical axis, etc.
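To make the storage format concrete, here is a minimal sketch of what a saved time function might look like; the exact file format is up to the implementer, and the breakpoint values are made up:

    ;; A time function named "foo" saved as an ordinary Lisp definition.
    ;; PWL is standard Nyquist: (pwl t1 l1 t2 l2 ... tn) starts at 0,
    ;; passes through each (time, level) breakpoint, and ends at 0 at tn.
    (defun foo ()
      (pwl 0.1 1.0  0.4 0.5  1.0))

    ;; The user can then use it anywhere a Nyquist signal is expected,
    ;; e.g. as an amplitude envelope: (mult (osc c4 1.0) (foo))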
Having convenient direct-manipulation access to functions of time opens up a new way to control synthesis. Complex sounds with several parameters can be made to evolve according to time-varying controls that are edited graphically. These parameters in the editor become a sort of musical score notation.
Why is this good? It has the following features that I am looking for:
- It’s Computer Music: there’s computation, and there’s music.
- It requires some thought about music representation and music processing.
- It extends Nyquist (or Audacity). Notice that the project is not even a complete stand-alone system, only the extension of one, so the student does less work and leverages what’s already there.
- The project isn’t too easy, but it’s not impossibly hard.
- Students will be able to use this in the future.
Your project will probably not have all of these features, but I hope this gives you a good idea of what I am looking for. Review the list above with respect to your project.
Students who are primarily composers are not expected to build complex systems, use Java or C, or write complex digital signal processing code. Your options include but are not limited to:
- port algorithms from other systems to realize an instrument (not necessarily a traditional one), to create an effect, to implement a composition algorithm, etc.
- pursue a sophisticated approach in your composition project that involves substantial use of Nyquist, probably resulting in a longer-than-required composition
- work with Jesse to develop teaching materials based on your learning experience in the course.
The Term Projects in Stages
The project has several parts:
Part 1: Proposal (due Thursday, Oct 17)
A proposal is required. Your proposal should state clearly:
- the general nature of what you will implement,
- what code, journal articles, etc. you will rely on, port, implement, modify, etc.,
- a significant milestone that you will report/deliver in the interim report,
- a specific list of deliverables for your final project submission, and
- how you will measure success, e.g., what functionality you will demonstrate.
All projects must be approved by the instructor, so this really is a proposal. I will usually suggest some changes, and if the project really seems out of line, I’ll let you know early and without penalty. (A clear proposal for a bad project gets a good grade and a request for another submission; a vague proposal for a good project gets a bad grade and an OK to proceed.)
Submit your proposal (as a PDF file) by email to rbd at cs.cmu.edu. The deadline is 11:59pm, Thursday, Oct 17.
Part 2: Interim Report (due Thursday, Oct 31)
An interim report is required. Ideally, you will have completed the milestone described in your project proposal. If you have not, you should describe clearly what you have done, what problems you encountered, whether this will affect your ability to finish the project as planned, and any modifications (subject to instructor approval) you feel you should make to the project. Submit the report as a PDF file by email to rbd at cs.cmu.edu. You may attach a zip file with interim code as long as the TOTAL submission, including the PDF, is less than 1MB. If you have sound results, more code, and other data to substantiate your report, put it somewhere on the web and include a link in your email.
Part 3: Class Presentation (in class, Thursday, Dec 5)
You will present your project to the class. Plan for 5 minutes maximum. This is a very short presentation. You should present your work in three parts. Use slides (preferably PowerPoint) to supplement your talk.
- Motivate your work. What purpose (outside of your education) is served by your project? What existing problem does your work solve? (1 slide)
- State your approach and goals. How does your project solve the problem just stated? Be sure to state clearly and with some detail exactly what you did. (1 or 2 slides)
- What is the state of your project? Tell what is complete and working as well as what would be the next step if you (or someone else) were to continue working. (1 slide)
You should be prepared to show slides from your personal laptop. Assume there is a VGA display connector; if you do not have one, arrange to bring an adapter, test your machine in the classroom, or see the instructor. In addition, bring your presentation on a USB memory stick for emergency use of another machine.
If you have sound file examples (recommended):
- be prepared to play sound files from your laptop
- do not assume that links from PowerPoint to soundfiles will be maintained when your slides are merged into one big file.
Assuming a 4-minute talk, you can rehearse it 5 times in 20 minutes. Do it! (If this seems unnecessary and you are not a musician, do it just to learn what musicians know that you don’t — seriously.)
Part 4: The Written Project Submission (due Wednesday, Dec 4)
Your goal is to make a complete package of software, documentation, example code, and sound examples. For example, if your project were to create a phase vocoder, you would submit the software implementation, document how to apply the vocoder and use all the parameters, provide code examples that apply the vocoder to a test sound, and include one or more sounds that illustrate the effect of the vocoder. Another student should be able to use the vocoder given your final submission. Put everything in a zip file, put the file on the web, and email a link to rbd at cs.cmu.edu.
A List of Project Ideas
Feel free to ask the instructor for more details on any of these.
Nyquist-related
Extending Nyquist by writing functions in Nyquist
- Port instruments from articles or other systems
- Develop a library of effects with examples and documentation
- Read about pitch class sets in Forte, The Structure of Atonal Music, and provide functions in Nyquist to generate or look up sets using Forte’s notation, test for equivalence, etc.
- Read about Cage’s Atlas Eclipticalis. Find astronomical data and create an algorithmic composition based on the data and inspired by Cage.
- Spectral features as control parameters for synthesis. What does this mean? Many interesting forms of synthesis are based on the analysis of one sound to obtain data to control another. In this case, spectral features like brightness (the average frequency weighted by amplitude), flux (the derivative of spectral amplitude), and frequencies of peaks (also known as peak-tracking) can be used to obtain control information. A project would consist of some analysis software to extract spectral features and some compelling examples of sound synthesis using those features. This could be an extension of Project 3 (using spectral centroid to control synthesis). A sketch of the centroid computation appears after this list.
- Implementation or ports of spectral manipulation software (e.g. from Eric Lyon and Chris Penrose)
- There are “sample packs” at freesound.org. Find an interesting sample pack and build an interface via Nyquist function calls, e.g., bell-init() and bell-note(...).
- “Autotune”: Automatic “pitch correction” software made its way from studio productions such as Cher’s “Believe” into popular culture when The Gregory Brothers autotuned news broadcasts and political speeches to music tracks. Project suggestion: Import a speech track into Audacity and label word boundaries. Export the labels. Write Nyquist code to import Audacity labels and split the speech track into separate units. Read a Nyquist score structure indicating a sequence of target pitches and durations. Use a vocoder to synthesize speech at the indicated pitches and durations. This could be a 2-person project with one person implementing the vocoder and another processing the melody and speech data to determine input data for the vocoder and to reassemble vocoder output.
- Linux has some nice open source software synthesizers. Reimplement an entire synthesizer or just one preset in Nyquist and try to get the same sound.
- Fundamental frequency estimation: The YIN algorithm is implemented in Nyquist for fundamental frequency (pitch) estimation, but for steady tones, you can probably obtain more accuracy using autocorrelation or enhanced autocorrelation over multiple periods. The idea is to shift a signal by a few periods and compute the correlation between the shifted and original signals (this is autocorrelation). The autocorrelation peaks where the signal is shifted by a whole number of periods. By spanning multiple periods and perhaps by doing some interpolation, one can hope to get sub-sample period estimation, which is necessary for tuning and other applications. Hint: the autocorrelation can be computed quickly using the FFT. A simple time-domain sketch appears after this list.
- Students sometimes ask about obtaining parameters from recorded sounds for use in synthesis. Write a Nyquist function that takes a short sound as input and performs a fundamental frequency (pitch) analysis to determine the length of periods. Resample the signal so that the period becomes a power of two. Extract an exact period from the audio and perform an FFT. The FFT bins represent the harmonics of the signal. Convert these complex values to magnitudes. Synthesize a waveform by summing sinusoids, then listen to the synthesized tone and compare it to the original sample. A sketch of the resynthesis step appears after this list. Group project idea: someone else in the group can take a series of extracted waveforms and use spectral interpolation synthesis (see siosc in the Nyquist manual) to create tones with a time-varying spectrum. This project could also be combined with the previous idea for fundamental frequency estimation.
- David Wessel created a very interesting synthesis technique that takes a very large FFT, estimates the peak frequencies and amplitudes, and then builds an array of amplitudes indexed by 0.01-semitone units (called “cents”). Then a few hundred sines are generated, with each frequency picked randomly according to the probability density function represented by the array. Sines play for a short time and fade out, only to pick a new random frequency from the table. The result is a shimmering texture that approximates the input sound spectrum, which can be something very rich like an orchestra. I think this could be done without much difficulty in Nyquist.
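For the spectral-features idea above, the centroid (brightness) computation is just an amplitude-weighted average of bin frequencies. Here is a minimal Lisp sketch, assuming you have already converted one FFT frame (e.g., from snd-fft) into an array of bin magnitudes; the function name and arguments are hypothetical:

    ;; Estimate the spectral centroid of one analysis frame.
    ;; mags: array of bin magnitudes; sr: sample rate; fft-size: FFT length.
    (defun spectral-centroid (mags sr fft-size)
      (let ((num 0.0) (den 0.0)
            (bin-hz (/ sr (float fft-size)))) ; frequency width of one bin
        (dotimes (i (length mags))
          (setf num (+ num (* i bin-hz (aref mags i))))
          (setf den (+ den (aref mags i))))
        (if (> den 0.0) (/ num den) 0.0)))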
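For the fundamental frequency estimation idea, here is a simple time-domain sketch (without the FFT speedup mentioned in the hint). It assumes you have an array of samples (obtainable with snd-fetch-array) and searches for the lag with the maximum autocorrelation; a serious implementation would add normalization and interpolation for sub-sample accuracy. The names are hypothetical:

    ;; samples: array of audio samples; sr: sample rate in Hz.
    ;; Searches lags for the pitch range min-hz..max-hz and returns
    ;; the estimated fundamental frequency in Hz.
    (defun acf-pitch (samples sr min-hz max-hz)
      (let* ((n (length samples))
             (min-lag (truncate (/ sr max-hz)))
             (max-lag (truncate (/ sr min-hz)))
             (best-lag min-lag) (best -1.0))
        (do ((lag min-lag (1+ lag)))
            ((> lag max-lag) (/ (float sr) best-lag))
          (let ((sum 0.0))
            (dotimes (i (- n max-lag))
              (setf sum (+ sum (* (aref samples i)
                                  (aref samples (+ i lag))))))
            (if (> sum best)
                (progn (setf best sum) (setf best-lag lag)))))))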
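For the resynthesis step of the recorded-sound analysis idea, Nyquist’s standard build-harmonic/osc wavetable mechanism does the sinusoid summing for you. A minimal sketch, assuming you already have a list of harmonic magnitudes taken from the FFT bins (the function name is hypothetical):

    ;; amps: list of harmonic amplitudes, fundamental first.
    ;; Returns an osc-compatible wavetable: (sound, pitch tag, periodic flag).
    (defun table-from-harmonics (amps)
      (let ((table (scale (car amps) (build-harmonic 1.0 2048))))
        (dotimes (h (- (length amps) 1))
          (setf table (sim table
                           (scale (nth (+ h 1) amps)
                                  (build-harmonic (+ h 2.0) 2048)))))
        (list table (hz-to-step 1) t)))

    ;; Compare the reconstruction to the original by ear, e.g.:
    ;; (play (osc c4 1.0 (table-from-harmonics '(1.0 0.5 0.25 0.1))))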
Nyquist Resources
- There are a number of examples in the demos directory that are written and described as Lisp code. Port these to SAL, revise the documentation, and test.
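The porting itself is largely mechanical; most of the work is in testing and revising the documentation. For example, a fragment like

    Lisp:  (play (seq (osc c4) (osc d4)))
    SAL:   play seq(osc(c4), osc(d4))

shows the typical transformation: prefix expressions become function-call syntax, and top-level commands like play become SAL statements.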
Extending Nyquist by writing functions in C
- I have some code from a previous project to read Gravis Ultrasound Sound Font files. The goal is to be able to import sounds, envelope data, etc. from these files to construct a Nyquist function that can play instruments described by these files. This would instantly expand Nyquist’s library of conventional instrument sounds.
- Alternatively, port fluidsynth (open source) to Nyquist so that Nyquist can load soundfonts and play soundfont instruments.
- M. C. Sharma gave me the idea for the “robot voice” mode in the Nyquist Phase Vocoder. He has created some extensions that allow you to control pitch, but has only provided code and a demo, so I can’t explain the principles or algorithm. This project would be to decipher his code (in Java and a sketch in C, I think), integrate the algorithm into the cmupv library, and build an interface to the new functionality from Nyquist Xlisp/SAL.
- MQ analysis/synthesis (or the more advanced Loris system or Juan Pampin’s system)
- Max Mathews created an interesting synthesis method based on a vector (phasor) that rotates and decays incrementally on every step (sample). This is essentially a decaying partial. He leaves many of these running and simulates hitting, strumming, or bowing by adding offsets. This would be an interesting new unit generator for Nyquist. A sketch of the recurrence appears after this list.
- Support VST plugins in Nyquist.
- Allow Nyquist to import csound unit generators.
- Add a generator to Faust to output Nyquist primitives
- Support SDIF I/O (see SDIF library from UCB)
- Add support for Allegro score representation using the allegro library (written in C++).
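For the Max Mathews rotating-phasor idea above, the per-sample update is a 2-D rotation combined with a decay factor just below 1. Before writing a C unit generator, the math can be prototyped (slowly) in Lisp using snd-from-array; the function name and parameters here are hypothetical:

    ;; One decaying partial: rotate (x, y) by 2*pi*hz/sr radians per
    ;; sample and shrink the vector by the factor decay each step.
    (defun phasor-partial (hz decay dur sr)
      (let* ((n (truncate (* dur sr)))
             (samps (make-array n))
             (theta (* 2.0 3.141592654 (/ hz (float sr))))
             (c (cos theta)) (s (sin theta))
             (x 1.0) (y 0.0))  ; the initial offset is the "hit"
        (dotimes (i n)
          (setf (aref samps i) x)
          (let ((nx (* decay (- (* c x) (* s y))))
                (ny (* decay (+ (* s x) (* c y)))))
            (setf x nx) (setf y ny)))
        (snd-from-array 0.0 sr samps)))

    ;; e.g. (play (phasor-partial 440.0 0.9999 2.0 44100.0))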
Extending the Nyquist IDE by writing functions in Java
- Add an “oscilloscope” window that displays waveforms as they are generated in Nyquist.
- The Nyquist IDE has plenty of rough edges. I don’t like to encourage “cosmetic coding” as a class project, but if you are an expert Java GUI programmer, there are a number of problems that could use your skills:
  - The window layout is different on every system, but there must be a way to do a good job of window sizing and placement.
  - Fonts are terrible on Linux.
  - The sliders used in the Browser and other places are not compact; it would be nice to have sliders that look like those in professional audio applications.
  - Parenthesis balancing has come a long way from the original implementation, but it could still use some improvement. In particular, when the cursor is positioned after a close parenthesis, the corresponding open parenthesis should be highlighted.
  - Graphical object event handling in the code is inconsistent; a “design pattern” for creating and handling graphical objects to guide future implementation and rewrites would be nice.
- The EQ editor is preliminary — it should allow the user to specify the range of frequencies, the number of frequency channels, and support multiple audio channels, either locked together or independent.
- The IDE’s “piano roll” score editor is a rough start (and not enabled in the current release), and the user interface needs a lot of work, including scroll bars, labels, zoom, selection, access to attributes other than pitch and time, the ability to quantize or snap to grids, etc. A project in this area needs to make a specific interface design and implementation plan.
- Spatial positioning of sound is difficult to do in Lisp. A graphical editor might help to position sound sources in stereo or 5.1 or whatever. Especially interesting would be the ability to specify moving sound sources and to edit their paths.
- There have been several attempts to create a graphical score entry system. The idea is that you represent different Nyquist behaviors with different shapes and/or colors on a pitch vs. time graph. This is similar to a piano-roll editor except that the shapes or colors indicate different behaviors (Nyquist functions). There may be other parameters encoded in the height, color, texture, or shape of each graphical object in the score. Each object can be edited using a property sheet, all the properties are passed to Nyquist as parameters to the behavior, and behaviors are organized in terms of time and duration according to their (editable) graphical layout. The main idea is to provide some graphical score editing but to avoid strong ties to traditional music notation or assumptions that music is best represented in terms of pitch and time (only). Maybe this should be a superclass of the piano roll editor, or the two should share an abstract superclass. A sketch of the score format such an editor might emit appears after this list.
- Nyquist has some nice drum samples, but the interface for creating drum patterns is very primitive. On the other hand, it’s not easy to create a good graphical interface to edit drum patterns. It is especially important to support unconventional patterns, odd meters, etc., but not obvious how to do this in a user-friendly way.
- Speaking of drum patterns, Dan Trueman demonstrated a fascinating interface for rhythmic patterns called the Cyclotron (ICMC 2008). Although his system was real-time and interactive, his original work was a non-real-time system. Given Nyquist’s quick computation, I think it would be interesting to create a Cyclotron-like interface in Java that outputs Nyquist scores.
- It would be nice to have a way to quickly examine audio files in the plot window. This would require Java to read an audio file and generate a plot, possibly allowing the user to zoom and scroll. Maybe there is a nice plot object already available. Other ideas: reading big audio files can be very slow, so a thread could construct an array of points to plot and hand it off to the GUI so that the GUI would not lock up while reading audio. Maybe audio could be plotted automatically whenever Nyquist closes an audio file (an option set in Preferences).
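For the graphical score entry idea above, the editor’s output could simply be a standard Nyquist score: a list of (time, duration, behavior) events that timed-seq can render. The behavior names and keyword parameters below are made-up placeholders for whatever the property sheets would contain:

    ;; Two graphical objects written out as Nyquist score events:
    (setf my-score
          '((0.0 1.0 (my-bell :pitch 60 :amp 0.8))
            (2.0 0.5 (my-grain-cloud :density 30 :color 0.7))))

    ;; Rendering (assumes my-bell and my-grain-cloud are defined behaviors):
    ;; (play (timed-seq my-score))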
Other
- Add support for SDIF I/O to Audio File Library
- Build a system to segment a performance into individual notes, as the beginning of a transcription system