Time Delay based on Fundamental Frequency

For this assignment, I created a system that takes the fundamental frequency of the audio signal and, after scaling it, uses it to set a time delay. The same delay also controls how often the fundamental frequency is resampled to set the next delay.
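The self-clocking loop can be sketched offline in a few lines of Python. This is only an illustration of the control logic, not the actual Max patch: the `f0_to_delay_ms` mapping and its scaling constant are assumptions, since the post does not specify how the frequency is scaled.

```python
def f0_to_delay_ms(f0_hz, scale=1000.0):
    """Map a fundamental frequency (Hz) to a delay time (ms).
    The inverse mapping and the scale constant are assumptions;
    the patch's actual scaling is not specified in the post."""
    return scale / f0_hz  # higher pitch -> shorter delay

def schedule(f0_curve, duration_ms):
    """Walk the control loop: sample f0 at time t, convert it to
    a delay, then wait that long before sampling f0 again."""
    t = 0.0
    events = []  # (time sampled, delay applied)
    while t < duration_ms:
        delay = f0_to_delay_ms(f0_curve(t))
        events.append((t, delay))
        t += delay
    return events

# Toy f0 curve: a jump from 220 Hz to 880 Hz halfway through,
# standing in for a passage with large variation in pitch.
curve = lambda t: 220.0 if t < 50.0 else 880.0
events = schedule(curve, 100.0)
```

After the pitch jump, the delays shrink and the sampling becomes denser, which is where the stuttering character comes from: rapid pitch changes get chopped into many short delayed fragments.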

The result is a new signal that stutters, especially in passages with large variation in fundamental frequency. The stuttering is compounded in this particular recording (Sidney Bechet’s “Si Tu Vois Ma Mère”, from Midnight in Paris) by its call-and-response structure, especially around 1:38 of the SoundCloud link.

https://soundcloud.com/bobbie-chen/bechet-stutter-1

From the beginning to 3:13, the left channel carries the original recording and the right channel carries the output of the system; from 3:14 to the end, only the output plays. I tried to include the original recording as a reference, but SoundCloud removed it for copyright reasons. Here is a YouTube link.

The Max patch is below.