Audio Examples - Adafruit Circuit Playground Bluefruit

The following short Python programs demonstrate audio features of the Adafruit Circuit Playground Bluefruit (CPB) board. These use only the onboard hardware and the serial console. Each can be run by copying the program into code.py on the CIRCUITPY drive offered by the board. The text can be pasted directly from this page, or each file can be downloaded from the CircuitPython sample code folder on this site.

The CPB can play small audio files stored in the internal memory. These can be either uncompressed .wav or compressed .mp3 files, but the format is restricted to monaural (single-channel) audio at a 22050 Hz sample rate. The WAV files are further restricted to a sample format of 16-bit linear PCM. Typical music files are stereo at a 44.1 kHz sampling rate and will need to be converted before they will work. Adafruit's tutorials for the board include instructions for using Audacity or iTunes to resample an audio file to the correct format, and there are many other applications and command-line tools which can perform such a conversion.
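As one concrete possibility, the conversion can be scripted. The sketch below is a hypothetical host-side helper (run on a desktop computer, not on the CPB); it assumes the free ffmpeg tool is installed and on the PATH, and the file names are placeholders.

# convert_for_cpb.py -- host-side helper sketch (assumes ffmpeg is installed).
# Converts an audio file to mono, 22050 Hz, 16-bit PCM WAV suitable for the CPB.
import subprocess

def convert_to_cpb_wav(src, dst):
    # -ac 1: mix down to one channel, -ar 22050: resample to 22050 Hz,
    # -sample_fmt s16: write 16-bit signed samples
    subprocess.run(["ffmpeg", "-i", src, "-ac", "1", "-ar", "22050",
                    "-sample_fmt", "s16", dst], check=True)

convert_to_cpb_wav("song.mp3", "Song.wav")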

Rooster: Playing Files

There are three files which need to be installed on the CIRCUITPY drive of the Bluefruit: the program below (saved as code.py), Rooster.wav, and Rooster.mp3.

File formats:

  • Rooster.wav: mono, 22050 Hz sample rate, 16-bit little-endian linear PCM, 102166 bytes

  • Rooster.mp3: mono, 22050 Hz sample rate, 32 kbps MP3, 9509 bytes

Note that the .mp3 file is less than 10% of the size of the .wav file, but given the limitations of the small onboard speaker, any difference in sound quality is imperceptible.

# cpb_rooster.py

# Demonstrate playing audio files using the built-in speaker.
# Button A plays the .mp3 file, button B plays the .wav file.

# This demo is specific to the Adafruit Circuit Playground Bluefruit board.
# This program uses only onboard hardware: pushbuttons, speaker.

# Note: this script assumes that Rooster.wav and Rooster.mp3 have been copied to
# the top level of the CIRCUITPY filesystem.

# Import the board-specific input/output library.
from adafruit_circuitplayground import cp

while True:
    if cp.button_a:
        cp.play_mp3("Rooster.mp3")

    if cp.button_b:
        cp.play_file("Rooster.wav")
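Both cp.play_mp3() and cp.play_file() block until the clip has finished playing, so holding a button down simply replays the sound as soon as it ends. If each press should trigger the clip only once, a minimal variation (a sketch, not part of the original program) can wait for the button to be released before checking again:

# Variation (sketch): play each clip once per button press by waiting for release.
from adafruit_circuitplayground import cp

while True:
    if cp.button_a:
        cp.play_mp3("Rooster.mp3")   # returns after the clip finishes
        while cp.button_a:           # wait for button A to be released
            pass

    if cp.button_b:
        cp.play_file("Rooster.wav")
        while cp.button_b:           # wait for button B to be released
            pass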

Arpeggios: Playing Tones

Direct download: cpb_arpeggios.py.

# cpb_arpeggios.py

# 1. Demonstrate algorithmic audio composition using the built-in speaker.
# 2. Demonstrate use of a Python class to encapsulate functionality.

# This demo is specific to the Adafruit Circuit Playground Bluefruit board.
# This program uses only onboard hardware: speaker, pushbuttons, slide switch.

#================================================================
# Import the standard Python time and math modules.
import time, math

# Import the board-specific input/output library.
from adafruit_circuitplayground import cp

#================================================================
# Define a class to represent the algorithmic composition task.
class Arpeggiator(object):

    # Define a dictionary of arpeggio patterns as a class attribute.
    patterns = { 'maj': (0, 4, 7, 12),
                 'min': (0, 3, 7, 12),
                 'maj7': (0, 4, 7, 11),
                 'min7': (0, 3, 7, 11),
                 'dim7': (0, 3, 6, 10),
                 }

    # Initialize an instance of the class.
    def __init__(self):
        # Current compositional state.
        self.root_note = 60     # middle-C as a MIDI note
        self.tonality = 'maj'   # key for the patterns dictionary
        self.tempo = 60         # beats per minute

        # Internal state variables.
        self._index = 0                        # arpeggio index of next note to play
        self._direction = 1                    # index step direction
        self._next_time = time.monotonic_ns()  # clock time in nsec for next note update

        return

    # Update function to be called frequently to recompute outputs.
    def poll(self):
        now = time.monotonic_ns()
        if now >= self._next_time:
            self._next_time += 60000000000 // int(self.tempo)   # add nanoseconds per beat

            # look up the current arpeggio pattern
            pattern = self.patterns[self.tonality]

            # Select the next note to play and advance the position.
            if self._index <= 0:
                # choose the root note of the arpeggio and advance up one step
                note = self.root_note
                self._index = 1
                self._direction = 1

            elif self._index >= len(pattern)-1:
                # choose the top note of the arpeggio and advance down one step
                note = self.root_note + pattern[-1]
                self._index = len(pattern)-2
                self._direction = -1

            else:
                # play either a rising or falling tone within the arpeggio
                note = self.root_note + pattern[self._index]
                self._index += self._direction

            # Compute the tone to play and update the speaker output.
            freq = self.midi_to_freq(note)
            cp.stop_tone()
            cp.start_tone(freq)
            print(f"Updating at time {now}, note {note}, freq {freq}")

    # ----------------------------------------------------------------
    # Convert MIDI note value to frequency. This applies an equal temperament scale.
    def midi_to_freq(self, midi_note):
        MIDI_A0 = 21
        freq_A0 = 27.5
        return freq_A0 * math.pow(2.0, (midi_note - MIDI_A0) / 12.0)

# ----------------------------------------------------------------
# Initialize global variables for the main loop.

# Create an Arpeggiator object, an instance of the Arpeggiator class.
arpeggiator = Arpeggiator()

# ----------------------------------------------------------------
# Enter the main event loop.
while True:

    if cp.button_a and cp.button_b:
        arpeggiator.tonality = 'dim7'

    elif cp.button_a:
        arpeggiator.tonality = 'min'

    elif cp.button_b:
        arpeggiator.tonality = 'min7'

    else:
        arpeggiator.tonality = 'maj'

    if cp.switch:
        arpeggiator.tempo = 120

    else:
        arpeggiator.tempo = 480

    # Run the tone generator.
    arpeggiator.poll()
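As a quick check of the midi_to_freq() conversion, MIDI note 69 (concert A) lies 48 semitones above A0, so the formula gives 27.5 × 2^(48/12) = 27.5 × 16 = 440 Hz, and middle C (note 60) comes out near 261.63 Hz. The arithmetic can be confirmed at the serial REPL:

# Check of the equal-temperament conversion used by Arpeggiator.midi_to_freq().
import math
print(27.5 * math.pow(2.0, (69 - 21) / 12.0))   # 440.0   (concert A, A4)
print(27.5 * math.pow(2.0, (60 - 21) / 12.0))   # ~261.63 (middle C, C4)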

Bells: Generating Note Waveforms

Direct download: cpb_bells.py.

# cpb_bells.py

# 1. Demonstrate playing synthesized audio waveforms using the built-in speaker.
# 2. Demonstrate use of a Python class to encapsulate functionality.

# This demo is specific to the Adafruit Circuit Playground Bluefruit board but
# could easily be ported to other microcontrollers supporting a speaker and PWM audio output.

#================================================================
# Import standard Python modules.
import time, array, math

# This demonstration is compatible with the high-level library but does not need it;
# instead it directly uses the underlying hardware modules.
# from adafruit_circuitplayground import cp

# Import hardware I/O modules.
import board
import audiocore
import audiopwmio
import digitalio

#================================================================
try:
    # Attach to the speaker enable pin.  For compatibility with the
    # adafruit_circuitplayground library, if attaching fails, assume it is
    # already attached by the cp module.
    speaker_enable = digitalio.DigitalInOut(board.SPEAKER_ENABLE)
    speaker_enable.switch_to_output(value=False)
    print("Attached to SPEAKER_ENABLE.")
except ValueError:
    # The pin is already in use; assume the adafruit_circuitplayground library
    # has already claimed it and borrow its existing pin object.
    from adafruit_circuitplayground import cp
    speaker_enable = cp._speaker_enable
    print("Using existing SPEAKER_ENABLE.")

#================================================================
# Synthesized audio sample generator and player.  The waveform generation takes
# a noticeable amount of time so the result is cached.  The pitch of the note is
# varied by modulating the sampling rate of the playback.

class Bell(object):
    def __init__(self, length=2000, cycles=55):
        super(Bell,self).__init__()

        # save the sample table properties
        self._length = length
        self._cycles = cycles

        # create a table of audio samples as signed 16-bit integers
        start = time.monotonic_ns()
        sample_array = array.array("h", self.wave_function(length, cycles))
        end = time.monotonic_ns()
        print(f"Bell sample generation took {1e-9*(end-start)} sec")

        # convert to a RawSample object for playback
        self.playable_sample = audiocore.RawSample(sample_array)

        # create an audio player attached to the speaker
        self.audio_output = audiopwmio.PWMAudioOut(board.SPEAKER)

        return

    # Define a generator function to produce the given number of waveform cycles
    # for a waveform table of a specified sample length.
    def wave_function(self, length, cycles):
        phase_rate = cycles * 2 * math.pi / length
        for i in range(length):
            phase = i * phase_rate
            amplitude = 10000 * (1.0 - (i / length))
            partial1 = math.sin(2 * phase)
            partial2 = math.sin(3 * phase)
            partial3 = math.sin(4 * phase)
            yield min(max(int(amplitude*(partial1 + partial2 + partial3)), -32768), 32767)

    def play(self, frequency):
        # Start playing the tone at the specified frequency (Hz).
        sample_rate = int(self._length * frequency / self._cycles)
        print(f"Using sample rate {sample_rate}")
        self.playable_sample.sample_rate = sample_rate
        speaker_enable.value = True
        self.audio_output.play(self.playable_sample, loop=False)

    def stop(self):
        if self.audio_output.playing:
            self.audio_output.stop()
        speaker_enable.value = False

    def deinit(self):
        self.stop()
        self.audio_output.deinit()
        self.audio_output = None
        self.playable_sample = None

    # ----------------------------------------------------------------
    # Convert MIDI note value to frequency. This applies an equal temperament scale.
    def midi_to_freq(self, midi_note):
        MIDI_A0 = 21
        freq_A0 = 27.5
        return freq_A0 * math.pow(2.0, (midi_note - MIDI_A0) / 12.0)

    def note(self, note):
        self.play(self.midi_to_freq(note))

# ----------------------------------------------------------------
# Initialize global variables for the main loop.

# Create a Bell object, an instance of the Bell class.
synth = Bell()

# ----------------------------------------------------------------
# Enter the main event loop.
while True:
    # Play a perfect fifth interval starting on concert A: A4, E5, A4
    synth.play(440)
    time.sleep(1.0)
    synth.play(660)
    time.sleep(1.0)
    synth.play(440)
    time.sleep(1.0)

    # Play a chromatic scale up and down starting with C4 (middle-C, 261.63 Hz)
    for i in range(12):
        synth.note(60+i)
        time.sleep(0.2)
    for i in range(12):
        synth.note(72-i)
        time.sleep(0.1)
    synth.note(60)
    time.sleep(0.5)

    synth.stop()
    time.sleep(4)
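The pitch control in Bell.play() follows from the cached table: it holds 55 waveform cycles in 2000 samples, so playing it back at sample_rate = length × frequency / cycles reproduces the requested fundamental. A 440 Hz tone therefore uses 2000 × 440 / 55 = 16000 samples per second, and the single 2000-sample strike lasts 0.125 seconds before the envelope decays to silence. The same arithmetic as a standalone check:

# Check of the sample-rate calculation used by Bell.play().
length, cycles = 2000, 55
for frequency in (261.63, 440.0, 660.0):
    sample_rate = int(length * frequency / cycles)
    duration = length / sample_rate
    print(f"{frequency} Hz -> sample rate {sample_rate} Hz, strike lasts {duration:.3f} s")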