Audio Examples - Adafruit Circuit Playground Bluefruit

The following short Python programs demonstrate audio features of the Adafruit Circuit Playground Bluefruit (CPB) board. These use only the onboard hardware and the serial console. Each can be run by copying the program into code.py on the CIRCUITPY drive offered by the board. The text can be pasted directly from this page, or each file can be downloaded from the CircuitPython sample code folder on this site.

The CPB can play small audio files stored in the internal memory. These can be either uncompressed .wav or compressed .mp3 files, but the format is restricted to monaural (single-channel) audio at a 22050 Hz sample rate. The WAV files are further restricted to a sample format of 16-bit linear PCM. Typical music files are stereo at a 44.1 kHz sample rate and will need to be processed before they will work. The Adafruit tutorial listed below has instructions for using Audacity or iTunes to resample an audio file to the correct format. There are also many other applications and command-line tools which can perform such a conversion.
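Whether a converted file actually meets these constraints can be checked on a desktop machine before copying it to the board. The following sketch uses only Python's standard wave module; the filename argument is a placeholder for your own file:

```python
import wave

def check_cpb_wav(path):
    """Report whether a WAV file is mono, 22050 Hz, 16-bit PCM as the CPB requires."""
    with wave.open(path, "rb") as w:
        ok = (w.getnchannels() == 1 and
              w.getframerate() == 22050 and
              w.getsampwidth() == 2)   # 2 bytes per sample = 16-bit
        print(f"{path}: channels={w.getnchannels()}, "
              f"rate={w.getframerate()} Hz, bits={8 * w.getsampwidth()}, "
              f"{'OK' if ok else 'needs conversion'}")
        return ok
```

Note this only inspects WAV headers; MP3 files would need a different tool.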

Rooster: Playing Files

There are three files which need to be installed in the CIRCUITPY drive of the Bluefruit: the program below (saved as code.py) and the two audio files Rooster.wav and Rooster.mp3.

File formats:

  • Rooster.wav: mono, 22050 Hz sample rate, 16-bit little-endian linear PCM, 102166 bytes

  • Rooster.mp3: mono, 22050 Hz sample rate, 32 kbps MP3, 9509 bytes

Note that the .mp3 file is less than 10% the size of the .wav file, but given the limitations of the small onboard speaker any differences are imperceptible.
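The .wav size follows directly from the format: 2 bytes per sample at 22050 samples per second, plus a 44-byte RIFF header. Working backwards from the file size gives the clip duration; a quick check in Python:

```python
# Uncompressed size of a mono 16-bit 22050 Hz WAV clip:
#   bytes = 44-byte header + (2 bytes/sample * 22050 samples/sec * seconds)
header = 44
bytes_per_second = 2 * 22050

duration = (102166 - header) / bytes_per_second   # Rooster.wav, from the list above
print(f"Rooster.wav duration: {duration:.2f} seconds")   # about 2.3 seconds
```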

# cpb_rooster.py

# Demonstrate playing audio files using the built-in speaker.
# Button A plays the .mp3 file, button B plays the .wav file.

# This demo is specific to the Adafruit Circuit Playground Bluefruit board.
# This program uses only onboard hardware: pushbuttons, speaker.

# Note: this script assumes that Rooster.wav and Rooster.mp3 have been copied to
# the top level of the CIRCUITPY filesystem.

# Import the board-specific input/output library.
from adafruit_circuitplayground import cp

while True:
    if cp.button_a:
        cp.play_mp3("Rooster.mp3")
        
    if cp.button_b:
        cp.play_file("Rooster.wav")

Arpeggios: Playing Tones

Direct download: cpb_arpeggios.py.

# cpb_arpeggios.py

# 1. Demonstrate algorithmic audio composition using the built-in speaker.
# 2. Demonstrate use of a Python class to encapsulate functionality.

# This demo is specific to the Adafruit Circuit Playground Bluefruit board.
# This program uses only onboard hardware: speaker, pushbuttons, slide switch.

#================================================================
# Import the standard Python time functions.
import time, math

# Import the board-specific input/output library.
from adafruit_circuitplayground import cp

#================================================================
# Define a class to represent the algorithmic composition task.
class Arpeggiator(object):
    
    # Define a dictionary of arpeggio patterns as a class attribute.
    patterns = { 'maj': (0, 4, 7, 12),
                 'min': (0, 3, 7, 12),
                 'maj7': (0, 4, 7, 11),
                 'min7': (0, 3, 7, 11),
                 'dim7': (0, 3, 6, 10),
                 }

    # Initialize an instance of the class.
    def __init__(self):
        # Current compositional state.
        self.root_note = 60     # middle-C as a MIDI note
        self.tonality = 'maj'   # key for the patterns dictionary
        self.tempo = 60         # beats per minute

        # Internal state variables.
        self._index = 0                        # arpeggio index of next note to play
        self._direction = 1                    # index step direction
        self._next_time = time.monotonic_ns()  # clock time in nsec for next note update

        return

    # Update function to be called frequently to recompute outputs.
    def poll(self):
        now = time.monotonic_ns()
        if now >= self._next_time:
            self._next_time += 60000000000 // int(self.tempo)   # add nanoseconds per beat

            # look up the current arpeggio pattern
            pattern = self.patterns[self.tonality]

            # Select the next note to play and advance the position.
            if self._index <= 0:
                # choose the root note of the arpeggio and advance up one step
                note = self.root_note                
                self._index = 1
                self._direction = 1

            elif self._index >= len(pattern)-1:
                # choose the top note of the arpeggio and advance down one step
                note = self.root_note + pattern[-1]
                self._index = len(pattern)-2
                self._direction = -1
                
            else:
                # play either a rising or falling tone within the arpeggio
                note = self.root_note + pattern[self._index]
                self._index += self._direction

            # Compute the tone to play and update the speaker output.
            freq = self.midi_to_freq(note)
            cp.stop_tone()
            cp.start_tone(freq)
            print(f"Updating at time {now}, note {note}, freq {freq}")

    # ----------------------------------------------------------------
    # Convert MIDI note value to frequency. This applies an equal temperament scale.
    def midi_to_freq(self, midi_note):
        MIDI_A0 = 21
        freq_A0 = 27.5
        return freq_A0 * math.pow(2.0, (midi_note - MIDI_A0) / 12.0)

# ----------------------------------------------------------------
# Initialize global variables for the main loop.

# Create an Arpeggiator object, an instance of the Arpeggiator class.
arpeggiator = Arpeggiator()

# ----------------------------------------------------------------
# Enter the main event loop.
while True:

    if cp.button_a and cp.button_b:
        arpeggiator.tonality = 'dim7'

    elif cp.button_a:
        arpeggiator.tonality = 'min'

    elif cp.button_b:
        arpeggiator.tonality = 'min7'

    else:
        arpeggiator.tonality = 'maj'

    if cp.switch:
        arpeggiator.tempo = 120
        
    else:
        arpeggiator.tempo = 480

    # Run the tone generator.
    arpeggiator.poll()
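The up-and-down index walk in poll() is easy to get wrong at the endpoints, and it can be exercised off the board. The following sketch isolates the same traversal logic as a plain function (no hardware or timing required; arpeggio_walk is a name introduced here for illustration):

```python
def arpeggio_walk(root, pattern, steps):
    """Yield the note sequence produced by the up-down traversal in Arpeggiator.poll()."""
    index, direction, notes = 0, 1, []
    for _ in range(steps):
        if index <= 0:
            notes.append(root)                    # bottom of the pattern
            index, direction = 1, 1
        elif index >= len(pattern) - 1:
            notes.append(root + pattern[-1])      # top of the pattern
            index, direction = len(pattern) - 2, -1
        else:
            notes.append(root + pattern[index])   # interior note, rising or falling
            index += direction
    return notes

# Eight beats of a C major arpeggio rooted at middle-C (MIDI 60):
print(arpeggio_walk(60, (0, 4, 7, 12), 8))   # [60, 64, 67, 72, 67, 64, 60, 64]
```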

Bells: Generating Note Waveforms

Direct download: cpb_bells.py.

# cpb_bells.py

# 1. Demonstrate playing synthesized audio waveforms using the built-in speaker.
# 2. Demonstrate use of a Python class to encapsulate functionality.

# This demo is specific to the Adafruit Circuit Playground Bluefruit board but
# could be easily ported to microcontrollers supporting a speaker and audio PWM I/O.

#================================================================
# Import standard Python modules.
import time, array, math

# This demonstration is compatible with the high-level library but does not
# need it; instead it uses the underlying hardware modules directly.
# from adafruit_circuitplayground import cp

# Import hardware I/O modules.
import board
import audiocore
import audiopwmio
import digitalio

#================================================================
try:
    # Attach to the speaker enable pin.  For compatibility with the
    # adafruit_circuitplayground library, if attaching fails, assume it is
    # already attached by the cp module.
    speaker_enable = digitalio.DigitalInOut(board.SPEAKER_ENABLE)
    speaker_enable.switch_to_output(value=False)
    print("Attached to SPEAKER_ENABLE.")
except ValueError:
    # The pin is already claimed, presumably by the adafruit_circuitplayground
    # library; import it here and reuse its enable pin object.
    from adafruit_circuitplayground import cp
    speaker_enable = cp._speaker_enable
    print("Using existing SPEAKER_ENABLE.")

#================================================================
# Synthesized audio sample generator and player.  The waveform generation takes
# a noticeable amount of time so the result is cached.  The pitch of the note is
# varied by modulating the sampling rate of the playback.

class Bell(object):
    def __init__(self, length=2000, cycles=55):
        super(Bell,self).__init__()

        # save the sample table properties
        self._length = length
        self._cycles = cycles

        # create a table of audio samples as signed 16-bit integers
        start = time.monotonic_ns()
        sample_array = array.array("h", self.wave_function(length, cycles))
        end = time.monotonic_ns()
        print(f"Bell sample generation took {1e-9*(end-start)} sec")
        
        # convert to a RawSample object for playback
        self.playable_sample = audiocore.RawSample(sample_array)

        # create an audio player attached to the speaker
        self.audio_output = audiopwmio.PWMAudioOut(board.SPEAKER)

        return

    # Define a generator function to produce the given number of waveform cycles
    # for a waveform table of a specified sample length.
    def wave_function(self, length, cycles):
        phase_rate = cycles * 2 * math.pi / length
        for i in range(length):
            phase = i * phase_rate
            amplitude = 10000 * (1.0 - (i / length))
            partial1 = math.sin(2 * phase)
            partial2 = math.sin(3 * phase)
            partial3 = math.sin(4 * phase)
            yield min(max(int(amplitude*(partial1 + partial2 + partial3)), -32768), 32767)

    def play(self, frequency):
        # Start playing the tone at the specified frequency (hz).
        sample_rate = int(self._length * frequency / self._cycles)
        print(f"Using sample rate {sample_rate}")
        self.playable_sample.sample_rate = sample_rate
        speaker_enable.value = True
        self.audio_output.play(self.playable_sample, loop=False)

    def stop(self):
        if self.audio_output.playing:
            self.audio_output.stop()
        speaker_enable.value = False

    def deinit(self):
        self.stop()
        self.audio_output.deinit()
        self.audio_output = None
        self.playable_sample = None

    # ----------------------------------------------------------------
    # Convert MIDI note value to frequency. This applies an equal temperament scale.
    def midi_to_freq(self, midi_note):
        MIDI_A0 = 21
        freq_A0 = 27.5
        return freq_A0 * math.pow(2.0, (midi_note - MIDI_A0) / 12.0)

    def note(self, note):
        self.play(self.midi_to_freq(note))

# ----------------------------------------------------------------
# Initialize global variables for the main loop.

# Create a Bell object, an instance of the Bell class.
synth = Bell()

# ----------------------------------------------------------------
# Enter the main event loop.
while True:
    # Play a perfect fifth interval starting on concert A: A4, E5, A4
    synth.play(440)
    time.sleep(1.0)
    synth.play(660)
    time.sleep(1.0)
    synth.play(440)
    time.sleep(1.0)

    # Play a chromatic scale up and down starting with C4 (middle-C, 261.63 Hz)
    for i in range(12):
        synth.note(60+i)
        time.sleep(0.2)
    for i in range(12):
        synth.note(72-i)
        time.sleep(0.1)
    synth.note(60)
    time.sleep(0.5)

    synth.stop()
    time.sleep(4)
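The pitch arithmetic in the Bell class can also be verified without the board: midi_to_freq maps MIDI note 69 (A4, four octaves above A0) to exactly 440 Hz, and with the default table (length 2000, cycles 55) the playback sample rate for 440 Hz works out to 16000 Hz:

```python
import math

def midi_to_freq(midi_note):
    # Equal temperament, anchored at A0 (MIDI note 21) = 27.5 Hz.
    return 27.5 * math.pow(2.0, (midi_note - 21) / 12.0)

# A4 is four octaves above A0: 27.5 * 2**4 = 440 Hz.
print(midi_to_freq(69))              # 440.0

# Playback sample rate that pitches the cached 55-cycle, 2000-sample table
# to a given frequency, as computed in Bell.play().
length, cycles = 2000, 55
print(int(length * 440 / cycles))    # 16000
```

Sample rates far outside the hardware's supported range would distort or fail, so very low or high notes may need a different table length.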