I wrote this series after attending a course on Digital Signal Processing at my university. I will try not to go into too much detail, so there are many essential aspects that I deliberately don’t cover (such as complex numbers). You should be able to follow along even if you don’t have a technical background. Afterwards, if you find it interesting, I would recommend attending a DSP course at your university.

In this last part of a series of articles on the Fourier transform, we will put theory into practice by building our own digital guitar tuner for iOS in Swift (fig. 1).

The code listings provided in this article are limited to code that is specific to this application. A download link to a zip archive of the entire project, including everything necessary to run the application, is provided at the end of this article.

## 1. Getting Started

We will use AudioKit[1] to do the heavy lifting of gathering output from the built-in microphone and analyzing the data. I can’t stress this enough: always use frameworks and libraries for the computational heavy lifting. Do not ever implement those algorithms on your own, except for educational purposes of course.

In order to add AudioKit to your iOS app, you will want to use CocoaPods[2]. You can install it from the command line by running `sudo gem install cocoapods`.

After you have created a new Xcode iOS project, run `pod init` in the directory of your project. Then, in your Podfile, uncomment `use_frameworks!` and add `pod 'AudioKit', '~> 2.2'` between the first `target '...' do` and `end`. Finally, run `pod install`. See fig. 2 for a recording.
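After these steps, a minimal Podfile might look like this (the platform version and target name are illustrative; yours will match your Xcode project):

```ruby
platform :ios, '8.0'

use_frameworks!

target 'GuitarTuner' do
  pod 'AudioKit', '~> 2.2'
end
```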

## 2. Frequency Tracker

We start by capturing input from the microphone. Fortunately, AudioKit makes it really easy to implement this part. Add `import AudioKit` to the top of your Swift file.

Note that `AKAudioAnalyzer` implements the frequency-tracking algorithm that we examined last week. Fortunately, AudioKit makes this really easy: fig. 3 is all the code we need.
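As a sketch of what fig. 3 contains, the setup roughly follows the AudioKit 2.x examples: a microphone instrument feeds an analyzer instrument through the orchestra. The property names below (`auxilliaryOutput` included, which is how the library spells it) are taken from those examples as I remember them; check the repository for the exact code.

```swift
import AudioKit

// Assumed AudioKit 2.x setup: route the microphone into the analyzer.
let microphone = AKMicrophone()
AKOrchestra.addInstrument(microphone)

let analyzer = AKAudioAnalyzer(audioSource: microphone.auxilliaryOutput)
AKOrchestra.addInstrument(analyzer)

// Start analyzing and capturing.
analyzer.play()
microphone.play()
```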

`AKAudioAnalyzer` has a property that contains the measured frequency. We will poll this property and update the UI accordingly. The number of updates per second is a matter of choice; I chose to update every 100 ms, which is 10 times per second. We initialize and schedule a repeating NSTimer using its convenience constructor in fig. 4.
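A sketch of that polling setup, written in the Swift 2 era style the article uses. The `trackedFrequency` property name is an assumption based on the AudioKit 2 examples, and `didUpdate` is a hypothetical callback; fig. 4 shows the project's actual code.

```swift
import UIKit
import AudioKit

class TunerViewController: UIViewController {
    var analyzer: AKAudioAnalyzer!
    var timer: NSTimer?

    override func viewDidLoad() {
        super.viewDidLoad()
        // Fire 10 times per second (every 100 ms).
        timer = NSTimer.scheduledTimerWithTimeInterval(0.1,
            target: self, selector: "didUpdate", userInfo: nil, repeats: true)
    }

    func didUpdate() {
        // trackedFrequency holds the analyzer's measurement (name assumed).
        let frequency = analyzer.trackedFrequency.value
        // ... update the UI with `frequency` here ...
    }
}
```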

What we want to do now is match the frequency we track using the microphone with the nearest tone and octave.

## 4. Tones and Octaves

In music, there are 12 notes in a chromatic scale. Each note is named with a letter (A–G) and optionally an accidental (sharp or flat). Although there are multiple naming conventions, this is the most commonly used English and Dutch convention. Each octave consists of those 12 notes. In fig. 5 I provide a reference table with the frequencies of each note in each octave.

Obviously, you don’t want to hardcode these frequencies into your application. For one, you may want to adapt your app later to support custom standard pitches (e.g. $A_4$ = 435 Hz). Instead, we need a formula that, given a note (C to B) and an octave (2 to 6 seems reasonable for guitars), computes the corresponding frequency. Fortunately, there is one.

Let’s look at octaves first. $A_2$, $A_3$, $A_4$, etc. are the same note, but have different frequencies. Precisely, each octave starts at twice the frequency of the previous one. Does that make sense? Absolutely; let’s look at an example. In fig. 6 we can see that each higher tone fits exactly twice inside the period of the tone an octave below it.

So if we can get to a different octave by multiplying with $2^\text{n}$, where $\text{n}$ is the number of octaves, we can reason about what we need to do to get a different note within the same octave. Remember that there are 12 notes, so moving one octave up is a multiplication with $2^{12/12} = 2$. The notes within one octave are evenly distributed on this exponential scale (as shown in fig. 7). So if we want to get from A to A♯, all we have to do is multiply with $2^{1/12}$ (i.e. $\sqrt[12]{2}$).
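To make the arithmetic concrete, here is a minimal sketch of the semitone formula in Swift (the function name is mine, not from the project):

```swift
import Foundation

/// Frequency of the pitch `n` semitones above (or below, if negative) A4.
func frequencyFromA4(semitones n: Int, standardPitch: Double = 440.0) -> Double {
    return standardPitch * pow(2.0, Double(n) / 12.0)
}

// One octave up doubles the frequency; one semitone up multiplies by 2^(1/12).
frequencyFromA4(semitones: 12)  // A5 = 880 Hz
frequencyFromA4(semitones: 1)   // A#4 ≈ 466.16 Hz
```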

In Swift, I implemented the notes using enumerations (fig. 8). The pitch is implemented using a class that has instance variables for the note, octave and computed frequency. The frequency is based on the standard pitch (A440), multiplied with $2^{\text{x}/12}$ where $\text{x} = 12\text{o} + \text{n}$, and $\text{n}$ and $\text{o}$ are the note and octave relative to $A_4$ respectively (so $\text{x}$ may be negative). In fig. 9 I show some code for implementing the Pitch class.
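A condensed sketch of what figs. 8 and 9 might contain, written in current Swift. The names follow the article; the octave boundary is placed at C, as in scientific pitch notation, which is why the A of octave 4 is the reference note.

```swift
import Foundation

enum Note: Int {
    case C = 0, CSharp, D, DSharp, E, F, FSharp, G, GSharp, A, ASharp, B
}

class Pitch {
    let note: Note
    let octave: Int

    /// Frequency relative to the standard pitch A4 = 440 Hz,
    /// computed as 440 * 2^(x/12) with x = 12 * o + n.
    var frequency: Double {
        let n = note.rawValue - Note.A.rawValue  // semitones relative to A
        let o = octave - 4                       // octaves relative to octave 4
        return 440.0 * pow(2.0, Double(12 * o + n) / 12.0)
    }

    init(note: Note, octave: Int) {
        self.note = note
        self.octave = octave
    }
}
```

For example, `Pitch(note: .E, octave: 2)` (the low E string of a guitar) comes out at about 82.41 Hz.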

Finally, to find the nearest pitch, I simply map each pitch to a tuple containing that pitch and the absolute difference between its frequency and the queried frequency. Then, all I have to do is sort by that difference and return the first item in the resulting array. I show how to do this in fig. 10.
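The lookup of fig. 10 can be sketched like this in current Swift; the minimal `Pitch` stand-in below is just for illustration, the project uses its full Pitch class and an array of all candidate pitches:

```swift
import Foundation

// Minimal stand-in for the article's Pitch class, for illustration only.
struct Pitch {
    let name: String
    let frequency: Double
}

/// Returns the pitch whose frequency is closest to the queried frequency.
func nearestPitch(to frequency: Double, among pitches: [Pitch]) -> Pitch? {
    return pitches
        .map { (pitch: $0, distance: abs($0.frequency - frequency)) }
        .sorted { $0.distance < $1.distance }
        .first?.pitch
}

let strings = [Pitch(name: "E2", frequency: 82.41),
               Pitch(name: "A2", frequency: 110.0),
               Pitch(name: "D3", frequency: 146.83)]
nearestPitch(to: 108.0, among: strings)?.name  // "A2"
```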

## 5. Rotating Knob View

Although the user interface of the guitar tuner app is really minimalistic, some thought went into creating the rotating knob view. You will notice that it consists of several separate layers, some of which rotate.

The two dash layers, the arrow layer and the text layers are grouped in another layer that is rotated; I call that layer the turn layer. Only the rectangular “stable” layer is fixed and does not rotate, so it is not placed in the turn layer.

### 5.1. Dash Layers

The first two layers, the thin and thick dash layers, can be created with a simple CAShapeLayer.

In this view, the circular path is laid out so that its total circumference is 720 units (Core Animation measures dash patterns in units along the path). For the thin dash layer we want 120 dashes and each one should be 0.5 units wide. Therefore, the dash pattern is $(\frac{1}{2}, \frac{720}{120} - \frac{1}{2})$; in other words: a 0.5-unit painted segment followed by a 5.5-unit unpainted segment. The dash phase is half the painted segment (0.25) so that each dash is centered.
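A sketch of the thin dash layer; the center point, radius derivation, stroke color and line width are illustrative, but the dash pattern and phase follow the numbers above.

```swift
import UIKit

let dashLayer = CAShapeLayer()
dashLayer.path = UIBezierPath(
    arcCenter: CGPoint(x: 200, y: 200),
    radius: 720 / (2 * CGFloat.pi),   // a circumference of 720 units
    startAngle: 0,
    endAngle: 2 * CGFloat.pi,
    clockwise: true).cgPath
dashLayer.fillColor = nil
dashLayer.strokeColor = UIColor.black.cgColor
dashLayer.lineWidth = 12

// 120 dashes: 0.5 units painted, 720/120 - 0.5 = 5.5 units unpainted.
dashLayer.lineDashPattern = [0.5, 5.5]
// Shift by half a painted segment so each dash is centered.
dashLayer.lineDashPhase = 0.25
```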

### 5.2. Arrow Layer

The arrow layer is also a simple CAShapeLayer with its path set to a triangle.

In fig. 15 I have adapted some code from the repository that initializes the arrow layer.
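In essence, this boils down to something like the following sketch (the coordinates and color are illustrative; fig. 15 has the adapted code from the repository):

```swift
import UIKit

let arrowLayer = CAShapeLayer()

// A small downward-pointing triangle built from three points.
let triangle = UIBezierPath()
triangle.move(to: CGPoint(x: 0, y: 0))       // top left
triangle.addLine(to: CGPoint(x: 12, y: 0))   // top right
triangle.addLine(to: CGPoint(x: 6, y: 10))   // tip
triangle.close()

arrowLayer.path = triangle.cgPath
arrowLayer.fillColor = UIColor.black.cgColor
```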

### 5.3. Stable Layer

The rectangular stable layer is actually the easiest of all. In fig. 16 I show how to initialize it.
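It amounts to little more than a plain CALayer with a frame and a background color; for example (size and color illustrative):

```swift
import UIKit

let stableLayer = CALayer()
stableLayer.frame = CGRect(x: 0, y: 0, width: 2, height: 16)  // illustrative size
stableLayer.backgroundColor = UIColor.red.cgColor             // illustrative color
```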

### 5.4. Pitch Text Layers

Lastly, we still have to add text layers around the knob for each pitch that is near the tracked frequency.

Then, every time the tracked frequency changes, we update the pitch labels to present the nearest pitches. In Pitch.swift I overloaded the `+` and `-` operators, so if we want the next or previous pitch we can simply write `pitch + 1` (see fig. 18). Note that the offset ranges from -2 to 2, where 0 is the big label at the top.
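The operator overload of fig. 18 can be sketched on a simplified pitch type; the project's version works on its full Pitch class, but the shape is the same:

```swift
// Simplified stand-in: a pitch as a number of semitones above A4.
struct Pitch: Equatable {
    var semitonesFromA4: Int
}

// pitch + 1 is the next pitch (one semitone up), pitch - 1 the previous one.
func + (pitch: Pitch, offset: Int) -> Pitch {
    return Pitch(semitonesFromA4: pitch.semitonesFromA4 + offset)
}

func - (pitch: Pitch, offset: Int) -> Pitch {
    return pitch + (-offset)
}

let a4 = Pitch(semitonesFromA4: 0)
(a4 + 1).semitonesFromA4  // 1, i.e. A#4
(a4 - 2).semitonesFromA4  // -2, i.e. G4
```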

### 5.5. Rotating the Knob

Every time the tracked pitch updates, we compute the distance between the nearest pitch and the tracked frequency in Tuner.swift. We then divide that distance by the total difference between the nearest pitch and the second nearest pitch in order to express it as a fraction. Additionally, we multiply that fraction by two to obtain the knob angle. In fig. 19 I show how to do these computations.
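As a self-contained sketch of that computation (the function name is mine; fig. 19 shows the project's version):

```swift
/// Knob angle for a tracked frequency, given the frequencies of the
/// nearest and second-nearest pitches.
func knobAngle(frequency: Double, nearest: Double, secondNearest: Double) -> Double {
    let distance = frequency - nearest           // signed distance to the nearest pitch
    let gap = abs(secondNearest - nearest)       // total difference between the two pitches
    return 2 * (distance / gap)                  // fraction of the gap, times two
}

// 445 Hz is slightly sharp of A4 (440 Hz, with A#4 at 466.16 Hz):
let angle = knobAngle(frequency: 445.0, nearest: 440.0, secondNearest: 466.16)
// ≈ 0.38
```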

In fig. 20 I show how to use an affine transform to rotate the group layer and the text layers.
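The core of it is a rotation transform on the layers. A sketch, assuming (as the screenshots suggest) that each text layer is counter-rotated so the labels stay upright while they travel around the knob:

```swift
import UIKit

let turnLayer = CALayer()
let textLayer = CATextLayer()
turnLayer.addSublayer(textLayer)

let angle: CGFloat = 0.25  // illustrative knob angle, in radians

// Rotate the whole turn layer; counter-rotate the text layer
// so its label stays upright.
turnLayer.setAffineTransform(CGAffineTransform(rotationAngle: angle))
textLayer.setAffineTransform(CGAffineTransform(rotationAngle: -angle))
```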

## 6. Wave Display View

The wave display view consists of a plot view with 5 graphs and a gradient mask that fades out horizontally at both ends.

Each plot has a different multiplier, ranging from 1.0 to -1.0, and a different opacity, ranging from 1.0 to 0.2. This creates a great looking effect. In fig. 21 I show how to do that.
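That loop could look roughly like this; `WavePlot` is a hypothetical name for the AKAudioPlot subclass described in section 6.1, and the initializer taking a multiplier is an assumption:

```swift
import UIKit

// Five plots with multipliers from 1.0 down to -1.0 and
// opacities from 1.0 down to 0.2.
for i in 0..<5 {
    let multiplier = 1.0 - Double(i) * 0.5   // 1.0, 0.5, 0.0, -0.5, -1.0
    let opacity = 1.0 - Double(i) * 0.2      // 1.0, 0.8, 0.6, 0.4, 0.2
    let plot = WavePlot(multiplier: multiplier)  // hypothetical subclass of AKAudioPlot
    plot.alpha = CGFloat(opacity)
    addSubview(plot)
}
```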

### 6.1. Plot

We subclass `AKAudioPlot` to implement a custom plot. In `bufferWithCsound` we generate a new sample buffer based on a sine of the tracked frequency and amplitude.

First we set up the float buffer with a C call from Swift (fig. 23). Note that 4 is the size of a float in C (at least it is safe to assume that it is on the architectures we are targeting).

Then we fill it with `num` floats by using a for loop and sampling a sine at each iteration (fig. 24).

Now we multiply the result with a sine of 0.5 Hz to fade out the horizontal ends, and we multiply it with a power function to make the plot look more dramatic on the right side and less symmetric (fig. 25).

Finally, at the end of each iteration we fade smoothly to the next frequency and amplitude that we track using the microphone (fig. 26).
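Putting the steps of figs. 23–26 together, here is a self-contained sketch of the sample generation (current Swift, without AudioKit; the tracked values, smoothing factor and power exponent are illustrative, and the real version lives in the plot's `bufferWithCsound`):

```swift
import Foundation

let num = 512                  // number of samples in the buffer
var frequency = 440.0          // smoothed values, nudged toward the
var amplitude = 0.5            // tracked frequency/amplitude each sample
let trackedFrequency = 330.0   // illustrative microphone measurements
let trackedAmplitude = 0.8

// Allocate the float buffer (in C this was malloc(num * 4); 4 == sizeof(float)).
let samples = UnsafeMutablePointer<Float>.allocate(capacity: num)
defer { samples.deallocate() }

for i in 0..<num {
    let t = Double(i) / Double(num)  // position in the buffer, 0..<1

    // Sample a sine of the (smoothed) tracked frequency and amplitude.
    var value = amplitude * sin(t * frequency * 2 * .pi)

    // Half a period of a 0.5 Hz sine: zero at both ends, one in the middle.
    value *= sin(.pi * t)
    // A power function to emphasize the right side (exponent assumed).
    value *= pow(t, 1.5)

    samples[i] = Float(value)

    // Fade smoothly toward the next tracked frequency and amplitude.
    frequency += (trackedFrequency - frequency) * 0.01
    amplitude += (trackedAmplitude - amplitude) * 0.01
}
```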