Goal
The goal of this research and development project is to create a simple monophonic synthesizer system in Unity using the basic principles of audio synthesis. I have decided to name this system M.A.G.I (Mathematical Audio Generation Interface). In this blog, I will walk you through the development journey of the M.A.G.I Synth, sharing the techniques and theories I apply along the way. All my sources will be linked at the end of the blog post for extra information.
The reason I want to research this is multilayered. On the one hand, I have never done anything with audio programming before, so I saw this as an interesting opportunity to test my programming capabilities. On the other hand, I also have a passion for music. Building this type of system will allow me to combine my creative and technical side into a product I will perhaps even be able to use myself.
Challenges
Several questions need to be answered during the development of this synthesizer system. Each of these questions will represent a challenge that will need to be tackled for the system to be functional by the end of the development stage. These challenges are as follows:
- How can audio be generated in Unity using the basics of audio synthesis?
- How can the performance of the system be measured, maintained and improved upon?
- How can the principles of audio synthesis be integrated into a simple, user-friendly design?
- How can all the elements of the synth system be brought together in Unity to form a working product?
Audio Synthesis Quick Guide
Before diving into the nitty-gritty of the development process, I thought it was important to provide an audio synthesis quick guide for my readers. This section contains basic explanations of audio synthesis theory, as well as an expansion on important terminology that will be used throughout the blog post. I strongly advise revisiting this chapter should any uncertainties arise, as it is designed to provide the answers and clarifications you may need.
Definitions
- “Audio Synthesis” : Electronic production of sound where no acoustic source is used.
- “Synthesizer” : Electronic instrument that generates audio signals.
- “Monophonic”: The ability to produce only one note at a time.
- “Audio Artefact”: Any undesired sound that is generated when playing audio.
The Basic Synth Structure
As you can see in the image below, your typical synth is made up of five main parts: the oscillators, the filters, the amplifier, the global control, and the modulators. In my case, only three of these elements are relevant. Those would be the oscillators, the filters, and the modulators.

Oscillators

Synthesizers are unique in the sense that the sounds they create are generated in a completely artificial way. This generation starts with a simple tone generator, otherwise known as the “noise-maker”: the oscillator. There are a few types of oscillators, but they all behave in the same way. An oscillator generates sound by repeatedly playing back a mathematical waveform. Using these waveforms, each oscillator produces its own sound that forms the basis of the synthesizer (Lazaga, 2020). The main oscillator types are:
- Sine – the sine oscillator has a smooth and warm sounding tone.
- Square – the square oscillator is hollow sounding, with a crunchy edge to it.
- Triangle – the triangle oscillator is a slightly more aggressive version of the sine.
- Sawtooth – a rich sounding sharp tone with a lot of bite.
- Pulse – an irregular variation of the square wave.
Filters
Filters can be described as functions that take input samples, modify them in some way, and then generate output samples. There are many different types of filters. The most common include highpass filters, lowpass filters, chorus filters, and reverb filters (Mitchell, 2008). More information on the specific filters I need to apply will be discussed later.
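To make the idea of “input samples in, modified samples out” concrete, here is a minimal, illustrative sketch of a one-pole lowpass filter. This is textbook theory rather than code from the M.A.G.I project, and the class name is my own.
public class OnePoleLowpass
{
    private float _previousOutput;
    private readonly float _alpha; // between 0 and 1; lower values filter more aggressively

    public OnePoleLowpass(float alpha) => _alpha = alpha;

    public float Process(float inputSample)
    {
        // Each output moves a fraction of the way towards the new input, smoothing out
        // rapid changes, which is what removing high frequencies amounts to.
        _previousOutput += _alpha * (inputSample - _previousOutput);
        return _previousOutput;
    }
}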
Modulators
Modulators are a bit more complex than the other elements described previously. Modulators affect the way other modules attached to the synth will operate. There are again many different types of modulators, but the two main ones are known as envelopes and low frequency oscillators (Lazaga, 2020).

Envelopes determine the length and quality of the oscillators’ sound. To put it into layman’s terms: an envelope determines how the sound will change in terms of volume over time. An envelope is split into four main stages. The first stage is the attack stage, which determines how quickly the sound rises to its maximum volume. The next stage is the decay stage, which determines how quickly the volume drops from that peak down to the sustain level. The sustain stage then determines the volume at which the sound is held for as long as the note is played. And finally, the release stage determines the time it takes for the sound to fade away once the note is let go (Mitchell, 2008).
Low-Frequency Oscillators, otherwise known as LFOs, are oscillators that sit at such a low frequency that they cannot be heard. Though they cannot be heard, they can change (or modulate) other aspects of the sound that is being generated. They do this by changing the base oscillator (Lazaga, 2020).
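As a small illustration of the idea (not code from the project; the method names are made up), an LFO can be thought of as an inaudibly slow sine wave whose value is used to nudge another parameter, such as the amplitude of the base oscillator:
private float GetLfoValue(float lfoFrequency, float time)
{
    // A sine wave running at a few hertz, well below the audible range (~20 Hz and up)
    return Mathf.Sin(2f * Mathf.PI * lfoFrequency * time);
}

private float ModulateAmplitude(float baseAmplitude, float lfoValue, float depth)
{
    // Scale the amplitude up and down around its base value (a tremolo-style effect)
    return baseAmplitude * (1f + depth * lfoValue);
}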
Iteration 1: The Foundation

Main Challenge(s):
How can audio be generated in Unity using the basics of audio synthesis?
Goals
My main goals for this iteration were as follows:
- Have the first iteration of the oscillation functions complete.
- Have a system in place to be able to switch between oscillation functions.
- Make it possible to change the volume (amplitude) of the oscillation functions.
- Have a system for the calculation of frequencies so that musical notes can be played.
- Make it possible to switch between musical notes using keys on a computer keyboard.
Oscillation Functions
I started the development process by designing and implementing a system that would allow me to create synth modules (oscillators) and apply them to a synth game object. To do this, I created a base scriptable object called SynthModule that contained all the necessary ingredients a synth module would need. These ingredients are based on the research that was done for the Audio Synthesis Quick Guide section. The code for that script can be found in the code block below.
public abstract class SynthModule : ScriptableObject
{
    protected int SampleRate;

    private void Awake()
    {
        SampleRate = AudioSettings.outputSampleRate;
    }

    public abstract SampleState GenerateSample(float frequency, float amplitude, float initialPhase);
}
An example of one of the inheriting modules is this implementation of a square oscillator:
public class Square : SynthModule
{
    [SerializeField, Range(0f, 0.8f)] private float volumeModifier;

    public override SampleState GenerateSample(float frequency, float amplitude, float initialPhase)
    {
        var phaseIncrement = frequency / SampleRate;
        var value = Mathf.Sign(Mathf.Sin(initialPhase * 2.0f * Mathf.PI));
        var volume = amplitude * volumeModifier;
        value *= volume;
        var updatedPhase = (initialPhase + phaseIncrement) % 1;
        return new SampleState(value, updatedPhase);
    }
}
I completed the same steps to create scripts for the sine oscillator, sawtooth oscillator, pulse oscillator, and triangle oscillator.
Switching oscillation functions
The next step was ensuring that the user could select which type of base oscillator they wanted to use, and switch between them at runtime. I tackled this by creating a simple button UI, as can be seen below. Each button stores a certain base oscillator and is linked to a custom Unity event that adds the stored oscillator to the synth game object when the button is pressed. This allows for switching between the oscillators at runtime.
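As a rough sketch of that idea (the class and event names here are assumptions rather than the exact M.A.G.I scripts), such a button could look something like this:
using UnityEngine;
using UnityEngine.Events;
using UnityEngine.UI;

[System.Serializable]
public class SynthModuleUnityEvent : UnityEvent<SynthModule> {}

[RequireComponent(typeof(Button))]
public class ActivateModuleButton : MonoBehaviour
{
    [SerializeField] private SynthModule storedModule;
    [SerializeField] private SynthModuleUnityEvent moduleSelected;

    private void Awake()
    {
        // When the button is pressed, hand the stored oscillator to whatever is listening (the synth)
        GetComponent<Button>().onClick.AddListener(() => moduleSelected.Invoke(storedModule));
    }
}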

Below, you can see how the oscillator is connected to a module in the inspector.


Playing Notes
The next steps I undertook had to do with adding the ability to play notes. For this step, I had to dive into musical theory. To achieve this, I first familiarized myself with the layout of a standard piano, which consists of 88 keys arranged in a repeating pattern of white and black keys. Each key on the piano corresponds to a specific note. These notes are separated into octaves, with each octave containing 12 notes.

The interesting thing I realised whilst looking into this was that each note can be represented as a certain frequency in hertz. The frequency of each note can even be calculated mathematically using the following formula: f_n = f_0 * a^n. Here f_0 is the frequency of one fixed note (the base frequency). A common choice for this note is the A above middle C (A4), typically tuned to 440 Hz. n is the number of half steps the target note sits away from that fixed note (positive above it, negative below it). Finally, a is the twelfth root of 2 (Suits, 2023). As a quick sanity check: middle C sits nine half steps below A4, so its frequency is 440 * 2^(-9/12) ≈ 261.63 Hz. By applying this formula programmatically, I could calculate the correct frequency for each key and map this accordingly. This step required quite a few things to be implemented in my code base.
private float CalculateFrequency(int keyNumber)
{
    var n = keyNumber - baseKeyNumber;
    // here is that formula: f_n = f_0 * a^n
    return baseFrequency * Mathf.Pow(TwelfthRootOfTwo, n);
}

private List<float> GetAllPianoKeyFrequencies()
{
    var frequencies = new float[numKeys];
    for (var i = 1; i <= numKeys; i++)
    {
        frequencies[i - 1] = CalculateFrequency(i);
    }
    return frequencies.ToList();
}
The first thing I did was create a scriptable object to store the frequency map. The most important behaviour in this object can be located in the code block above. In this code, the aforementioned formula for frequency calculation is applied to a list of 88 keys. Once I had a way to calculate the frequencies, I needed to determine which keys on my computer keyboard I wanted to use, and connect each key to a certain frequency.
What I decided to do was follow the industry standard for this type of tool, so that anyone who ever used my synthesizer would have a certain feeling of familiarity in terms of usability and layout. I asked a friend of mine who owns a lot of audio software to screenshot a couple of the keymaps, and the most common one we found is the one shown below. This example comes from GarageBand. This is the exact keymap I ended up using for my white and black keys.

I took this concept for a key map and used Unity’s new input system to create key bindings for my synth. I connected those key bindings to their corresponding frequencies.
I also created a state manager to monitor and update the current state of the synth (playing or not playing). The most important code from that state manager can be found below.
private int KeysPressed
{
    get => keysPressed;
    set
    {
        keysPressed = value < 0 ? 0 : value;
        isPlaying.Invoke(value > 0);
    }
}

private void MapKeys()
{
    foreach (var key in pianoKeyTable)
    {
        var action = InputActionMapsHelper.CreateInputActionMapStandard(inputActionMap,
            key.ToString().ToLower());
        action.started += _ => { KeysPressed++; };
        action.canceled += _ => { KeysPressed--; };
    }
    inputActionMap.Enable();
}
Octaves
The last important thing I worked on was allowing for the switching of octaves. I did this by shifting the index into my frequency table by the number of semitones in an octave, which is 12.
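In its simplest form, that octave switch boils down to something like the sketch below (the names are illustrative, not the exact project code):
private const int SemitonesPerOctave = 12;
private int currentKeyOffset;

public void ShiftOctave(int direction)
{
    // direction is +1 for an octave up and -1 for an octave down; the offset is later
    // added to the key index used to look up a frequency in the table
    currentKeyOffset += direction * SemitonesPerOctave;
}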
Problems with current implementation
- There is a significant delay when playing a note.
- There are some performance issues due to generating audio at runtime.
- The design needs to be focused more on the target audience.
Possible Solutions
- Buffering system with memory and performance optimization techniques.
- Better define my value proposition and target audience.
Iteration 2: Delving deeper

Main Challenge(s):
How can the performance of the system be measured, maintained and improved upon?
How can the principles of audio synthesis be integrated into a simple, user-friendly design?
Goals
My main goals for the second iteration were:
- Design and implement an audio buffering system to decrease audio latency.
- Create a first iteration of a design for the synthesizer system.
Audio Buffering
As mentioned above, the main issues surrounding my first iteration had to do with performance and audio delays. This was something that was highlighted as an improvement point for the next iteration of the product. To figure out the best way of solving this issue, I went on a quest to understand how real synths function without delay. It was during this research that I found articles and posts detailing the importance of buffering in most audio systems, and I knew I had found my solution: an audio buffering system.
The Theory of Data Buffering
A buffer is an area in the memory of a system that is used to store or hold data temporarily. To put it simply, a buffer serves as a temporary home for data that needs to be transferred from one place to another. Buffers are typically used in applications where there is a noticeable difference between the rate at which data needs to be received, and the rate at which data is processed (Buffering in Operating System – Javatpoint, n.d.). In the case of my synthesizer system, the base sounds are generated at runtime. This means that when pressing a key, the data needs to be received almost immediately. However, because the data at that moment still needs to be processed and generated, a delay occurs. By applying some kind of buffer, I could pregenerate an array of audio samples using the GenerateSample method from my SynthModule object, and store this data before a user even presses a key. This way, when a key is pressed, the pregenerated audio samples are played first while in the background a new array of samples is being generated. To simplify this: the generated samples are always one step ahead of the user.
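To make that idea concrete, here is a conceptual sketch (with assumed names, not the actual M.A.G.I code) of how a pre-generated buffer could be consumed inside Unity's audio callback while the next block is requested in the background:
private void OnAudioFilterRead(float[] data, int channels)
{
    // Hand Unity the samples that were generated ahead of time
    System.Array.Copy(_currentBuffer, data, data.Length);

    // Ask for the next block to be generated so it is ready before it is needed
    // (RequestNextBufferFill is a hypothetical helper used only for this sketch)
    RequestNextBufferFill();
}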
Audio Buffering System
To say that creating a buffering system was complex would be an understatement. There were quite a lot of steps I undertook during this process, and most of these steps can get quite technical. I will do my best to break down everything as simply as possible, only delving into the depths of my code when I really need to. To aid me in explaining my system I have created a simple diagram that can be seen below. This showcases the simplified, broad overview of the buffering system used in my synthesizer.

I would like to highlight software developer Ryan Hedgecock’s blog, where he describes the basics of audio synthesis in Unity, with a specific section on generating runtime audio using native code. Though I couldn’t use his work entirely in my use case, his blog was a point of inspiration for me when starting the audio buffering system development. I will also reference his work a few times in my explanation.
StereoData
StereoData is a struct that defines and stores stereo audio data for the left and right channel. A channel refers to the position of the audio source within the audio signal. Each channel contains information about the audio source, such as the amplitude of the audio that is currently being produced (Digital Audio Concepts – Web Media Technologies | MDN, 2023). The StereoData struct stores this channel data using two floating point values for the left and the right channel. The idea to store the stereo data in a separate data structure like a struct came from Ryan Hedgecock.
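Based on that description, and on how the struct is used later in this post, a minimal sketch of StereoData could look like this; the real struct may differ in its exact layout and members:
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
public readonly struct StereoData
{
    public readonly float LeftChannel;
    public readonly float RightChannel;

    public StereoData(float leftChannel, float rightChannel)
    {
        LeftChannel = leftChannel;
        RightChannel = rightChannel;
    }
}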
StereoDataHelper
The StereoDataHelper class is a static class that contains methods for working with arrays of StereoData objects. Its main job is to facilitate converting and copying stereo data arrays into float arrays. The most important code from this class can be found in the code block below.
public static void CopyToFloatArray(this StereoData[] stereoData, float[] outputData)
{
    if (stereoData == null)
        throw new ArgumentNullException(nameof(stereoData));
    if (outputData == null)
        throw new ArgumentNullException(nameof(outputData));
    if (outputData.Length < stereoData.Length * 2)
        throw new ArgumentException("The output data array is too small to hold the stereo data.");

    unsafe
    {
        fixed (StereoData* pStereoData = &stereoData[0])
        fixed (float* pData = &outputData[0])
        {
            Buffer.MemoryCopy(
                pStereoData,
                pData,
                outputData.Length * sizeof(float),
                stereoData.Length * sizeof(StereoData)
            );
        }
    }
}
This block of code describes the CopyToFloatArray method, which is used to efficiently copy data from an array of StereoData objects into an array of floats. In order to do this, unsafe code was used.

But what is unsafe code?
Unsafe code, as scary as it sounds, only really exists because of pointers. By turning on the unsafe code option in Unity, you can bypass some of the built-in safety features of C# and directly interact with memory and pointers (Hedgecock, 2022). In my case, the unsafe code is necessary for the low-level memory copy operation, which helps my audio buffering system perform at maximum efficiency.
AudioBufferManager
The AudioBufferManager is arguably the most important part of my audio buffering system. It contains all the necessary behaviour for managing the audio buffers for the synthesizer system. Some of the important behaviour from this class will be explained in the code blocks below.
private static ConcurrentDictionary<float, (StereoData[], (float, float))> _preloadAudioBuffers;
private static readonly float[] CurrentAudioBuffer = new float[MonoBufferSize];
private static readonly float[] NextAudioBuffer = new float[MonoBufferSize];
To start off the AudioBufferManager, three important variables were necessary: a variable for the current audio buffer, a variable for the next audio buffer, and a variable for the dictionary containing the preloaded audio data. It is important to note that quite a bit of thought went into which type of dictionary should be used in this situation. I researched the various dictionary types available in C#, and concluded that a ConcurrentDictionary was the best fit here, since the preloaded buffers are touched by more than one thread. A ConcurrentDictionary represents a thread-safe collection of key/value pairs that can be accessed by multiple threads concurrently (Microsoft, 2023).
The next important piece of the puzzle is the FillNextAudioBuffer method, which does exactly as the name suggests. The method fills the next audio buffer with audio data generated by given generator functions. This is crucial for producing sound in real-time using my GenerateSample method.
private static void FillNextAudioBuffer(
    Func<float, float, float, (float, float)> generatorLeft,
    Func<float, float, float, (float, float)> generatorRight,
    float frequency,
    float amplitudeLeft,
    float amplitudeRight
)
{
    var bufferData = _preloadAudioBuffers[frequency];
    var phaseLeft = bufferData.Item2.Item1;
    var phaseRight = bufferData.Item2.Item2;

    for (var i = 0; i < StereoBufferSize; i++)
    {
        var (left, updatedPhaseLeft) = generatorLeft(frequency, amplitudeLeft, phaseLeft);
        var (right, updatedPhaseRight) = generatorRight(frequency, amplitudeRight, phaseRight);
        StereoAudioBuffer[i] = new StereoData(left, right);
        phaseLeft = updatedPhaseLeft;
        phaseRight = updatedPhaseRight;
    }

    _preloadAudioBuffers[frequency] = (bufferData.Item1, (phaseLeft, phaseRight));
    StereoAudioBuffer.CopyToFloatArray(NextAudioBuffer);
}
The final important piece of code I would like to mention is the behaviour that allows for the switching between the current audio buffer and the next audio buffer. This is achieved by copying the data from one buffer to the other using the built-in C# method Buffer.BlockCopy, which copies a specified number of bytes from a source array to a destination array (Microsoft, 2023).
private static void CopyAudioBuffer(float[] source, float[] destination)
{
    Buffer.BlockCopy(
        source,
        0,
        destination,
        0,
        MonoBufferByteSize
    );
}

public static void SwitchAudioBuffers()
{
    CopyAudioBuffer(NextAudioBuffer, CurrentAudioBuffer);
}
Updating Synth System
Once my buffering system was complete, I had to update my existing code base to support it. This meant rewriting my synth modules and synth class to accommodate the buffering system properly.
Design
People ignore design that ignores people.
Frank Chimero
Another point of improvement from the first iteration of the project was design. More specifically, catering my design to the correct target audience. In order to do this properly, I needed to clearly define who my ideal user was, and what their needs and expectations would be. Only then could I begin creating a product that would be valuable for my target audience.
Value Proposition
To kick off this process, I went in search of the right tools to discover more about my ideal user. Luckily, I had a book called ‘From Idea to Startup’ that I could use for this. In this book, a few of the chapters are dedicated to discovering your ideal customer profile, and using this profile to design a product that is relevant for this target customer. Something that became apparent whilst I was reading this book is the immense importance of immersing yourself in the thought process of your target audience. What problems are they facing? What do they want to achieve? What do they find important? What do they value? These questions are essential for the design process (Kerkmeijer & van Zeeland, 2017).
One of the useful tools the book mentions to help answer these questions is a value proposition canvas. A value proposition canvas gives insight into the activities, needs, expectations, and goals of your user. It also provides a structure that allows you to think about how you can reach your user through proper product design (Kerkmeijer & van Zeeland, 2017). I thought this would be a useful tool for my use case. In order to fill in this value proposition, research into the current market and my target audience was necessary. I looked at music software like GarageBand and Logic Pro, as well as online music making systems like BeepBox and Online Sequencer. I researched articles and reviews about the user experience of these platforms, specifically what people liked and didn’t like. Then it was time for me to fill in my own version of a value proposition canvas:

Inspirations
Once I had a clearer idea of my target audience, it was time to search for inspiration. The most important questions I was asking myself here were: What do similar products look like? How do they achieve their goals and engage users? I started by looking up simple synth designs (both online and physical) and creating a mood board. This gave me a base framework of ideas that I could use to design my product.
Concept Design
Using all the information I had collected, I sketched out a concept design for the M.A.G.I system:

Bonus: Presets (Filters)
Since I had a little bit of time left over after having completed my designs, I decided to create a proof of concept of my idea for presets. I felt that this would solidify and visualise their importance better than simply reading a text would. In the sections below, I will delve into how I went about this.
Unity Filters
As mentioned in the beginning of this blog, I initially wanted to create my own filters for my synth. My idea was to program these filters mathematically and create presets of them that could be added to my synth game object. However, while researching this step, I stumbled upon a blog post about procedural audio in Unity. Though this was not what I wanted to do, something interesting was mentioned in this post, namely Unity’s built-in Audio Filters. Then it struck me: maybe I could make use of these built-in filters somehow rather than making everything myself. To see if this was possible, I began researching the different types of Unity filters, as well as the structure of the Unity Audio System.
Filter Types
Currently Unity has the following built-in filter types:
- Chorus Filter: creates a chorus-like sound by shifting the pitch and layering the sounds.
- Lowpass Filter: allows the low frequencies to pass, and ignores the other frequencies.
- Highpass Filter: allows the high frequencies to pass, and ignores the other frequencies.
- Flange: creates a swooshing sound.
- Reverb: creates a hall-like sound, as if sound is coming from far away.
- Echo: creates an echo effect.
- Distortion: creates a walkie-talkie like effect.
Each filter type has its own unique customizable inspector window where you can change how the filter sounds. The echo effect, for example, has variables to change the amount of delay and decay of the echo sound, as well as variables for the mixing of the audio.
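As an illustration of what configuring one of these filters from code could look like (the component and its properties are Unity’s built-in echo filter, but the values below are arbitrary examples, not my actual preset settings):
using UnityEngine;

public class EchoPresetExample : MonoBehaviour
{
    private void Start()
    {
        var echo = gameObject.AddComponent<AudioEchoFilter>();
        echo.delay = 250f;      // delay between echoes, in milliseconds
        echo.decayRatio = 0.4f; // how much each successive echo is attenuated
        echo.wetMix = 1f;       // volume of the echo signal
        echo.dryMix = 1f;       // volume of the original, unprocessed signal
    }
}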
Audio System
The Unity Audio System can be quite complex, but let me try to break it down simply. Audio in a Unity Scene is typically played through an Audio Source, which in my case is attached to my synth game object. In order for audio to be picked up by the Audio System, at least one Audio Listener needs to be present in the scene. As the name suggests, an Audio Listener listens to the audio in a given scene and plays the sound accordingly.

When more control over the way audio sounds is desired, you can use something called an audio mixer. An audio mixer is a container of audio effects and audio channels that can be used to apply effects to audio data collectively. Within these audio mixers, you can create multiple mixer groups that each store different groups of effects (Unity Technologies, 2022). These groups can then be used to switch the effects being applied to audio at runtime. For example, if you want sound to appear more muffled when a player goes underwater, a mixer group containing distortion and echo can be used. In my case, these mixer groups could be very useful. All I would have to do is create a mixer group containing the desired filter effects and apply that to the active audio source of my synth. Then I could have a button or other UI object contain these effects so my user could switch between them at runtime.
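The core of that idea fits in a few lines. Here is a hedged sketch (the class and field names are my own, but AudioSource.outputAudioMixerGroup is the Unity property that does the routing):
using UnityEngine;
using UnityEngine.Audio;

public class PresetSwitcher : MonoBehaviour
{
    [SerializeField] private AudioSource synthAudioSource;

    public void ApplyPreset(AudioMixerGroup presetGroup)
    {
        // Everything this source plays is now processed by the chosen group's effects
        synthAudioSource.outputAudioMixerGroup = presetGroup;
    }
}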
Test Effect
To test the idea that I had, I created a test mixer group that could be activated and deactivated using a button. This effect contained a flange and an echo effect. I then added a simple UI button to the scene to test if the effect worked. Some of the important code and the inspector window for this effect can be found below.
[System.Serializable]
public class AudioMixerGroupUnityEvent : UnityEvent<AudioMixerGroup> {}

// Inside the button script, the stored mixer group is sent through the event on click:
_base = GetComponent<Button>();
_base.onClick.AddListener(() => audioMixerGroupUnityEvent.Invoke(audioMixerGroup));

Once this effect was applied, this is what it sounded like when activated:
Problems with current implementation
- Sine and triangle oscillation waves have audio artefact issues.
- The UI is not always clear. Buttons do not stay highlighted when an effect or wave type is selected.
- Design has not been tested with users yet.
Possible Solutions
- Rewrite the buffering system to contain an attack phase from an ADSR envelope.
- Create UI managers for managing button states.
- Create and implement a test plan for my designs.
Iteration 3: Cleaning Up

Main Challenge(s):
How can audio be generated in Unity using the basics of audio synthesis?
How can the performance of the system be measured, maintained and improved upon?
Goals
The main goals I had for this iteration were as follows:
- Implement the attack phase of an ADSR envelope to decrease audio artefacts.
- Check the overall performance of the synthesizer (focus on finding possible improvements).
ADSR Envelope
Some of the issues that I noted in the second iteration had to do with audio artefacts, specifically with the sine and triangle waves. When played, a noticeable clicking or popping sound occurs, which can form a real hindrance when using the synth. I decided it was time I tried to improve these issues.
Why the pop?
Before I could fix this issue, I did have to do some research into the cause. To explain it simply: these audio artefacts are often caused by cutting off the sound wave somewhere other than a zero crossing point. The zero crossing point is the point where the sign of a mathematical function goes from negative to positive or vice versa (Steinberg Media Technologies GmbH, 2018). To prevent audio artefacts, the amplitude of the sound wave should be ramped up or down so that the wave only starts or stops at (or near) a zero crossing.
This is visualised in the two images below. Here the zero crossing point is the line running horizontally. As you can see in the first image, a click will appear if the sound wave is cut off abruptly when it’s not at the crossing point. However, if the audio is either stopped or the amplitude of the audio is ramped up/down when this happens (depending on the position on the graph), then no clicking sound will occur.


To fix this, some type of method is necessary that applies a gradual change to the volume of the generated sample over time, rather than cutting the wave off abruptly. Luckily, in my research at the beginning of the project, I had already come across something designed to aid this issue: an ADSR envelope. For more in-depth information on ADSR envelopes, see the Audio Synthesis Quick Guide chapter.
Implementing the fix
To implement an ADSR Envelope, I needed to create a class to store my envelope code, and I needed to update my buffering system to support this behaviour.
Before starting the implementation, I first looked at my buffering system again. I knew that I was going to have to implement quite a few changes to this system in order to get the envelope code to work. Upon further inspection, I also discovered that implementing a whole ADSR envelope with an attack, decay, sustain, and release phase would be a lot of work; perhaps too much work for the current scope. So I started to think about which parts of the ADSR envelope I really needed to prevent as many of the audio artefacts as possible.
I concluded that the most important part that I needed to implement was the attack phase. Most of the audio artefacts occur when a key is initially pressed, which is during the attack phase. To do this I first created a static class to store the attack of the envelope:
public static class EnvelopeManager
{
    public static float SetAttack(float currentAmplitude, float targetAmplitude, float attackTime, float phase)
    {
        if (currentAmplitude < targetAmplitude)
        {
            currentAmplitude += attackTime * phase;
        }
        return currentAmplitude;
    }
}
I then updated my buffering system to make this code work. The changes I made were quite extensive, so I won’t go into it too much. But to cut a long story short: I needed to access the amplitude and phase data of my generated audio buffer, so this was added to the functionality of the system. I then updated the methods for filling the audio buffer to implement the SetAttack method:
for (var i = 0; i < StereoBufferSize; i++)
{
    _currentAmplitude = new StereoData(
        EnvelopeManager.SetAttack(
            _currentAmplitude.LeftChannel,
            _currentTargetAmplitude.LeftChannel,
            _currentAttackTime,
            _currentPhase.LeftChannel
        ),
        EnvelopeManager.SetAttack(
            _currentAmplitude.RightChannel,
            _currentTargetAmplitude.RightChannel,
            _currentAttackTime,
            _currentPhase.RightChannel
        )
    );
    // here the rest of the code
}
Ensuring the performance
Since I changed a lot of elements in my buffering system, I needed to ensure that it was still performing properly. There are a few ways I can analyse the performance of the system:
- The frame rate of the Unity Scene.
- The DSP loading rate (represents the amount of performance the audio is using).
- The memory usage of the native code.

Here is a screenshot of the current overall performance while running the Unity Scene and playing some sound effects. As you can see the frame rate sits comfortably between 400 and 500 frames per second. This is a great performance for a system of its calibre. Even with frame dropping when running on lesser specs, the frame rate should stay high enough to ensure seamless performance.
The DSP loading rate is at around 6%. This is moderately taxing for performance, but upon researching online, I found that as long as it stays below 15%, the performance should be good for most laptops and even some modern phones. This was good news as well.
I also went ahead and ran the memory profiler in Unity, which measures the total memory usage of a Unity Scene while in play mode. In the screenshot below, you can see the overall memory usage of the system divided into categories. The interesting category here for my system is native, since my buffering system performs low-level memory operations when filling the audio buffers.

The current usage of the entirety of the native code is 90MB. However, the native code of the Unity Editor is also added into this amount. So I zoomed in on my specific scripts, and found their usages to be minimal:


It should be noted, however, that when running the buffering system on lesser hardware, the performance does take a slight dive. This will have to be looked into in the next iterations.
More Presets
Since I had already set up the foundation for creating audio presets in the previous iteration, I could now create and implement new presets in the synth system quickly and efficiently. I decided to implement the following presets into the system:
- Cosmic: This preset is designed to replicate the environment of outer space, enhancing the auditory experience with echo and reverb to create a sense of vast emptiness.
- Haunted: This preset should evoke the eerie and haunting sounds reminiscent of a classic church organ.
- Electric: This preset is configured to emulate the distinctive sound of an electric guitar.
- Quiver: This preset utilizes flanger effects to introduce an unconventional sounding, almost shaky, tone.
I implemented these presets, applying effects and experimenting along the way. However, my work took an unexpected turn when a significant flaw surfaced. Instead of describing it, allow me to demonstrate it to you.
But fair warning, put the sound of this video down before you play it if you are wearing headphones.
More audio artefacts! However, this time caused by something a little more difficult for me to fix: the mathematical nature of the sine and triangle waves. Because sine and triangle waves are fundamental wave types, they can have quite a sharp-sounding tone and overall sound. This is okay when the waves are played isolated. When combined with effects like chorus or reverb, however, the sharpness of these waves can cause a lot of unwanted sounds to be generated.
Luckily, I had a different solution in mind. My new plan was to connect each preset to a predefined wave type. This way I could pick the most suitable wave type for each effect, and prevent any undesirable audio issues from occurring. I did this by updating my Activate Module Button script to include the activation of an audio mixer group. For the standard sound types like square, sawtooth, etc., I activate the main mixer group, which only controls the overall amplitude of the sound. For the effect presets, the audio mixer group with the corresponding effects is activated. The bright side of this new system is that all my UI buttons can now be managed using the same script.

Bonus: Audio Visuals
I had some time left over at the end of the sprint, so I decided to make a start on the audio visuals system that I was planning on implementing in my final design. When I was designing my synth system I was looking at many audio visual examples. The ones I liked the most, and included in my final design, looked a lot like the image I have displayed below.

To make this type of audio visuals system, I was going to require the following elements:
- A script for accessing the spectrum data in my Unity Scene.
- Some type of modular system for creating and managing audio visual effects.
- An audio visual effect for randomly changing the colour of a game object on beat.
- An audio visual effect for changing the scale of a game object on beat.
Accessing Spectrum Data
Spectrum data refers to the current audio data representing the frequency spectrum of an audio signal. So to put it simply, the spectrum data gets the frequency data currently being generated by the audio samples being played. This frequency data can be used to create visual effects while the audio source is playing.
To store the spectrum data, I had to use the built-in Unity method GetSpectrumData to create an array for this data:
public static float SpectrumValue { get; private set; }
private void Start()
{
_spectrumData = new float[settings.SpectrumSize];
}
private void Update()
{
AudioListener.GetSpectrumData(_spectrumData, 0, settings.FftWindow);
AudioListener.GetSpectrumData(_spectrumData, 1, settings.FftWindow);
if(_spectrumData is {Length: > 0})
SpectrumValue = _spectrumData[0] * settings.EffectMultiplier;
}
Modular System
The next thing I did was create a base Audio Effect class to store the general behaviour that all my audio effects should have. This is comparable to the system I designed for the oscillation functions in iteration one. I also created scriptable objects for storing the spectrum data and general audio visual settings. The following important elements are stored in these objects (a rough sketch of such a settings object follows the list):
- Float values for bias, effect duration, and total time (these three values determine the overall look and feel of the visual effect).
- An integer value for the spectrum size (this determines the amount of spectrum data to analyse in one go). By changing this amount, the visual effects become more or less detailed. This also has an impact on performance.
- A value for storing the type of FFT (Fast Fourier Transform) window. This is a mathematical function that helps improve the accuracy of the frequency calculations. It is particularly useful in the case of non-periodic signals.
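Here is what such a settings object could look like. This is a sketch based on the list above and on the settings fields used in snippets elsewhere in this post; the asset menu path and exact member names are assumptions:
using UnityEngine;

[CreateAssetMenu(menuName = "MAGI/Audio Visual Settings")]
public class AudioVisualSettings : ScriptableObject
{
    [SerializeField] private float bias;
    [SerializeField] private float effectDuration;
    [SerializeField] private float totalTime;
    [SerializeField] private float effectMultiplier = 1f;
    [SerializeField] private int spectrumSize = 512; // must be a power of two for GetSpectrumData
    [SerializeField] private FFTWindow fftWindow = FFTWindow.BlackmanHarris;

    public float Bias => bias;
    public float Duration => effectDuration;
    public float TotalTime => totalTime;
    public float EffectMultiplier => effectMultiplier;
    public int SpectrumSize => spectrumSize;
    public FFTWindow FftWindow => fftWindow;
}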


Effects
The two effects I needed to make were:
- An effect that would randomly change the colour between a set of defined colours depending on the frequency data.
- An effect that would randomly change the x and the y scale of an object depending on the frequency data.
Scale Effect
To create a scale changer I created an audio effect called “Scale Effect”. This script changes the scale of a game object using coroutines and lerping functions, using random scale values from a predefined range.
The IEnumerator named “ScaleCoroutine” that can be seen below gradually changes the scale of a game object over time. It calculates the interpolation between the current scale and the target scale using Vector3.Lerp. The process continues until the current scale matches the target scale, with time-based updates, and then it resets the scale to its initial state.
private IEnumerator ScaleCoroutine(Vector3 newScale)
{
    var timer = 0f;

    while (_currentScale != newScale)
    {
        _currentScale = Vector3.Lerp(_currentScale, newScale, timer / settings.Duration);
        timer += Time.deltaTime;
        transform.localScale = _currentScale;
        yield return null;
    }

    transform.localScale = scaleSettings.MinSize;
}
Colour Effect
To achieve the colour changing effect, I created another audio effect called “ColourChanger”.
The ColourChanger script changes the color of a SpriteRenderer component in response to changes in the spectrum data. In the Start method, it selects a random color from an array that has been defined and sets it as the initial color. The IEnumerator “MoveToColour” that can be seen below gradually transitions the color of the SpriteRenderer toward a target color using linear interpolation. The script then randomly selects a new color on each beat, smoothly transitioning to it using a coroutine.
private IEnumerator MoveToColour(Color target)
{
    var timer = 0f;
    var currentColor = _spriteRenderer.color;

    while (currentColor != target)
    {
        currentColor = Color.Lerp(currentColor, target, timer / settings.Duration);
        _spriteRenderer.color = currentColor;
        timer += Time.deltaTime;
        yield return null;
    }
}
Now all I had to do was add the effects I wanted to the correct game object and select the corresponding scriptable objects:

Here is me playing around in the synth system with all the fixes and new features added:
Problems with current implementation
- There is still some odd behaviour with the buffering system. It currently does not perform optimally on every hardware system and with every set of specs.
- The audio visuals can be improved upon. Right now it’s not always clear that they are showcasing the frequency data of the synth, and some other effect options could be created for experimentation.
- The notes system has not yet been implemented, and the code for the frequency table does not support it.
Possible Solutions
- Rewrite some parts of the code in complete C code to communicate directly with the hardware. Load this code into the Unity project as a DLL.
- Rewrite audio visuals system to perform optimally for my use case. Specifically look into frequency band types and how this can be useful.
- Rewrite frequency table to contain behaviour for the printing of the musical notes using some type of dictionary system to store the frequency value (float) and the corresponding note (string).
Iteration 4: Performance

Main Challenge(s):
How can the performance of the system be measured, maintained and improved upon?
Goals
My goals for this iteration were:
- Create a DLL for the oscillation functions to improve the overall performance of the system.
- Update the audio visuals to better showcase the frequency data.
- Design and implement a system for showcasing the musical notes that are being played.
- Polish the overall design of the synth system.
Synthesizer Native
As highlighted earlier in this blog post, achieving optimal performance is a critical aspect of my project, particularly with the buffering system. When I encountered suboptimal performance of the synthesizer on lesser computer systems, and noticed audio latency problems in specific situations, it became evident that I needed to take action. But what was the right solution?
My initial thought was to incorporate low-level C code for the oscillation functions rather than relying on C# code. The reason behind this choice is that C offers more direct control over hardware resources, typically resulting in more optimized and efficient code (provided you have the necessary expertise). The primary challenge with this solution was figuring out how to integrate C code into my Unity project while managing its loading and unloading. Luckily, I stumbled on an article detailing how this could be handled. More on this later.
What is a native plugin?
After reviewing both the article mentioned earlier and Unity’s documentation, it became evident that I needed to create a native plugin for my oscillation functions. A native plugin can be described as follows:
A software module or extension that is integrated into a host application that is intended to enhance the functionality or performance of the host application (Lutkevich, 2021).
In simpler terms, the native plugin would consist of C code containing all the necessary methods for my oscillation functions. This plugin could be imported into my C# files and utilized like any other plugin. Another term often used for this type of plugin is a dynamic link library (DLL). A dynamic link library comprises small programs that can be loaded to perform specific tasks, precisely what I required for my C plugin (Lutkevich, 2021). This led to the creation of my native plugin synthesizer_native. For those interested in the complete code for this library, you can find it in the GitHub repository linked here: https://github.com/RoachieBoy/synthesizer_native
The C code
In the following sections, we will delve into the most noteworthy aspects of the synthesizer_native library’s codebase. We’ll start with an exploration of the essential code contained within the header file.
In C, header files are used to declare functions, variables, and macros (Jackson, 2023). The code snippet presented below is extracted from the header file of the synthesizer_native library. This snippet defines a structure called sample_state_t and introduces the generate_sine_sample function. The sample_state_t struct is essentially a container that holds the sample and phase data for each oscillation function. The generate_sine_sample declaration is an example of one of the generation functions that I have written in C. It’s important to note that this procedure has been repeated for each of the core oscillator functions.
typedef struct {
    float sample;
    float phase;
} sample_state_t;

SYMBOL_EXPORT sample_state_t generate_sine_sample(float frequency, float amplitude, float initialPhase, float sampleRate);
// here the same for all other oscillation functions
In the next code snippets, some of the noteworthy code from the C file is showcased. The first code snippet defines the full generate_sine_sample function. This function is responsible for computing and returning a sample of a sine wave, mirroring the functionality of the corresponding C# method. An important thing to note here is that this definition is linked to the generate_sine_sample declaration in the header file; the declaration in the header refers to this definition. In order for this link to work properly, both have to have the same name and the same parameters. A function for each core oscillation wave has been written in the C file.
sample_state_t generate_sine_sample(float frequency, float amplitude, float initialPhase, float sampleRate) {
    float phaseIncrement = frequency / sampleRate;
    // apply the amplitude once when computing the sample
    float sample = amplitude * sinf(initialPhase * 2.0f * PI);
    // the helper is assumed to return the wrapped phase, i.e. (initialPhase + phaseIncrement) modulo 1
    float updatedPhase = calculate_updated_phase(initialPhase, phaseIncrement);
    return (sample_state_t) {sample, updatedPhase};
}
// here functions for all other oscillation waves
// here methods for all other oscillation waves
In the next code snippet, the ping_pong function is shown. This function was necessary for the calculation of the triangle wave. In Unity’s C# API, this function is built in (Mathf.PingPong). In C, however, I had to write my own version. This function ensures that a value oscillates back and forth between 0 and a maximum value in a periodic manner.
static float ping_pong(float value, float max) {
    // wrap the value into the range [0, 2 * max), then mirror the second half back down
    float modulated = fmodf(value, 2.0f * max);
    if (modulated > max) {
        return 2.0f * max - modulated;
    }
    return modulated;
}
Unity Implementation
The final step of the DLL implementation, was ensuring I could use this code in my Unity project. This is where that aforementioned article came in handy. Using Unity’s built-in invocation services, I am able to call functions from my synthesizer_native library. I can do this by using the attribute [DllImport(DllName, CallingConvention = CallingConvention.Cdecl, ExactSpelling = true)], which provides information to the C# runtime about which functions you want to import from the specified DLL.
private const string DllName = "synthesizer_native";

[DllImport(DllName, CallingConvention = CallingConvention.Cdecl, ExactSpelling = true)]
public static extern SampleState generate_sine_sample(
    float frequency,
    float amplitude,
    float initialPhase,
    float sampleRate
);
Here is how the current implementation of my sine wave looks in my Unity project:
protected override SampleState GenerateSample(float frequency, float amplitude, float initialPhase)
{
    return SynthesizerNative.generate_sine_sample(frequency, amplitude, initialPhase, SampleRate);
}
Performance Test
To assess the system’s performance, I utilized Unity’s integrated audio latency analyzer. To enable this feature, I created a custom editor for the script responsible for audio behavior, specifically, my Synth script.
[CustomEditor(typeof(Synth))]
public class AudioEditor : Editor {}
When I played my synth now, I could see that all audio was being generated every 10 to 13ms, with an average of 11ms (see image below).

When I performed the same test on a lesser system, the latency increased slightly and sat at around 20–25 ms. Compared to the previous iteration, this was still a welcome improvement in performance for the system.
Audio Visuals redone
As mentioned previously, the first iteration of my audio visuals was not performing optimally. The main issue was that I randomised too much of the behaviour. If you remember, in the previous iteration I wrote a script for generating random colours and a script for randomly updating an object’s scale on beat. However, in the case of a synthesizer system, a user wouldn’t want everything to be random. Rather, a user would want a clear representation of the frequency data (or at least as clear as possible). I decided that I needed to rewrite my audio visuals system to contain behaviour that you would be more likely to find in audio software.
I began scouring the internet for examples in the hope that someone had already made a system like this in the past. I really needed a guiding hand through this process. Luckily I found a series of YouTube videos of someone who had done something similar. These videos aided me in forming the foundation of my audio visuals system.
Important Changes
The most important changes to the audio visuals system take place in the newly created script FrequencyAnalyser. This is based on a script with the same name from the tutorial I mentioned earlier. The main job of the FrequencyAnalyser script is to gather audio frequency data using Unity’s built-in GetSpectrumData method, and to store that data for usage in various audio visual effects. Some interesting code snippets will be explained next.
In the code block below, code from the FixedUpdate method has been provided. In the FixedUpdate method, the audio frequency data is collected and processed. Using _audioSource.GetSpectrumData(_samples, 0, fftWindow), I obtain the spectrum data. For each data point in the spectrum, I compare it with the previous data stored in an array called _sampleBuffer. If the current data is greater, I update the buffer with the new data value; otherwise, I smoothly decrease it using Mathf.Lerp. This results in a smooth audio visualisation. Finally, the UpdateFreqBands8() and UpdateFreqBands64() methods are called to process the visualisations for two different frequency band counts.
_audioSource.GetSpectrumData(_samples, 0, fftWindow);

for (var i = 0; i < _samples.Length; i++)
{
    if (_sampleBuffer[i] < _samples[i])
        _sampleBuffer[i] = _samples[i];
    else
        _sampleBuffer[i] -= Mathf.Lerp(_samples[i], _sampleBuffer[i], Time.deltaTime * smoothDownRate);
}

UpdateFreqBands8();
UpdateFreqBands64();
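The snippet above calls UpdateFreqBands8(), which is not shown in this post. Based on the tutorial approach I followed, it could look roughly like the sketch below; the exact sample weighting in my real script may differ.
private void UpdateFreqBands8()
{
    var sampleIndex = 0;

    for (var band = 0; band < 8; band++)
    {
        // Each successive band covers roughly twice as many raw samples as the previous one
        var sampleCount = (int) Mathf.Pow(2, band + 1);
        var average = 0f;

        for (var i = 0; i < sampleCount && sampleIndex < _sampleBuffer.Length; i++)
        {
            // Weight later samples slightly so the upper bands remain visible
            average += _sampleBuffer[sampleIndex] * (sampleIndex + 1);
            sampleIndex++;
        }

        average /= sampleCount;
        _freqBands8[band] = average;
    }
}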
The frequency bands are specific ranges or segments of the audio frequency spectrum. They determine the level of detail of the audio visualisation: more segments mean more detail. To use the correct number of frequency bands in the scripts of the audio visualisation effects, the following method was written:
public float GetBandAmount(BandTypes bands, int index)
{
    return bands switch
    {
        BandTypes.Eight => _freqBands8[index],
        BandTypes.SixtyFour => _freqBands64[index],
        _ => 0
    };
}
This method returns a value from the correct frequency band array, depending on the chosen number of frequency bands.
Test Effect
To test the new audio visual system, I created an effect reminiscent of an FM Spectrum Line. This dynamic visualization responds to audio input, displaying frequency bands in an almost graph-like way. Below, I have added a reference image for an FM Spectrum Line.

To create this effect, I created a script called SpectrumLine. The main function of this script is manipulating the positions of certain points on a line based on frequency data. The most important behaviour of the SpectrumLine script can be found in the code block below:
for (var i = 0; i < _line.positionCount; i++)
{
    var xPos = i * _spacing;

    // Get the current y position from the audio data and scale it
    var yPos = FrequencyAnalyser.GetBandAmount(FrequencyBandCount, i);

    // Clamp the yPos value to stay within a maximum value
    yPos = Mathf.Clamp(yPos, MinimumYPosition, maxLineLengthHeight);
    var newYPos = Mathf.Lerp(_line.GetPosition(i).y, yPos, Time.deltaTime * smoothRate);

    // If there's no data, smoothly interpolate the y position to 0
    if (yPos <= MinimumYPosition) yPos = newYPos;

    var pos = new Vector3(xPos, yPos, 0);
    _line.SetPosition(i, pos);
}
This code iterates over the points on the spectrum line and updates the positions of these points on the y-axis using the FrequencyAnalyser.GetBandAmount method described previously. It also ensures that the y-position is clamped within a given range, so that the spectrum line stays within view at all times. Below is a snapshot of the audio visuals in action:

Musical Notes System
In my next phase, I tackled the task of adding musical notes to the synthesizer system. The goal was to create a visual representation of the musical notes and scale currently in use. Since I didn’t have much prior knowledge of musical theory or specific musical notes, I had to dive into this world to get a grasp of what I needed for this task.
A (not so) deep dive into musical theory
As mentioned previously, a piano is made up of white keys and black keys. Below I will explain how the notes for these keys work musically.

White keys: The white keys, also called natural keys, are the foundation of a piano. They are named after the first seven letters of the alphabet. These notes in the order C,D,E,F,G,A,B repeat sequentially across the entire length of the piano, with the next C representing the next octave (Plus, 2022).
Black keys: The black keys are the flats and sharps of the notes neighbouring them. For example, the black key between the C and D keys is a C#/Db (pronounced C sharp or D flat). These black keys allow for the inclusion of semitones or half steps in the musical scale (Plus, 2022).
Implementation
To implement this logic into my current project, I had to undertake a few steps:
- Create some sort of data structure to store the notes C-B. I eventually settled on an Enum.
- Update my frequency table (currently an IReadOnlyList<float>) to also store data for the notes.
- Write some kind of CalculateNote method to calculate the current note of the key being pressed.
- Implement some kind of string event to print the note to the Unity scene.
To update my frequency table, I decided to turn it into a frequency dictionary. I did this by having the FrequencyTable script implement the built-in interface IReadOnlyDictionary<float, string>, with the floats being the frequency values and the strings being the notes for the visualisation. In this script, I added the method for calculating the note. This method can be found in the code snippet below.
private string CalculateNote(int keyNumber)
{
    var noteValue = (int) baseNote + keyNumber - baseKeyNumber;
    var octaveValue = baseOctave;

    // when the note goes past B, wrap it back around and move up an octave
    while (noteValue > (int) Notes.B)
    {
        noteValue -= (int) Notes.B;
        octaveValue++;
    }

    // when the note drops below C, wrap it back around and move down an octave
    while (noteValue < (int) Notes.C)
    {
        noteValue += (int) Notes.B;
        octaveValue--;
    }

    return $"{((Notes) noteValue).ToFormattedString()}{octaveValue}";
}
This method uses a base note, base key number, and base octave as reference points. It handles note transitions between octaves, iterating from C to B using while loops. The result is a formatted string representing the musical note with its associated octave.
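For reference, the Notes enum that the method above leans on is not shown in this post. A sketch of one possible layout is below; starting the values at 1 keeps the wrap-around arithmetic in CalculateNote consistent, but the actual enum (and its ToFormattedString extension) in the project may differ.
public enum Notes
{
    C = 1, CSharp, D, DSharp, E, F, FSharp, G, GSharp, A, ASharp, B
}

public static class NotesExtensions
{
    public static string ToFormattedString(this Notes note)
    {
        // Turn entries like CSharp into "C#" for display purposes
        return note.ToString().Replace("Sharp", "#");
    }
}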
To have the note updated and printed in my Unity Scene, I made use of a custom string event. This custom string event is set to update a text object automatically when a value change for my note string is detected.
Miscellaneous Fun
Some of the other things I worked on were more design/UI related. Here is a summarised look at these elements:
- I implemented text effects (using vertex and mesh manipulation) like vertex wobble, character wobble, and word wobble for the custom effect buttons to make them more playful and to attract the eye of the user.
- I added a custom clef note cursor with a trailing music sheet material to make the interaction in the synth system more fun.
- I decided to implement sound effects for the selection of an effect and when switching an octave (using UI buttons). This was a point that was brought up during the user testing in the previous iteration.
- I updated the post-processing effects and background of the Unity Scene for a more clean, polished design.
Problems with current implementation
- The scaling of the UI doesn’t work properly. UI objects move as the screen size changes.
- The structure of the Audio Visuals system does not yet support the possibility of switching between two effects.
- The UI button sounds have a slight delay because of the way most UI sounds are recorded.
- The overall structure of the UI needs to be re-envisioned.
Solutions
- Make use of UI anchors to help prevent UI objects from moving.
- Redesign the core of the Audio Visuals to use a similar concept of switching between things at runtime that was applied for the oscillation functions.
- I should trim the UI sounds slightly to prevent that small delay.
- I can implement my concept design to get more structure into my UI layout.
Iteration 5: Wrapping things up

Main Challenge(s): How can all the elements of the synth system be brought together in Unity to form a working product?
Goals
The main goals I had for the final iteration were:
- Have the final version of the audio visuals done.
- Complete the final touches to the polish of the system.
Finishing Audio Visuals
The first thing I worked on during the last iteration were the final touches to the new audio visuals system. As I touched upon briefly in the previous chapters, I still needed to implement behaviour that allowed for switching between audio visual effects. I had a vision of allowing the user to choose between the original idea I had (the bars that move up and down), and the FM spectrum line. To make this possible, a few new elements needed to be added to the audio visuals system. Below I have provided a diagram containing a rough overview of the final implementation for the audio visuals system. I will use this to briefly explain the changes I made.

I first needed to rewrite the bars effect from scratch. This is because the effect was initially randomised, as I explained in iteration 4, and I now needed it to better represent the frequency data. I followed a similar step-by-step process for the bars effect as I did with the spectrum line. The final result for the bars effect will be shown at the end of this section.
The next thing I implemented was an abstract class called AudioVisualEffect. In this abstract class I defined all the methods and properties an audio visual effect would require. I updated both my FM spectrum line and my bars effect to inherit from this script. Once this step had been completed, I created a script for managing the entirety of the audio visuals system (AudioVisualEffectManager) and I created a script for activating a given audio visual effect when a button is clicked (ActivateAudioVisualEffect). I applied the scripts to the necessary game objects and voila! The audio visuals were up and running.
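To give an impression of that contract, here is a rough sketch of what an AudioVisualEffect base class along these lines could look like. The members in my real base class differ, so treat this as an illustration of the structure rather than the project code:
using UnityEngine;

public abstract class AudioVisualEffect : MonoBehaviour
{
    // Called by the AudioVisualEffectManager when the user selects this effect
    public virtual void Activate() => gameObject.SetActive(true);

    public virtual void Deactivate() => gameObject.SetActive(false);

    // Each concrete effect decides how the frequency data is turned into visuals
    protected abstract void UpdateEffect();
}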
Final polish
There were a couple of elements that I needed to finalise for the polish of the M.A.G.I system:
- I needed to make and implement UI backgrounds for the oscillation and preset buttons.
- I needed to give the oscillation and preset buttons a fitting name.
- I needed to finalise the colours and overall look of the system.
- I needed to make sure that the display of the note screen and the display of the audio visuals got a somewhat transparent background to make them stand out a bit more.
- I still needed to fix the slight audio delay on the UI sounds.
- Add a button for quitting the application.
- Make sure everything looks good in a build.
Once all the elements for the polish were completed, this is what the final product of the system looked like:
Conclusion
The journey from coming up with the idea for the M.A.G.I Synth System to finishing it has been a fascinating and educational experience. I ventured through each iteration, doing my best to keep the main goal of this research and development project at the back of my mind. And, excuse the pun, I think I struck the right notes in the end. As I mentioned at the start of the blog, my main goal was to “create a simple monophonic synthesizer system in Unity using the basic principles of audio synthesis”. Reflecting on the finished product, I can confidently say that this goal has been achieved. I have a fully functioning monophonic synthesizer that uses oscillation, filters and modulation as I initially wanted. Plus, I even had some time left to work on a few other nice elements here and there, like audio visuals. All in all, I would call this a successful project.
For more information on my sources, please see the following chapter. And most importantly, for those of you who stuck it out and made it through this mammoth of a blog post: thanks for coming along for the ride!
Resources
Hedgecock, R. (2022, August 7). How to stay SAFE with UNSAFE code in C#. [Blog post]. https://blog.hedgecock.dev/2022/how-to-stay-safe-with-unsafe-code/
Hedgecock, R. (2022, June 21). (Part 1) Runtime Audio Generation in the Unity Engine – Creating Simple Sounds. [Blog post]. https://blog.hedgecock.dev/2022/unity-audio-generation-simple-sounds/
Hedgecock, R. (2022, June 3). (Part 0) Runtime Audio Generation in the Unity Engine – Fundamentals. [Blog post]. https://blog.hedgecock.dev/2022/unity-audio-generation-fundamentals/
Lazaga, C. (2020). Sound Synthesis 101: Synth Basics. AudioMunk. https://audiomunk.com/
Mitchell, D. (2008). BasicSynth: Software synthesis. https://basicsynth.com/index.php?page=default
Buffering in Operating System. (n.d.). javatpoint. https://www.javatpoint.com/buffering-in-operating-system
Gonkee. (n.d.). The math behind music and sound synthesis [Video]. YouTube. https://www.youtube.com/watch?v=Y7TesKMSE74
Digital audio concepts. (2023, July 4). Web media technologies | MDN. https://developer.mozilla.org/en-US/docs/Web/Media/Formats/Audio_concepts
Krumhansl, C. L. (1995). Music Psychology and Music Theory: Problems and Prospects. Music Theory Spectrum, 17(1), 53–80. https://doi.org/10.2307/745764
MY.GAMES. (2023, October 19). Creating, connecting, and optimizing native plugins for Unity. Medium. https://medium.com/my-games-company/creating-connecting-and-optimizing-native-plugins-for-unity-59fee94b5f3d
Unity Technologies. (n.d.). Unity – Manual: Native plug-ins. https://docs.unity3d.com/Manual/NativePlugins.html
Jackson, M. (2023, April 18). Header Files Directives in C – Muiru Jackson – Medium. Medium. https://medium.com/@muirujackson/header-files-directives-in-c-546db6ee56e3
Lutkevich, B. (2021, November 18). dynamic link library (DLL). SearchWindowsServer. https://www.techtarget.com/searchwindowsserver/definition/dynamic-link-library-DLL
Plus, V. a. P. B. S. M. (2022, March 24). Learn how to read sheet music: Notes for music. Take Note Blog. https://blog.sheetmusicplus.com/2015/12/30/learn-how-to-read-sheet-music-notes/
City of Melbourne Libraries. (2021, May 9). Creating audio reactive visuals using Unity 3D – Tutorial 1 [Video]. YouTube. https://www.youtube.com/watch?v=uwCjzUTpR1E
Microsoft. (2023). ConcurrentDictionary Class (System.Collections.Concurrent). Microsoft Learn. https://learn.microsoft.com/en-us/dotnet/api/system.collections.concurrent.concurrentdictionary-2?view=net-7.0
Microsoft. (2023). Buffer.BlockCopy(Array, Int32, Array, Int32, Int32) method (System). Microsoft Learn. https://learn.microsoft.com/en-us/dotnet/api/system.buffer.blockcopy?view=net-7.0
Kerkmeijer, Sabine & van Zeeland, Natalie (2017). Van idee naar startup. Boomhoger onderwijs.
Unity Technologies. (n.d.). Unity – Manual: Audio Filters. https://docs.unity3d.com/Manual/class-AudioEffect.html
