
FM Icebreaker

A polyphonic FM synth web application inspired by Elektron Digitone and Yamaha DX7

To use the synth, open this website in Google Chrome.

What is FM Synthesis

Frequency Modulation Synthesis (FM Synthesis) is a form of non-linear sound synthesis encompassing an entire family of techniques in which the instantaneous frequency of a carrier signal is itself a modulating signal that varies at audio rate. This type of synthesis can produce an extremely wide range of different sounds with a small number of parameters, since the non-linearity greatly enriches the input signal's spectrum.

In this synth, FM is implemented as Phase Modulation Synthesis, in which the modulating signal does not affect the frequency directly, but only the instantaneous phase. Both carrier and modulator are called operators, and there can be more than two of them.

The expression of a modulated signal is

    s(t) = A · sin(α·t + I · sin(β·t))

where:

  • s: output signal
  • A: amplitude of the modulated signal
  • α: carrier frequency
  • β: modulating frequency
  • I: modulation index

The modulation index quantifies the amount of spectral enrichment obtained by modulating one operator with another, i.e. the side frequencies added to its spectrum. If I = 0 the modulator doesn't affect the modulated spectrum; the higher I, the higher the number of perceived side frequencies.

The ratios between operators, instead, define the positions of the harmonics of the generated sound with respect to the carrier: if they are all integers the sound will be harmonic, while if at least one of them is non-integer the result will be inharmonic.

The generated sound is also affected by the envelope of each modulator, which doesn't modify its amplitude but only its timbre and spectral bandwidth: the overall amplitude depends on the carrier's envelope alone.
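
The two-operator expression above translates almost directly into code. The following sketch (illustrative names and values, not the synth's own implementation) renders a phase-modulated signal sample by sample:

```javascript
// One sample of s(t) = A · sin(α·t + I · sin(β·t)), with α = 2π·fc
// and β = 2π·fm. All names here are illustrative.
function pmSample(t, A, fc, fm, I) {
  const alpha = 2 * Math.PI * fc; // carrier angular frequency
  const beta = 2 * Math.PI * fm;  // modulator angular frequency
  return A * Math.sin(alpha * t + I * Math.sin(beta * t));
}

// Render one second at a 44.1 kHz sampling rate.
function renderPM(A, fc, fm, I, fs = 44100, seconds = 1) {
  const out = new Float32Array(fs * seconds);
  for (let n = 0; n < out.length; n++) {
    out[n] = pmSample(n / fs, A, fc, fm, I);
  }
  return out;
}

// With I = 0 the modulator has no effect and the output is a plain sine.
const plain = renderPM(1, 440, 220, 0);
```

Raising I from 0 progressively adds side frequencies around the carrier, which is exactly what the AMT controls below do through the modulation envelopes.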

Usage

The synth has four default operators (operator_A, operator_B, operator_C, operator_D) and two output busses (output_x and output_y) that can be mixed with a crossfade to obtain the output.

Keyboard

It is possible to control the synth using a MIDI keyboard connected to your computer. You can expand the sound control capabilities using the pitch wheel.

In addition, it is also possible to control the synth using the computer keyboard, which is mapped in the following way:

Operators B/C/D

Operators are the core of an FM synth. An envelope is connected to operators B/C/D, in order to modify the modulation index I and thus alter the spectrum. Their parameters can be changed from this module:

  • AMT: Changes the modulation amount of the operator
  • DLY: Changes the delay of the modulation envelope of the operator
  • ATK: Changes the attack of the modulation envelope of the operator
  • DEC: Changes the decay of the modulation envelope of the operator
  • SUS: Changes the sustain of the modulation envelope of the operator
  • REL: Changes the release of the modulation envelope of the operator
  • RTO: Changes the ratio of the frequency of the operator with respect to the fundamental

Operator A - Outenv

An output envelope controls the overall amplitude evolution of the sound over time. Along with this, some other parameters in this module allow you to further shape the output timbre of the synth:

  • RTO: Changes the ratio of the frequency of the operator A with respect to the fundamental
  • ATK: Changes the attack of the output envelope
  • DEC: Changes the decay of the output envelope
  • SUS: Changes the sustain of the output envelope
  • REL: Changes the release of the output envelope
  • DET: Changes the frequency deviation of all the operators
  • MIX: Changes the crossfade mix between channels X and Y
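
The MIX crossfade can be pictured as a simple blend of the two busses. A minimal sketch, assuming an equal-power curve (the synth's actual curve may differ):

```javascript
// Hypothetical equal-power crossfade between the X and Y busses,
// driven by a MIX value in [0, 1]: 0 is all X, 1 is all Y.
function crossfade(x, y, mix) {
  const gx = Math.cos(mix * Math.PI / 2); // X gain
  const gy = Math.sin(mix * Math.PI / 2); // Y gain
  return x * gx + y * gy;
}
```

An equal-power curve keeps the perceived loudness roughly constant across the sweep, since gx² + gy² = 1 for every mix position.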

Global - FX

It's possible to change some global parameters and add reverb and delay to the obtained sound:

  • OFK: Changes the operators feedback
  • TIM: Changes the delay time
  • DFK: Changes the delay feedback
  • DLY: Changes the delay send
  • SIZ: Changes the reverb size
  • REV: Changes the reverb send
  • VOL: Changes the output volume

Config

The synth allows you to save the sound you have achieved as a preset and reload it whenever you want:

  1. To save a preset, click on save preset after specifying its name: this locally downloads a file with the parameter values
  2. To load one of your presets, click on load preset and choose among the ones you saved locally
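
A preset of this kind is essentially a serialized snapshot of the parameter values. A minimal sketch of the idea, with hypothetical parameter names (the actual file format may differ):

```javascript
// Serialize a named preset to a JSON string; in the browser this
// string would be offered as a file download.
function savePreset(name, params) {
  return JSON.stringify({ name, params });
}

// Restore the parameter values from a previously saved preset file.
function loadPreset(json) {
  const preset = JSON.parse(json);
  return preset.params;
}

// The parameter names below are illustrative, not the synth's own.
const file = savePreset("bell", { opB_ratio: 3.5, envB_amount: 1.2 });
const restored = loadPreset(file);
```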

From the drop-down menu of this module you can choose among the connected MIDI devices to control the synth.

Spectrogram

Since in FM synthesis a small change of the parameters can radically affect the spectrum, a visual reference is useful to sculpt the desired sound. For this reason we equipped the interface with a spectrogram that visualizes the real-time spectral content of the output signal (taken before the fx bus). This gives a further hint on how to tune the FM parameters when aiming for a certain sound.
To fill the spectrogram, a 2048-sample FFT is performed over the frames extracted using a Hann window. It is possible to calculate the minimum frequency difference needed to discriminate two sinusoids:

    Δf = L · Fs / M

where:

  • Fs: sampling frequency
  • L: shaping factor of the window (= 4 in the case of the Hann window)
  • M: window length

Considering a sampling frequency of 44.1 kHz, and with our window choice, Δf is approximately 86 Hz.
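
This figure can be checked directly from the formula above:

```javascript
// Frequency resolution Δf = L · Fs / M for a windowed FFT:
// L = 4 for the Hann window, Fs = 44100 Hz, M = 2048 samples.
function minResolvableDeltaF(fs, windowLength, shapingFactor = 4) {
  return shapingFactor * fs / windowLength;
}

const deltaF = minResolvableDeltaF(44100, 2048); // ≈ 86.13 Hz
```

Two sinusoids closer than this will merge into a single lobe in the spectrogram display.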

Sound Features

From the graph above it is possible to see in real time the attributes of the sound obtained from the FM synthesis. This gives a hint on how to tune the parameters in order to achieve a specific sound quality. FM synthesis, in fact, is as powerful as it is difficult to master: since it can produce a very high variety of sounds with such a low number of parameters, it is often hard to predict the output's timbre. This module of the synth is meant to simplify this process.

To achieve this goal, audio feature analysis is involved: an audio feature is a measurement of a particular characteristic of an audio signal, and it gives insight into what the signal contains. Audio features can be measured by running an algorithm on an audio signal that returns a number, or a set of numbers, quantifying the characteristic the specific algorithm is intended to measure.

Before extracting the features, the audio is windowed and divided into frames of the same length using a Hann window. Then a 2048-sample FFT is performed for each frame and the descriptors are extracted through a specific algorithm. The feature descriptors are three:

Noisiness
It describes how noisy a sound is: the higher the stochastic component, the noisier the sound. The descriptor used to compute this feature is the spectral flatness, also known as Wiener entropy.

Richness
It describes how rich and dense the spectrum of a sound is: the more harmonics it has, the higher the spectral richness. The descriptor used to compute this feature is a unitless centroid, obtained by scaling the spectral centroid by the highest note in order to reduce its frequency dependence.

Inharmonicity
It describes how the overtones are arranged along the spectrum. This is not an audio feature but is inferred from the parameters. It is computed from both the ratio of each operator and the detune values: non-integer ratios increase inharmonicity, since harmonics with a non-integer ratio with respect to the fundamental generate an inharmonic sound, and high detune values increase inharmonicity as well, since they shift the operators away from exact harmonic relationships.

Meyda, which implements a selection of standardized audio features, was used for the extraction of spectral flatness and spectral centroid.
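
As a rough illustration of what these two descriptors measure (not Meyda's implementation), both can be computed from a magnitude spectrum as follows:

```javascript
// Spectral flatness: geometric mean over arithmetic mean of the
// magnitudes; close to 1 for noise, close to 0 for a pure tone.
// Computed in the log domain for numerical stability.
function spectralFlatness(magnitudes) {
  const eps = 1e-12;
  let logSum = 0, sum = 0;
  for (const m of magnitudes) {
    logSum += Math.log(m + eps);
    sum += m;
  }
  const n = magnitudes.length;
  return Math.exp(logSum / n) / (sum / n + eps);
}

// Spectral centroid: magnitude-weighted mean of bin frequencies, in Hz.
function spectralCentroid(magnitudes, fs, fftSize) {
  let weighted = 0, total = 0;
  for (let k = 0; k < magnitudes.length; k++) {
    weighted += (k * fs / fftSize) * magnitudes[k];
    total += magnitudes[k];
  }
  return total > 0 ? weighted / total : 0;
}
```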

Algorithms

The way the operators are arranged is called an algorithm, and it defines, together with the parameter values, the type of sound generated. There are eight algorithms available for this synth:

[Diagram: the eight available algorithms]

The solid lines represent the modulations between the operators, while the dotted ones represent the output signal path. A line which loops on an operator represents feedback: together with a high modulation index I and a short envelope, it can be used to add a stochastic component. Feedback is implemented by adding a delay node in the self-modulation path, and all the operators in a feedback configuration share the same feedback amount.
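
A feedback operator of this kind can be sketched with a one-sample delay in the self-modulation path (illustrative, outside the AudioWorklet machinery the synth actually uses):

```javascript
// Self-modulating operator: the previous output sample, held in a
// one-sample delay, modulates the current phase. Names and values
// here are illustrative.
function renderFeedbackOperator(freq, feedback, fs = 44100, length = 512) {
  const out = new Float32Array(length);
  let prev = 0; // the one-sample delay node in the feedback path
  for (let n = 0; n < length; n++) {
    const phase = 2 * Math.PI * freq * n / fs + feedback * prev;
    out[n] = Math.sin(phase);
    prev = out[n];
  }
  return out;
}

// With feedback = 0 this degenerates to a plain sine oscillator.
const sine = renderFeedbackOperator(440, 0);
```

As the feedback amount grows, the operator modulates itself ever more strongly and the output drifts from a sine toward a noisy, broadband signal.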

Architecture

The structure of the synth can be described by the following block diagram:

The audio engine is made of the operators, arranged according to the chosen algorithm. They are then summed and their amplitude is controlled by a single output envelope. At this point the signal is forked to two effect busses (delay and reverb), which are finally summed with the main one to obtain the output.

The audio engine has a polyphony of 4 voices, each with a particular structure based on the chosen algorithm. The following refers to the first algorithm:

Other details

This application was developed using JavaScript, with the Web Audio API at its core. For the sake of efficiency we decided not to use the standard AudioNode class to implement the operators, because of its limitations in terms of real-time computation. We instead built a custom module using the provided AudioWorklet interface.

In particular, an operator is built using an AudioWorkletNode and all the processing is carried out by the AudioWorkletProcessor. This solution allows the script to be executed in a separate audio thread, providing very low latency audio processing.
Moreover, to reduce latency and complexity even further, the sine computation used to produce the audio signals was implemented with a lookup table. This allowed us to overcome some of the limitations of the Web Audio API and thus provide 4-voice polyphony and a fairly smooth application.
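
A sine lookup table with linear interpolation can be sketched as follows (the table size here is an illustrative choice):

```javascript
// Precompute one cycle of a sine wave; the extra guard sample lets
// the interpolation read index i + 1 without wrapping.
const TABLE_SIZE = 4096;
const SINE_TABLE = new Float32Array(TABLE_SIZE + 1);
for (let i = 0; i <= TABLE_SIZE; i++) {
  SINE_TABLE[i] = Math.sin(2 * Math.PI * i / TABLE_SIZE);
}

// phase in [0, 1) maps to one full cycle; fractional phases are
// linearly interpolated between adjacent table entries.
function fastSin(phase) {
  const pos = (phase - Math.floor(phase)) * TABLE_SIZE;
  const i = Math.floor(pos);
  const frac = pos - i;
  return SINE_TABLE[i] * (1 - frac) + SINE_TABLE[i + 1] * frac;
}
```

The table trades a little memory for a cheap read-and-interpolate in the audio callback, where every operator of every voice needs a sine value per sample.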

The envelopes used throughout the application are based on the Fastidious-envelope-generator, an envelope generator for the Web Audio API. Head to its linked GitHub repository for reference. In addition to the features it provides, we added a delay stage, which specifies a time amount after which the envelope is triggered.
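
The resulting shape is a delay stage followed by the usual ADSR segments. A minimal sketch of such an envelope's value over time for a held note (illustrative, not the generator's actual code):

```javascript
// DADSR-style envelope value at time t (seconds) after note-on,
// for a note held in its sustain stage. Parameter names are
// illustrative.
function envelopeValue(t, { delay, attack, decay, sustain }) {
  if (t < delay) return 0;                    // delay: output held at 0
  t -= delay;
  if (t < attack) return t / attack;          // linear attack up to 1
  t -= attack;
  if (t < decay) return 1 + (sustain - 1) * (t / decay); // decay to sustain
  return sustain;                             // hold at the sustain level
}
```

On operators B/C/D this value scales the modulation index rather than the amplitude, which is why these envelopes shape timbre instead of loudness.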

Reverb and Delay effects are implemented using Tone.js, a well-known and widely used audio framework that wraps and expands the Web Audio API in order to do more while writing less code. The interconnection between a standard AudioNode and a ToneAudioNode is natively allowed by Tone.js, simply by assigning it the same AudioContext used in the application.

Finally, the project structure and code organization were mostly influenced by the DX7 Synth JS, which has been the main inspiration for our work. Thank you.

Parameters Range

| Name | Min | Max | Default |
| --- | --- | --- | --- |
| Env B Amount (AMT) | 0 | 3 | 1 |
| Env B Delay (DLY) | 0 | 1.5 | 0 |
| Env B Attack (ATK) | 0 | 0.8 | 0 |
| Env B Decay (DEC) | 0.03 | 1 | 0.2 |
| Env B Sustain (SUS) | 0 | 1 | 0.2 |
| Env B Release (REL) | 0.03 | 1.3 | 0.6 |
| Op B Ratio (RTO) | 0.5 | 12 | 1 |
| Env C Amount (AMT) | 0 | 3 | 0 |
| Env C Delay (DLY) | 0 | 1.5 | 0 |
| Env C Attack (ATK) | 0 | 0.8 | 0 |
| Env C Decay (DEC) | 0.03 | 1 | 0.2 |
| Env C Sustain (SUS) | 0 | 1 | 0.2 |
| Env C Release (REL) | 0.03 | 1.3 | 0.6 |
| Op C Ratio (RTO) | 0.5 | 12 | 1 |
| Env D Amount (AMT) | 0 | 3 | 0 |
| Env D Delay (DLY) | 0 | 1.5 | 0 |
| Env D Attack (ATK) | 0 | 0.8 | 0 |
| Env D Decay (DEC) | 0.03 | 1 | 1.2 |
| Env D Sustain (SUS) | 0 | 1 | 0.2 |
| Env D Release (REL) | 0.03 | 1.3 | 0.6 |
| Op D Ratio (RTO) | 0.5 | 12 | 1 |
| Op A Ratio (RTO) | 0.5 | 12 | 1 |
| Env Out Attack (ATK) | 0 | 1 | 0.01 |
| Env Out Decay (DEC) | 0.03 | 1 | 0.3 |
| Env Out Sustain (SUS) | 0 | 1 | 0.3 |
| Env Out Release (REL) | 0.03 | 1.3 | 0.1 |
| Detune (DET) | 0 | 0.2 | 0 |
| Out XY Mix (MIX) | 0 | 1 | 0 |
| Operators Feedback (OFK) | 0 | 3 | 0 |
| Delay Time (TIM) | 0 | 1 | 0.4 |
| Delay Feedback (DFK) | 0 | 1 | 0.4 |
| Delay Send (DLY) | 0 | 1 | 0 |
| Reverb Size (SIZ) | 0.4 | 1 | 0.8 |
| Reverb Send (REV) | 0 | 1 | 0 |
| Out Volume (VOL) | 0 | 0.12 | 0.06 |

References

Some reference material used for this project:

  • Chowning, J. (1973, September 1). "The Synthesis of Complex Audio Spectra by Means of Frequency Modulation". Journal of the Audio Engineering Society. Volume 21 Issue 7. 526-534.

  • Chowning, J. M., & Bristow, D. (1986). "FM Theory & Applications: By Musicians For Musicians". Tokyo: Yamaha Music Foundation.

  • Avanzini, F., & De Poli, G. (2012). "Algorithms for Sound and Music Computing".

  • Schubert, E., & Wolfe, J. (2006). "Does Timbral Brightness Scale with Frequency and Spectral Centroid?". Acta Acustica united with Acustica. Volume 92. 820-825.

Notes

This application was developed as a project for the "Sound Analysis, Synthesis and Processing" course at Politecnico di Milano (MSc in Music and Acoustic Engineering).

Luigi Attorresi
Federico Di Marzo
Federico Miotello