Exploring Pseudo-Random and Stochastic Signals in Digital Sound Synthesis
Random and stochastic signals in synthesis can be useful for implementing time-varying oscillators and/or control signals. A common issue in digital synthesizers and audio effects is that the sounds often differ significantly from those produced in the physical world, due to the precise, time-invariant nature of signal generation in the digital domain.
In computers, time-varying details that occur unpredictably in the physics of sound must be carefully sequenced, often to the point of exhausting the resources of the computer (and the programmer).
This problem has existed since the early days of computer music. Indeed, some of the earliest research in this field, conducted by Max Mathews and his colleagues at Bell Labs, explored sound control with nonlinear signals and algorithms in the early MUSIC-N family of synthesis languages. Examples include low-frequency noise generators such as RAN and RAH, which produce pseudo-random signals for controlling sound parameters like frequency and amplitude.
Using nonlinear signals in sound synthesis and control can thus be an effective way to generate sounds that are closer to natural ones than those produced by more standard digital synthesis techniques, while remaining computationally efficient. Here, I will implement circuits in the Faust programming language (GRAME) that discretely represent some pseudo-random and stochastic models useful for generating control signals.
White Noise Generator
The first fundamental building block for working with random numbers is the pseudo-random number generator, also known in the Digital Signal Processing (DSP) domain as digital white noise. When a random stream of numbers is generated and reproduced at the sample level, the resulting sound is typical white noise. White noise generators have been used in the field of computer music since its beginning: we can trace their use back to Max Mathews's 1963 article The Digital Computer as a Musical Instrument, the first known paper on computer music.
In general, a random number generator provides an N-bit binary number every time it is called. If the numbers were truly random, each bit, or any subset of the bits, would also be random. However, no algorithmic random number generator completely meets this criterion. In fact, most random number generation algorithms are numerical functions that take their previous output as input and generate a new output.
The initial input used when the generator is started is called the seed, and it can usually be any number except zero. If the same seed number is used on two different occasions, the series of numbers generated will also be the same.
Since the output numbers are integers with a finite number of bits, it is obvious that at some point in the sequence the seed will appear again. From this point forward, the sequence repeats itself.
An efficient random number generator will generate all or nearly all of the \(2^N\) different numbers that can be represented by an N-bit word before repeating.
Linear Congruential Generator
One of the most popular random number algorithms is the linear congruential method. The first model is attributed to the Lehmer random number generator (named after D. H. Lehmer), sometimes also referred to as the Park-Miller random number generator (after Stephen K. Park and Keith W. Miller). This is a type of linear congruential generator (LCG) that operates in the multiplicative group of integers modulo N.
The basic function is: \(R_{\text{new}} = (A \times R_{\text{old}} + C) \mod M\) where A and C are carefully chosen constants, and M is the largest possible number plus one for the chosen word length. The generator is completely specified by giving values for A, C, and the word length. For any given word length, there are values for A and C (besides the trivial ones, A = 1 and C = 1) that will give M values before repeating.
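As a quick worked illustration with deliberately tiny constants (useless for audio, but easy to check by hand), take A = 5, C = 3, M = 16 and a seed of 7. The recurrence \(R_{\text{new}} = (5 R_{\text{old}} + 3) \bmod 16\) produces 7, 6, 1, 8, 11, 10, 5, 12, 15, 14, 9, 0, 3, 2, 13, 4 and then returns to 7: all 16 possible values appear exactly once before the sequence repeats, which is the full-period behavior described above.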
In Faust, a typical linear congruential generator following this method can be written as follows:
import("stdfaust.lib");
// Pseudo-random noise with linear congruential generator (LCG)
noise(initSeed) = lcg ~ _ : (_ / m)
with{
a = 18446744073709551557; c = 12345; m = 2 ^ 31;
lcg(seed) = ((a * seed + c) + (initSeed - initSeed') % m);
};
process = noise(1212);
This algorithm will generate a new random value on every sample, corresponding to the sample rate of the system.
How the LCG Works:
Initialization:
- The generator starts with an initial seed (initSeed) that serves as the starting value for the sequence; the quality of randomness depends heavily on this seed. In the code, the term initSeed - initSeed' injects the seed only on the very first sample, since the one-sample delay starts at zero.
Recurrence Relation:
- The generator calculates each new random value from the previous one (seed), following \(R_{\text{new}} = (A \times R_{\text{old}} + C) \mod M\). Here:
  - A: is the multiplier.
  - C: is the increment.
  - M: is the modulus, which determines the range of possible output values.
Feedback:
- The output of the lcg function is fed back into itself through the feedback operator (~ in Faust), allowing it to generate a sequence of pseudo-random numbers.
Scaling:
- The result is divided by m (_ / m) to normalize the output to the range [0, 1].
Constants in the Code:
- a: a large, carefully chosen multiplier to ensure good randomness.
- c: a small constant increment, often chosen to avoid patterns in the generated numbers.
- m: the modulus, chosen as a power of two (here, \(2^{31}\)), which is common for computational efficiency.
Multiple Outputs
It is also possible to generate multiple outputs from the same linear congruential generator by adding an internal sequential operation, seqN, that repeats the LCG process in series and takes multiple taps: the LCG state advances N steps per sample, and each output taps a different intermediate stage of the iteration. Here’s an example of how you can create multiple outputs:
import("stdfaust.lib");
// Noise - Linear Congruential Generator - Multiple Outputs
multinoise(N, initSeed) = ((_ + (initSeed - initSeed') :
seqN(N, (_ * a + c) % m )) ~ _ : par(i, N, _ / m))
with{
// LCG constants
a = 18446744073709551557; c = 12345; m = 2 ^ 31;
// Sequential operations
seqN(N, OP) = _ <: seq(i, N, (OP <: _, _),
si.bus(i + 1)) : (_, !, si.bus(N - 1), !);
};
process = multinoise(4, 1212);
Low-Frequency Noise Generator
Since in Faust every function runs at the single sample level, our linear congruential generator will generate a new number at every sample. For this reason, if we want to use a noise generator as a low-frequency oscillator for control signals, we need to hold the values at slower rate intervals.
For this purpose, the first thing we need to do is build a sample-and-hold module that can hold a signal when triggered.
In Faust, we can write a sample-and-hold function as follows:
import("stdfaust.lib");
// a classic sample and hold
sah(x, t) = selector(t, _, x) ~ _
with{
// binary selector
selector(sel, x, y) = x * (1 - sel) + y * (sel);
};
How sah Works:
Initial State:
- The feedback loop (~ _) initializes the memory (_) to zero (or undefined, depending on implementation).
Sampling:
- When t (the trigger signal) becomes 1, the selector picks the value of x (the input signal) and stores it in the feedback loop (_).
Holding:
- When t is 0, the selector picks the previously stored value from _, effectively holding the last sampled value.
The feedback (~) allows the output to retain and “remember” the last sampled value. Without it, the function would only output the input directly without holding any value.
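As a minimal usage sketch (the edge helper and the button label are my own additions, not part of the model above), the same sah can freeze the value of a slow sine each time a GUI button is clicked:
import("stdfaust.lib");
// Minimal sah usage sketch: hold the value of a slow sine on each button click.
sah(x, t) = selector(t, _, x) ~ _
with{
    // binary selector
    selector(sel, x, y) = x * (1 - sel) + y * (sel);
};
// rising edge -> one-sample trigger (same idea as the dirac function used later)
edge(x) = (x - x') > 0;
process = os.osc(0.3), (button("hold") : edge) : sah;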
Now that we have our SAH module, we need a clock function that can generate triggers at regular intervals. For this purpose, we need triggers that last only one sample: if we drive this sample-and-hold with a trigger that stays high for more than one sample, the input gate remains open for the whole duration of the trigger, and the value keeps being overwritten before it settles in the feedback circuit.
We can write a single-sample trigger as follows:
import("stdfaust.lib");
// Synchronous pulse train (interval in ms)
metro(ms) = phasor0(1000 / max(1, ms)) : derivate + dirac
with{
// phasor that starts from 0
phasor0(f) = (_ <: _ + f, _) ~ _ % ma.SR : (!, _ / ma.SR);
// Dirac Impulse at Compile Time
dirac = 1 - (1 : mem);
// first derivative
derivate(x) = x < x';
};
process = metro(100);
How the Synchronous Pulse Train (metro) Works:
The metro function generates a pulse train whose period is specified in milliseconds.
Main Function:
- metro(ms) takes an input in milliseconds (ms) to define the pulse interval.
- It converts the interval in milliseconds to a frequency in Hz using 1000 / max(1, ms). This ensures the interval is at least 1 ms, so the frequency stays positive and division by zero is avoided.
Phasor Generation:
phasor0(f) = (_ <: _ + f, _) ~ _ % ma.SR : (!, _ / ma.SR);
Creates a repeating ramp signal (phasor) that oscillates between 0 and 1.
• _ <: _ + f increments the current phase value by f (the frequency).
• ~ _ uses feedback to store the current phase value across samples.
• % ma.SR resets the phase to 0 when it reaches the sample rate, creating a looping behavior.
• _ / ma.SR normalizes the ramp signal to the range [0, 1] by dividing the output by the sample rate.
Derivative Detection:
derivate(x) = x < x';
Detects when the phasor resets to 0 by comparing the current value (x) with its previous value at one sample of delay (\(z^{-1}\), written ' in Faust).
Dirac Impulse:
dirac = 1 - (1 : mem);
(mem and ' are the same one-sample delay.) Generates a single impulse at compile time, ensuring an initial pulse is present.
The pulse train is generated by combining the output of the derivative (derivate) with the Dirac impulse: metro(ms) = phasor0(1000 / max(1, ms)) : derivate + dirac;
where:
• phasor0 creates the ramp signal.
• derivate detects when the ramp resets.
• dirac ensures an initial pulse is present.
• The combination produces a series of synchronized 1 sample pulses.
Now that our modules are complete, we can connect them together to sample the noise input at a low sampling rate, building the low-frequency noise generator as follows:
process = noise(1212), metro(100) : sah;
Or by putting everything together in a single function:
import("stdfaust.lib");
// noise sampled with a classic sample and hold
sahNoise(seed, f) = selector(pulseTrain, _, noise(seed)) ~ _
with{
// Pseudo-random noise with linear congruential generator (LCG)
noise(initSeed) = lcg ~ _ : (_ / m)
with{
a = 18446744073709551557; c = 12345; m = 2 ^ 31;
lcg(seed) = ((a * seed + c) + (initSeed - initSeed') % m);
};
// binary selector
selector(sel, x, y) = x * (1 - sel) + y * (sel);
// Dirac Impulse at Compile Time
dirac = 1 - (1 : mem);
derivate(x) = x < x';
phasor0 = (_ <: _ + f, _) ~ _ % ma.SR : (!, _ / ma.SR);
pulseTrain = phasor0 : derivate + dirac;
};
process = sahNoise(1212, 100);
This will sample random values every 100 milliseconds.
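For comparison, the Faust standard library ships similar building blocks; assuming I read noises.lib correctly, something along these lines should behave comparably to the generator above (using the library noise rather than our custom LCG):
import("stdfaust.lib");
// Library-based equivalent (sketch): hold-based low-frequency noise at 10 Hz,
// i.e. a new random value roughly every 100 ms.
process = no.lfnoise0(10);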
If we decide to use the same code and trigger the SAH manually, we obtain a random number generator.
A new random number is generated each time the user clicks the GUI button.
import("stdfaust.lib");
// random number generator
random(range, seed, trigger) = ((noise(seed) : abs), dirac(trigger)) :
sah * range
with{
// transform a constant to 1 sample trigger
dirac(x) = (x - x') > 0;
// Pseudo-random noise with linear congruential generator (LCG)
noise(initSeed) = lcg ~ _ : (_ / m)
with{
a = 18446744073709551557; c = 12345; m = 2 ^ 31;
lcg(seed) = ((a * seed + c) + (initSeed - initSeed') % m);
};
// a classic sample and hold
sah(x, t) = selector(t, _, x) ~ _
with{
// binary selector
selector(sel, x, y) = x * (1 - sel) + y * (sel);
};
};
process = random(100, 1212, button("trigger"));
where abs ensures only positive values by applying the absolute value, range is a multiplication factor that determines the output range, extending it from [0, 1] to [0, range], and dirac(x) = (x - x') > 0; transforms a trigger that lasts longer than a single sample into a single-sample impulse.
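As a hedged usage sketch (reusing the random definition above in the same way the article reuses noise and metro earlier; the mapping to pitch is only an illustration), the held value can directly drive an oscillator, producing a new pitch on every click:
// assumes the random() definition from the previous example is in scope
process = os.osc(100 + random(500, 1212, button("new pitch")));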
Asynchronous Low-Frequency Noise Generator
We now have a low frequency noise generator, but the values are output at a synchronous clock rate. This means that while the sequence of values is unpredictable, the timing of updates remains consistent.
If we want to make our noise generator more complex, we can apply the principles we’ve learned so far to create an asynchronous clock for our low-frequency noise by mixing these elements together.
What we aim to achieve now is the generation of a clock with asynchronous timing, so that the output values from the sample and hold can change at unpredictable moments.
The following Faust code implements a random impulse generator, where the impulses occur at irregular intervals based on a pseudo-random number generator. The intervals are constrained to a range between ms1 and ms2, specified in milliseconds.
import("stdfaust.lib");
// random impulse generator / ms1 & ms2 = range
randometro(seed, ms1, ms2) = randomtrigger
with{
// Pseudo-random noise with linear congruential generator (LCG)
noise(initSeed) = lcg ~ _ : (_ / m)
with{
a = 18446744073709551557; c = 12345; m = 2 ^ 31;
lcg(seed) = ((a * seed + c) + (initSeed - initSeed') % m);
};
// Dirac Impulse at Compile Time
dirac = 1 - (1 : mem);
derivate(x) = x < x';
phasor0(f) = (_ <: _ + f, _) ~ _ % ma.SR : (!, _ / ma.SR);
pulseTrain(f) = phasor0(f) : derivate;
// a classic sample and hold
sah(t, x) = selector(t, _, x) ~ _
with{
// binary selector
selector(sel, x, y) = x * (1 - sel) + y * (sel);
};
msMin = min((1000 / max(1, ms1)), (1000 / max(1, ms2)));
msMax = max((1000 / max(1, ms1)), (1000 / max(1, ms2)));
randomtrigger = ((_ + dirac), abs(noise(seed)) * (msMax - msMin) + msMin :
sah : pulseTrain) ~ _;
};
process = randometro(1212, 100, 4000), randometro(1234, 100, 4000);
The function randometro(seed, ms1, ms2) takes as its inputs a seed value for the LCG and ms1, ms2 as the bounds of the random interval range, given in milliseconds.
msMin = min((1000 / max(1, ms1)), (1000 / max(1, ms2)));
msMax = max((1000 / max(1, ms1)), (1000 / max(1, ms2)));
This converts the input intervals ms1 and ms2 (in milliseconds) into corresponding frequencies in Hz.
The variables msMin and msMax represent the lower and upper bounds, ensuring consistent interval limits derived from ms1 and ms2.
Beyond the functions we have already discussed, the core of the random impulse train is defined as:
randomtrigger = ((_ + dirac), abs(noise(seed)) * (msMax - msMin) + msMin :
sah : pulseTrain) ~ _;
The function generates a random interval using noise(seed), scaled to the range [msMin, msMax].
Here, msMin serves as the base frequency offset, while (msMax - msMin) scales the noise so that, once msMin is added as the base offset, the resulting values never exceed the intended upper limit.
Next, the sah function samples and holds the random value, ensuring it remains constant during the specified interval. Finally, the pulseTrain function takes the random value as its frequency, varying it over time based on the newly generated random number. This output is fed back into the sah function, which triggers the next random value and consequently alters the frequency, creating a dynamic and unpredictable sequence.
We now have a random impulse generator that we can use as a clock source for the low-frequency noise generator, making it operate at asynchronous time intervals. This can be used as a non-linear control signal.
Below is the complete code that brings it all together:
import("stdfaust.lib");
// SAH noise at random intervals / ms1 & ms2 = range
sahNoiserandom(seed, ms1, ms2) = selector(randometro, _, noise(seed)) ~ _
with{
// Pseudo-random noise with linear congruential generator (LCG)
noise(initSeed) = lcg ~ _ : (_ / m)
with{
a = 18446744073709551557; c = 12345; m = 2 ^ 31;
lcg(seed) = ((a * seed + c) + (initSeed - initSeed') % m);
};
// binary selector
selector(sel, x, y) = x * (1 - sel) + y * (sel);
// random impulse generator / ms1 & ms2 = range
randometro = randomtrigger
with{
// Dirac Impulse at Compile Time
dirac = 1 - (1 : mem);
derivate(x) = x < x';
phasor0(f) = (_ <: _ + f, _) ~ _ % ma.SR : (!, _ / ma.SR);
pulseTrain(f) = phasor0(f) : derivate;
// a classic sample and hold
sah(t, x) = selector(t, _, x) ~ _
with{
// binary selector
selector(sel, x, y) = x * (1 - sel) + y * (sel);
};
msMin = min((1000 / max(1, ms1)), (1000 / max(1, ms2)));
msMax = max((1000 / max(1, ms1)), (1000 / max(1, ms2)));
randomtrigger = ((_ + dirac), abs(noise(seed * 2)) * (msMax - msMin) +
msMin : sah : pulseTrain) ~ _;
};
};
process = sahNoiserandom(1212, 100, 4000);
Stochastic Signals
In Digital Signal Processing (DSP), the concept of stochastic processes is often used to describe signals that evolve unpredictably over time. While random signals (such as noise) can be unpredictable, stochastic processes are defined by probabilistic models that describe the likelihood of different outcomes, making their behavior dynamic but not entirely predetermined.
For example, random walks and Brownian motion are both types of stochastic processes commonly used to model natural, physical randomness, and they lead to outputs that feel more connected to real-world phenomena.
Stochastic processes are useful in applications where randomness and gradual changes are required, such as in Brownian motion, fractal-based noise, and Markov chains. These processes introduce correlations between successive values, which is particularly useful for modeling behaviors that evolve smoothly rather than in abrupt, discontinuous jumps.
A stochastic process, in fact, refers to a system that is nondeterministic or unpredictable: it evolves in a way that cannot be precisely predicted, even though it is governed by probabilistic rules.
On the other hand, random typically refers to something without any identifiable pattern. Random signals evolve in time in an unpredictable manner, and their individual values cannot be predicted with certainty. However, as we observed with pseudo-random generators, the average properties of random signals are deterministic. Since total unpredictability cannot be computed, the terms “stochastic” and “random” are sometimes used interchangeably.
A Case Study of the Random Walk: the drunk Object in Max/MSP
One of the simplest stochastic processes is the random walk, a term first introduced by Karl Pearson in 1905. The random walk is the formalization of the idea of taking successive steps in random directions, and it is the simplest Markov process; its most well-known mathematical representation is the Wiener process, named after Norbert Wiener. A Wiener process, also known as Brownian motion, is a Gaussian stochastic process in continuous time with independent increments. It is used to model Brownian motion itself as well as various random phenomena observed in applied mathematics, finance, and physics.
The term “Brownian motion” specifically refers to the erratic motion of particles small enough (with diameters on the order of a micrometer) to be unaffected by gravity, present in fluids or gaseous suspensions (such as smoke), and observable under a microscope. The phenomenon was discovered in the early 19th century by the Scottish botanist Robert Brown and was later modeled in 1905 by the German theoretical physicist Albert Einstein.
In the Max/MSP programming environment, an example of Brownian motion for control signals can be found in the drunk object.
The drunk walk model is a stochastic process that nominally follows the path of a drunk person who has just left a bar. At each step, he randomly chooses to move either to the left or to the right, without knowing where he came from or where he is going. After the first step, his next move is chosen randomly, either left or right, and this pattern continues. In the mathematical model, the drunk never sobers up, and an interesting feature of this process is that the drunk tends to return to his starting point over time.
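In discrete form, the process can be summarized (in my own notation) as \(x[n] = x[n-1] + s[n]\) at every trigger, where \(s[n]\) is a random step whose magnitude is bounded by the step size and the result is kept within a maximum range; this is the scheme the Faust port below follows.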
I have ported this model to Faust. Below is the Faust code that implements the random walk process.
import("stdfaust.lib");
// random walk generator : variable steps and max value
drunk(seed, maxvalue, stepsize, trigger) = noise(seed), (trigger : dirac) :
sah * (abs(stepsize) + 1) : int * (trigger : dirac) : + ~ _ :
foldInt(abs(maxvalue))
with{
// transform a constant to 1 sample trigger
dirac(x) = (x - x') > 0;
// pseudo-random noise with linear congruential generator (LCG)
noise(initSeed) = lcg ~ _ : (_ / m)
with{
a = 18446744073709551557; c = 12345; m = 2 ^ 31;
lcg(seed) = ((a * seed + c) + (initSeed - initSeed') % m);
};
// a classic sample and hold
sah(x, t) = selector(t, _, x) ~ _
with{
// binary selector
selector(sel, x, y) = x * (1 - sel) + y * (sel);
};
// fold at max Int value and 0
foldInt(maxv, x) = maxv - abs((abs(x) % (2 * maxv)) - maxv);
};
process = (os.phasor(1, 10) - 0.1 < 0.0) : drunk(1212, 100, 10);
In this code, at every trigger input, the sample-and-hold noise provides an integer value scaled by the step size, similar to how we implemented the random impulse generator. The key difference is that the value is accumulated inside the integrator (+ ~ _): at each new trigger, the newly generated value is added to (or subtracted from) the value stored in the integrator.
To achieve this, we need to ensure that when a new number is generated by the sah, it lasts only one sample (int * (trigger : dirac)). If we don’t implement this, the addition would be repeated on every following sample, causing unintended results.
Finally, the foldInt function: foldInt(maxv, x) = maxv - abs((abs(x) % (2 * maxv)) - maxv); ensures that only values within the range of maxvalue are output. If the range is exceeded, a foldover operation is performed, folding the value back into the defined range.
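As a quick numerical check of the fold, with maxvalue = 100: an accumulated value of 130 becomes \(100 - |(130 \bmod 200) - 100| = 70\), 250 becomes 50, and -30 becomes 30, so the walk is reflected back from both boundaries (0 and maxvalue) rather than wrapped around or clipped.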
An important aspect of creating time-varying control signals is obtaining a smooth transition between successive values. A random walk signal like the one generated by the drunk object creates, at the sample level in Faust, discontinuities in the signal. These discontinuities can lead to issues such as aliasing or, worse, an overall lack of definition in the behavior of the controlled carrier signal, due to the poor temporal consistency of the control signal.
An alternative, simpler way to program a random walk in Faust, which also addresses these problems, is to change the direction of the signal at every sample: we impose a condition on the noise generator so that it produces only the binary values -1 and +1. To avoid discontinuities, we can then combine integration with smoothing techniques such as linear interpolation or filters.
import("stdfaust.lib");
// random walk generator: binary noise integrated, folded, and smoothed
randomWalk(seed, speed, smooth, trigger) = binaryNoise(seed) / ma.SR : + ~ _ : _ * speed : wavefolding : fi.lowpass(1, 1 / smooth)
with{
// transform a constant to 1 sample trigger
dirac(x) = (x - x') > 0;
// a classic sample and hold
sah(x, t) = selector(t, _, x) ~ _
with{
// binary selector
selector(sel, x, y) = x * (1 - sel) + y * (sel);
};
// pseudo-random binary noise with linear congruential generator (LCG)
binaryNoise(initSeed) = lcg ~ _ : (_ / m) : condition
with{
a = 18446744073709551557; c = 12345; m = 2 ^ 31;
lcg(seed) = ((a * seed + c) + (initSeed - initSeed') % m);
condition = _ <: (_ > 0.0) + (_ <= 0.0) * - 1;
};
// WAVEFOLDING
wavefolding = intreset <: trifunctionpos,trifunctionneg :> + : _ * 2
with{
intreset(x)= x-int(x);
triconditionpos(x) = (x < 0.5) * (x) + ((x > 0.5) * ((x * -1) +1));
trifunctionpos(x) = (x > 0) * (x) : triconditionpos;
triconditionneg(x) = (x > -0.5) * (x) + ((x < -0.5) * ((x * -1) -1));
trifunctionneg(x) = (x < 0) * (x) : triconditionneg;
};
};
process = (os.phasor(1, 100) - 0.1 < 0.0) : randomWalk(1212, 10, 2);
This last code is an example of how one can generate a non-linear, fully functioning control signal from a stochastic model like the random walk, or from other generators, to obtain complex behaviors. The concern with continuity goes back to the RAN generator in Music V: in his book on computer music (Mathews, The Technology of Computer Music), Max Mathews explains how he made the random function continuous using linear interpolation methods.
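As a rough sketch of that idea in Faust (my own adaptation rather than Mathews's original code; it leans on standard-library functions such as os.lf_sawpos, ba.sAndH, and no.noise), one can crossfade linearly between the last two held random values over each period:
import("stdfaust.lib");
// Linearly interpolated low-frequency noise (sketch): at each ramp reset a new
// target value is sampled, and the output slides from the previous target to
// the new one over the period, avoiding the step discontinuities of pure SAH.
lfnoiseLin(f) = y0 + (y1 - y0) * ph
with{
    ph   = os.lf_sawpos(f);           // 0..1 ramp at f Hz
    trig = ph < ph';                  // one-sample pulse at each ramp reset
    y1   = no.noise : ba.sAndH(trig); // current target value
    y0   = y1' : ba.sAndH(trig);      // previous target value
};
process = lfnoiseLin(5);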
The calibration of non-linear signal processing is an art in itself, involving a range of functional models beyond the one described here. It requires studying and refining the response of control signals and is closely tied to the type of analysis and behavior a composer or programmer seeks to achieve in their algorithm. Given the importance of this topic, it will require further focus, which I will explore in more depth in future posts.