Introduction
While FAUST provides optimized wavetable-based oscillators (os.osc), understanding explicit mathematical approximations of sinusoidal functions remains valuable for several reasons: educational insight into DSP fundamentals, scenarios requiring zero memory overhead, and contexts where algorithmic synthesis is preferred over lookup tables.
Here we examine three distinct approximation strategies—Taylor series expansion, polynomial waveshaping, and rational approximation—with focus on their computational cost and practical implementation trade-offs.
The Fundamental Challenge
A phasor generates a linear ramp from 0 to 1. Converting this to a sinusoid requires either:
- Wavetable lookup (~5-6 operations: multiplication, cast, memory reads, interpolation)
- Mathematical approximation (variable cost depending on method)
Commercial synthesizers almost universally employ wavetables because the memory cost (roughly 64 KB for a high-quality table) is negligible compared to the performance gain. However, algorithmic approaches offer instructive alternatives.
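To make the wavetable cost concrete, here is a minimal Python sketch of phasor-to-sine lookup with linear interpolation. The table size (2048) and interpolation scheme are illustrative assumptions, not a production design:

```python
import math

# Illustrative 2048-entry sine table; a production table would be sized
# for the desired quality/memory trade-off (e.g. 16K float32 samples ~ 64 KB).
TABLE_SIZE = 2048
TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def table_sine(phase):
    """Map a phasor value in [0, 1) to a sine sample.

    Per-sample cost: 1 multiply (scale), 1 cast, 2 memory reads,
    and a linear interpolation (1 subtract, 1 multiply, 1 add).
    """
    pos = phase * TABLE_SIZE            # multiply
    i = int(pos)                        # cast (truncation)
    frac = pos - i                      # fractional position
    a = TABLE[i % TABLE_SIZE]           # memory read
    b = TABLE[(i + 1) % TABLE_SIZE]     # memory read
    return a + frac * (b - a)           # linear interpolation
```

Counting the operations in the function body gives the ~5–6 operations cited above, plus the one-time table initialization.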
Triangle Wave Conversion
The first processing step is converting the phasor to a triangle wave that oscillates between -1 and +1. This normalization simplifies the subsequent sine approximation.
tri(x) = (x < 0.5) * (4 * x - 1) + (x >= 0.5) * (3 - 4 * x);

Assembly profile:
- 1 comparison (x < 0.5)
- 1 comparison (x >= 0.5)
- 4 multiplications
- 3 additions/subtractions
Total: ~6-8 operations (comparisons compile to conditional moves or masked operations in SIMD contexts)
This triangle function is computationally lean and avoids expensive operations like floor() or modulo that higher-level abstractions (ma.frac) might introduce.
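The mapping is easy to check numerically. A direct Python transcription of tri (for verification only, not a FAUST replacement):

```python
def tri(x):
    # Phasor [0, 1) to triangle [-1, 1]:
    # rising segment on [0, 0.5), falling segment on [0.5, 1)
    if x < 0.5:
        return 4 * x - 1
    return 3 - 4 * x
```

In compiled FAUST both comparison arms are evaluated and masked rather than branched, which is why the profile above counts both comparisons.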
Method 1: Taylor Series Expansion
The Taylor series for sine around zero is:
sin(x) = x - x³/6 + x⁵/120 - x⁷/5040 + ...
A naive implementation would recompute powers repeatedly. The optimized version uses Horner's method to minimize operations:
sintaylor(x) = x * (1 - x * x * (1.0/6 - x * x * (1.0/120 - x * x / 5040)));

Assembly profile:
- 1 multiplication to compute x²
- 5 additional multiplications (reusing x²)
- 3 subtractions
- Divisions by constants are precomputed by the compiler
Total: ~6 multiplications + 3 additions
Precision: With 4 terms (up to x⁷), THD (Total Harmonic Distortion) is approximately 0.01%. Adding one more term (x⁹/362880) reduces this to ~0.0001%.
Scaling factor: In the Taylor oscillator the triangle output is multiplied by 0.5 * PI, so the argument spans [-π/2, π/2], the range over which the four-term series achieves the precision quoted above. (The polynomial method below instead scales the triangle by 0.90, a practical compromise that slightly reduces amplitude but improves precision near the interval edges.)
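The precision claim can be spot-checked outside FAUST. A small Python script (the 2001-point sampling grid is an arbitrary choice) compares the Horner-form series against math.sin over [-π/2, π/2]:

```python
import math

def sintaylor(x):
    # Horner form of x - x^3/6 + x^5/120 - x^7/5040
    x2 = x * x
    return x * (1 - x2 * (1.0/6 - x2 * (1.0/120 - x2 / 5040)))

# Maximum absolute error over [-pi/2, pi/2], sampled on a 2001-point grid
grid = [i / 1000 * math.pi / 2 for i in range(-1000, 1001)]
max_err = max(abs(sintaylor(x) - math.sin(x)) for x in grid)
```

The worst case lands at the interval edges x = ±π/2, where the truncation error is dominated by the dropped x⁹/362880 term, about 1.6 × 10⁻⁴ in absolute amplitude.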
Method 2: Polynomial Waveshaping (Minimax Approximation)
Unlike Taylor series (which converges around a single point), minimax polynomials are optimized to minimize the maximum error across an entire interval using Chebyshev or Remez exchange algorithms.
sinpoly(x) = x * (1.5708 - 0.645964 * x * x);

Assembly profile:
- 1 multiplication (x²)
- 2 multiplications
- 1 subtraction
Total: ~3 multiplications + 1 addition
This is half the computational cost of the Taylor series implementation while maintaining comparable accuracy for audio applications. The coefficients (1.5708, 0.645964) are empirically derived to minimize error in the [-π/2, π/2] range.
Trade-off: Fewer terms mean this approximation degrades faster outside the optimal range, but for normalized triangle input, the error remains below perceptual thresholds.
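One way to put a number on the residual error is to measure the harmonic distortion of the full triangle-plus-polynomial chain. A Python sketch (DFT-based; the cycle length and harmonic count are arbitrary choices):

```python
import cmath
import math

def tri(x):
    # Phasor [0, 1) to triangle [-1, 1]
    return 4 * x - 1 if x < 0.5 else 3 - 4 * x

def sinpoly(x):
    return x * (1.5708 - 0.645964 * x * x)

# One full cycle of the polynomial oscillator
N = 1024
wave = [sinpoly(tri(n / N) * 0.90) for n in range(N)]

def dft_mag(signal, k):
    """Magnitude of the k-th DFT bin of a real signal."""
    return abs(sum(v * cmath.exp(-2j * math.pi * k * n / len(signal))
                   for n, v in enumerate(signal)))

fundamental = dft_mag(wave, 1)
harmonic_power = sum(dft_mag(wave, k) ** 2 for k in range(2, 20))
thd = math.sqrt(harmonic_power) / fundamental
fundamental_amp = 2 * fundamental / N   # peak amplitude of the fundamental
```

Because the waveform is half-wave symmetric, even harmonics cancel; the residual energy sits in the odd harmonics (3rd, 5th, 7th), and the measured THD comes out in the low single-digit percent range, with a fundamental amplitude a little above 0.9 due to the 0.90 input scaling.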
Method 3: Bhaskara’s Rational Approximation
Bhaskara I (7th century Indian mathematician) derived a rational approximation:
sin(x) ≈ 16x(π - x) / (5π² - 4x(π - x)) for x ∈ [0, π]
Implementation requires careful input conditioning for positive and negative polarities:
sinbhaskara(x) = (16 * x * (PI - x)) / (5 * PI * PI - 4 * x * (PI - x));
sineoscbhaskara(x) = ((x - 0.5) * 2.0) <:
    sinbhaskara(abs(_) * PI) * (((_ < 0.0) * -1.0) + (_ > 0.0)) * 0.7;

Assembly profile:
- Input normalization: 2 multiplications, 1 subtraction
- Absolute value: 1 comparison + conditional
- Bhaskara formula: 6 multiplications, 3 additions, 1 division
- Sign restoration: 2 comparisons, 2 multiplications
Total: ~9 multiplications + 1 division + 4 comparisons
Characteristics:
- Division operation is expensive (~10-20 cycles on modern CPUs vs ~3-5 for multiplication)
- Maximum error is ~0.0016 over [0, π] (historically remarkable for its simplicity)
- The 0.7 scaling factor is an output level adjustment; the formula itself tracks the true sine to within the error above, so the factor provides headroom rather than error correction
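The quoted ~0.0016 bound checks out numerically for the standard form of Bhaskara's formula. A quick Python verification (the 10001-point grid resolution is arbitrary):

```python
import math

def sinbhaskara(x):
    # Standard form of Bhaskara I's approximation, valid on [0, pi]
    return (16 * x * (math.pi - x)) / (5 * math.pi ** 2 - 4 * x * (math.pi - x))

# Maximum absolute error over [0, pi] on a 10001-point grid
max_err = max(abs(sinbhaskara(i * math.pi / 10000) - math.sin(i * math.pi / 10000))
              for i in range(10001))
```

A notable property: the approximation is exact at x = 0, π/6, π/2, 5π/6, and π, with the worst error occurring between those points.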
Practical Considerations
When to use each method:
- Polynomial waveshaping: Best balance for real-time DSP. Minimal CPU cost with acceptable precision.
- Taylor series: Preferred when precision is critical and extra terms can be added as needed.
- Bhaskara: Historical interest and educational value. The division operation makes it less suitable for high-polyphony contexts.
The performance difference becomes critical in resource-constrained environments (embedded systems, mobile devices, or when CPU headroom is needed for filters, effects, and modulation).
Code Implementation
import("stdfaust.lib");
// High-precision π constant
PI = 3.141592653589793238462643383279502884197169399375105820974944592307816;
// Triangle wave converter (phasor [0,1] to triangle [-1,1])
tri(x) = (x < 0.5) * (4 * x - 1) + (x >= 0.5) * (3 - 4 * x);
// Method 1: Taylor series (Horner's method, 4 terms)
sintaylor(x) = x * (1 - x * x * (1.0/6 - x * x * (1.0/120 - x * x / 5040)));
// Or, equivalently, without Horner's method (for comparison):
sintaylor2(x) = x - (x * x * x) / 6 + (x * x * x * x * x) / 120 -
(x * x * x * x * x * x * x) / 5040;
sineosctaylor(x) = sintaylor(tri(x) * 0.5 * PI);
// Output: Taylor oscillator (active process)
process = os.phasor(1, si.smoo(hslider("F", 440, 20, 1000, 0.001))) :
sineosctaylor * si.smoo(hslider("G", 0.8, 0, 1, 0.001));
// Method 2: Minimax polynomial approximation
sinpoly(x) = x * (1.5708 - 0.645964 * x * x);
sineoscpoly(x) = sinpoly(tri(x) * 0.90);
// Output: polynomial oscillator (uncomment to use)
/*
process = os.phasor(1, si.smoo(hslider("F", 440, 20, 1000, 0.001))) :
sineoscpoly * si.smoo(hslider("G", 0.8, 0, 1, 0.001));
*/
// Method 3: Bhaskara's rational approximation
sinbhaskara(x) = (16 * x * (PI - x)) / (5 * PI * PI - 4 * x * (PI - x));
sineoscbhaskara(x) = ((x - 0.5) * 2.0) <:
sinbhaskara(abs(_) * PI) * (((_ < 0.0) * -1.0) + (_ > 0.0)) * 0.7;
// Output: Bhaskara oscillator (uncomment to use)
/*
process = os.phasor(1, si.smoo(hslider("F", 440, 20, 1000, 0.001))) :
sineoscbhaskara * si.smoo(hslider("G", 0.8, 0, 1, 0.001));
*/

Conclusion
While wavetable oscillators remain the industry standard for production environments, understanding these approximation methods provides insight into the computational mechanics of DSP. The polynomial waveshaping approach offers an excellent compromise between efficiency and precision, requiring only ~3× the operations of a wavetable while eliminating memory requirements entirely.
For educational purposes, experimenting with these methods in FAUST reveals how mathematical abstractions translate to actual CPU cycles—a critical skill for optimizing real-time audio processing systems.