🎸 Neural IR

SoloDallas Storm VST3

A complete staged implementation plan for capturing the SoloDallas Storm pedal as a neural network impulse response — recording 27 physical measurements, training a Temporal Convolutional Network, exporting to ONNX, and shipping a cross-platform VST3 plugin via JUCE.

🧠 The Approach: Neural IR Capture

Instead of trying to model the Storm's circuitry analytically, this project captures its sonic character through neural impulse response modeling. We record the physical hardware's response to 27 precisely controlled input conditions, train a TCN to learn the transfer function, and ship a plugin whose output closely matches the hardware.

The plan is broken into 6 self-contained stages. Each stage has a clear deliverable and can be stopped and resumed independently — progress is never lost between sessions.

Hardware Target:     SoloDallas Storm Pedal
Model Architecture:  Temporal Convolutional Network (TCN)
Training Framework:  PyTorch (MPS / GPU)
Inference Format:    ONNX Runtime
Plugin Framework:    JUCE (VST3 / AU / AAX)
Target Platforms:    macOS / Windows / Linux

Stage 01 Data Capture & Preparation Weeks 1–2

Record 27 impulse responses from the physical Storm at various parameter settings using log chirp sweeps. The recording matrix covers the full parameter space of the pedal.

🎛️ Recording setup: Connect the Storm to an audio interface (24-bit / 48 kHz) and let it warm up for 15 minutes before recording. Generate log chirp sweeps (1–20 kHz, 10–30 s each) using Audacity or Reaper.
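If you prefer to generate the sweep in code rather than in Audacity/Reaper, an exponential (Farina) sine sweep takes only a few lines of NumPy. This is a sketch: the `log_chirp` helper is hypothetical, and the 20 Hz–20 kHz range shown is illustrative — match it to the sweep range you actually record with.

```python
import numpy as np

def log_chirp(f1, f2, duration, sr=48000):
    """Exponential (log) sine sweep from f1 to f2 Hz (Farina method)."""
    t = np.arange(int(duration * sr)) / sr
    k = np.log(f2 / f1)
    # Phase grows so instantaneous frequency sweeps exponentially from f1 to f2
    return np.sin(2 * np.pi * f1 * duration / k * (np.exp(t / duration * k) - 1.0))

sweep = log_chirp(20.0, 20000.0, 10.0)  # 10 s sweep at 48 kHz
```

Write `sweep` to a 24-bit WAV with your library of choice before playing it through the pedal.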

Recording Matrix — 3 × 3 × 3 = 27 Files

Parameter   Low        Mid        High
Drive       0          50         100
Tone        0 (dark)   50 (mid)   100 (bright)
Volume      30         60         100

Naming Convention

storm_drive-[0-50-100]_tone-[0-50-100]_vol-[30-60-100].wav

# Examples:
storm_drive-0_tone-0_vol-30.wav      # minimum settings
storm_drive-100_tone-100_vol-100.wav  # maximum settings
storm_drive-50_tone-50_vol-60.wav     # midpoint
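The full 27-file matrix can be enumerated programmatically, which also doubles as a checklist during the recording session. A minimal sketch (the `levels` dict simply mirrors the recording matrix above):

```python
# Knob levels from the 3 × 3 × 3 recording matrix
levels = {"drive": [0, 50, 100], "tone": [0, 50, 100], "vol": [30, 60, 100]}

filenames = [
    f"storm_drive-{d}_tone-{t}_vol-{v}.wav"
    for d in levels["drive"]
    for t in levels["tone"]
    for v in levels["vol"]
]
```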

Stage 1 Deliverables

Stage 02 Dataset Creation Weeks 2–3

Extract impulse responses from the recorded sweeps and prepare the data for PyTorch training. Overlapping audio chunks ensure the model learns transient behavior effectively.

01 — IR Extraction: Run ir_extract.py to deconvolve the sweep signals into impulse responses. Trim to 500 ms. Output → data/impulse_responses/
02 — Train/Val/Test Split: Split the data: train (70%) / val (15%) / test (15%).
03 — Chunking: Create overlapping chunks: 16,384-sample windows with 50% overlap.
04 — Output Dataset: input_audio.npy + output_audio.npy + metadata.json → data/training_dataset/
05 — PyTorch DataLoader: pip install -r requirements.txt, then create the DataLoader in train_utils.py.
Checkpoint #1: Commit data/ + scripts/preprocess_audio.py, ir_extract.py, create_dataset.py, train_utils.py. Dataset ready for training.
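The overlapping-window step above can be sketched in a few lines. This assumes the windows are 16,384 samples each (the plan's "16KB windows"); the `chunk_audio` helper name is illustrative:

```python
import numpy as np

def chunk_audio(x, window=16384, overlap=0.5):
    """Split a 1-D signal into overlapping windows (hop = window * (1 - overlap))."""
    hop = int(window * (1 - overlap))
    n = max(0, (len(x) - window) // hop + 1)
    return np.stack([x[i * hop : i * hop + window] for i in range(n)])

chunks = chunk_audio(np.arange(48000, dtype=np.float32))
```

With 50% overlap the hop is half the window, so each sample (except at the edges) appears in two chunks — this is what lets the model see every transient in more than one context.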

Stage 03 Model Training Weeks 3–5

Implement and train a Temporal Convolutional Network (TCN), a proven architecture for neural modeling of audio effects. Dilated convolutions give the model a large receptive field at manageable compute cost.

Model Architecture

class StormTCN(nn.Module):
    """Base model: dilated TCN, 4 dilation levels"""
    # Receptive field grows exponentially with depth
    # Level 1: dilation=1, Level 2: dilation=2, ...

class StormConditionedTCN(nn.Module):
    """Conditioned model: accepts drive/tone/volume as side info"""
    # Injects parameter values via FiLM conditioning layers
    # Allows single model to cover all 27 recording conditions
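The "receptive field grows exponentially" comment can be made concrete: with dilation doubling at each level, a stack of 1-D convolutions sees 1 + (k − 1)·Σ dᵢ input samples. A small sketch (kernel size 3 is an assumption here — the plan does not specify it):

```python
def tcn_receptive_field(kernel_size, n_levels):
    """Receptive field of stacked dilated conv layers; dilation doubles per level."""
    rf = 1
    for level in range(n_levels):
        rf += (kernel_size - 1) * (2 ** level)  # dilations 1, 2, 4, ...
    return rf

rf = tcn_receptive_field(kernel_size=3, n_levels=4)  # dilations 1, 2, 4, 8 → 31 samples
```

Doubling the number of levels roughly doubles the receptive field each time, which is why four levels already cover a useful span at 48 kHz without enormous kernels.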

Training Configuration

Setting                  Value
Optimizer                Adam
Loss Function            MSE (Mean Squared Error)
Epochs                   50–150 (with early stopping)
Early Stopping           Save best model when validation loss plateaus
Target Validation Loss   < 0.01
Target ESR               < 0.1 (Error-to-Signal Ratio on test set)
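The ESR target in the table is the energy of the prediction error relative to the energy of the target signal. A one-line NumPy version (the `esr` helper name is illustrative):

```python
import numpy as np

def esr(target, pred):
    """Error-to-signal ratio: error energy divided by target energy."""
    return float(np.sum((target - pred) ** 2) / np.sum(target ** 2))
```

An ESR of 0 means a perfect match; an ESR of 1 means the error carries as much energy as the signal itself, so the < 0.1 target requires the model to capture the vast majority of the pedal's response.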
Checkpoint #2 — MVP Ready: Commit models/ + results/ + training scripts. Output: models/storm_tcn_best.pt + training_log.csv + results/metrics.txt. Trained model ready for export.

Stage 04 Model Export & C++ Wrapper Weeks 5–7

Export the trained PyTorch model to ONNX format and write a C++ inference class that JUCE can call at audio rate.

ONNX Export

# export_onnx.py
torch.onnx.export(
    model,
    dummy_input,
    "models/storm_tcn.onnx",
    opset_version=14,
    dynamic_axes={
        'input': {0: 'batch', 2: 'time'},   # dynamic batch & time
        'output': {0: 'batch', 2: 'time'}
    }
)

# Validate: test inference with ONNX Runtime Python
# Verify output shapes & values match PyTorch model
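Once both the PyTorch reference output and the ONNX Runtime output have been pulled into NumPy arrays, the parity check described in the comments might look like this (`outputs_match` is a hypothetical helper, not part of the export script):

```python
import numpy as np

def outputs_match(ref, onnx_out, atol=1e-5):
    """Check the ONNX Runtime output against the PyTorch reference: shape + values."""
    return ref.shape == onnx_out.shape and np.allclose(ref, onnx_out, atol=atol)
```

Exact bit-for-bit equality is not expected across runtimes; a small absolute tolerance (here 1e-5, an assumption) is the usual acceptance criterion.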

C++ Inference Class Interface

// storm_inference.h
class StormInference {
public:
    StormInference(const std::string& model_path);

    // Process one audio chunk with parameter values
    std::vector<float> process(
        const std::vector<float>& audio_chunk,
        float drive,   // 0.0 – 1.0
        float tone,    // 0.0 – 1.0
        float volume   // 0.0 – 1.0
    );

private:
    // Ort::Env must be declared before Ort::Session so it is
    // constructed first and destroyed last
    Ort::Env env_;
    Ort::Session session_;
};
Checkpoint #3: Commit cpp/ + export scripts. Verify C++ code builds on macOS/Windows/Linux. Test inference produces correct output shapes. Status: C++ inference working and tested.

Stage 05 JUCE Plugin Integration Weeks 7–8

Integrate the C++ inference class into a JUCE AudioProcessor and build a minimal but functional VST3 plugin.

AudioProcessor Integration

// PluginProcessor.h
class StormAudioProcessor : public juce::AudioProcessor {
public:
    void processBlock(juce::AudioBuffer<float>&, juce::MidiBuffer&) override {
        // For each buffer chunk:
        // 1. Extract audio samples
        // 2. Get current drive/tone/volume parameter values
        // 3. Call inference.process(chunk, drive, tone, volume)
        // 4. Write output back to buffer
    }

private:
    std::unique_ptr<StormInference> inference_;
    juce::AudioProcessorValueTreeState apvts;
};

Parameters

Parameter   Range       Behavior
drive       0.0 – 1.0   Maps to Storm Drive knob. Responds to parameter changes in real time.
tone        0.0 – 1.0   Maps to Storm Tone knob. Dark (0) → Bright (1).
volume      0.0 – 1.0   Maps to Storm Volume knob. Affects output level and character.
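Since the pedal was captured at knob positions 0–100 but the plugin parameters are normalized to 0.0–1.0, bridging the two is a simple clamp-and-scale. A sketch (`knob_to_param` is a hypothetical helper, not part of the plugin code):

```python
def knob_to_param(position, knob_max=100.0):
    """Normalize a 0–100 knob position to the 0.0–1.0 range the model expects."""
    return min(max(position / knob_max, 0.0), 1.0)

# The midpoint recording storm_drive-50_tone-50_vol-60.wav maps to:
params = {name: knob_to_param(p) for name, p in {"drive": 50, "tone": 50, "vol": 60}.items()}
```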
Checkpoint #4 — MVP Plugin Ready: Commit juce/ + plugin binaries. Load in Reaper. Test audio flows. Parameters respond. Output: VST3 plugin binary (Windows/macOS/Linux). Next: UI polish & testing.

Stage 06 UI Polish & Testing Weeks 8+

🎯 Final commit: Release candidate v1.0 with complete documentation.

Reference File Structure

data/raw_recordings/
27 raw .wav from Storm + recording_log.csv
data/normalized/
Peak-normalized, silence-trimmed audio
data/impulse_responses/
Deconvolved IRs, trimmed to 500ms
data/training_dataset/
input_audio.npy, output_audio.npy, metadata.json
scripts/
All Python scripts (preprocess, dataset, training, export)
models/
storm_tcn_best.pt, storm_tcn.onnx
cpp/
storm_inference.h/cpp, CMakeLists.txt
juce/
Full VST3 project (StormVST3.jucer, Source/, CMakeLists.txt)
results/
metrics.txt, training_log.csv, test results