A complete staged implementation plan for capturing the SoloDallas Storm pedal as a neural network impulse response — recording 27 physical measurements, training a Temporal Convolutional Network, exporting to ONNX, and shipping a cross-platform VST3 plugin via JUCE.
Instead of trying to model the Storm's circuitry analytically, this project captures its sonic character through neural impulse response modeling. We record the physical hardware's response to 27 precisely controlled input conditions, train a TCN to learn the transfer function, and ship a plugin whose output aims to be indistinguishable from the hardware.
The plan is broken into 6 self-contained stages. Each stage has a clear deliverable and can be stopped and resumed independently — progress is never lost between sessions.
Record 27 impulse responses from the physical Storm at various parameter settings using log chirp sweeps. The recording matrix covers the full parameter space of the pedal.
| Parameter | Low | Mid | High |
|---|---|---|---|
| Drive | 0 | 50 | 100 |
| Tone | 0 (dark) | 50 (mid) | 100 (bright) |
| Volume | 30 | 60 | 100 |
```shell
storm_drive-[0-50-100]_tone-[0-50-100]_vol-[30-60-100].wav

# Examples:
storm_drive-0_tone-0_vol-30.wav       # minimum settings
storm_drive-100_tone-100_vol-100.wav  # maximum settings
storm_drive-50_tone-50_vol-60.wav     # midpoint
```
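The 3 × 3 × 3 matrix above can be enumerated programmatically to produce a capture checklist. A minimal sketch, using the parameter values from the table:

```python
from itertools import product

# Knob positions from the recording matrix table
DRIVE = [0, 50, 100]
TONE = [0, 50, 100]
VOL = [30, 60, 100]

def capture_filenames():
    """Enumerate all 27 recording filenames in a fixed, reproducible order."""
    return [
        f"storm_drive-{d}_tone-{t}_vol-{v}.wav"
        for d, t, v in product(DRIVE, TONE, VOL)
    ]

names = capture_filenames()
print(len(names))   # 27
print(names[0])     # storm_drive-0_tone-0_vol-30.wav
```

Fixing the enumeration order up front also makes it easy to map each file back to its conditioning vector during dataset construction.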
Deliverables: `data/raw_recordings/` with a `recording_log.csv` tracking peak levels for each file, plus normalized audio in `data/normalized/`.

Extract impulse responses from the recorded sweeps and prepare the data for PyTorch training. Overlapping audio chunks ensure the model learns transient behavior effectively.
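The overlapping-chunk preparation can be sketched with NumPy; the chunk length and 50% hop used here are illustrative assumptions, not values fixed by the plan:

```python
import numpy as np

def overlapping_chunks(audio, chunk_len=4096, hop=2048):
    """Slice a 1-D signal into overlapping, equal-length training chunks."""
    n = 1 + max(0, (len(audio) - chunk_len) // hop)
    return np.stack([audio[i * hop : i * hop + chunk_len] for i in range(n)])

x = np.arange(10000, dtype=np.float32)
chunks = overlapping_chunks(x)
print(chunks.shape)  # (3, 4096)
```

Only full-length windows are kept, so every training example has the same shape; a 50% hop means each transient appears in two chunks at different offsets.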
Implement and train a Temporal Convolutional Network (TCN), a widely used architecture for neural amp and effect modeling. Dilated convolutions give the model a large receptive field with manageable compute.
```python
import torch.nn as nn

class StormTCN(nn.Module):
    """Base model: dilated TCN, 4 dilation levels."""
    # Receptive field grows exponentially with depth:
    # level 1: dilation=1, level 2: dilation=2, ...

class StormConditionedTCN(nn.Module):
    """Conditioned model: accepts drive/tone/volume as side info."""
    # Injects parameter values via FiLM conditioning layers.
    # Allows a single model to cover all 27 recording conditions.
```
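The receptive-field growth described in the comments can be checked with simple arithmetic: a stack of dilated 1-D convolutions sees 1 + Σ (kernel_size − 1) · dilation input samples. A sketch, assuming kernel size 3 and doubling dilations for the 4 levels:

```python
def receptive_field(kernel_size, dilations):
    """Receptive field (in samples) of a stack of dilated 1-D convolutions."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# 4 dilation levels doubling each time, kernel size 3 (assumed values):
print(receptive_field(3, [1, 2, 4, 8]))  # 31
```

Doubling the dilation at each level makes the receptive field grow roughly exponentially with depth while the parameter count grows only linearly.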
| Setting | Value |
|---|---|
| Optimizer | Adam |
| Loss Function | MSE (Mean Squared Error) |
| Epochs | 50–150 (with early stopping) |
| Early Stopping | Stop when validation loss plateaus; keep the best checkpoint |
| Target Validation Loss | < 0.01 |
| Target ESR | < 0.1 (Error-to-Signal Ratio on test set) |
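The ESR target in the table is the energy of the error divided by the energy of the target signal; a NumPy sketch of the metric:

```python
import numpy as np

def esr(target, pred, eps=1e-10):
    """Error-to-Signal Ratio: sum((y - y_hat)^2) / sum(y^2)."""
    return float(np.sum((target - pred) ** 2) / (np.sum(target ** 2) + eps))

y = np.array([1.0, -1.0, 0.5, -0.5])
print(esr(y, y))        # 0.0 (perfect match)
print(esr(y, 0.9 * y))  # ~0.01
```

Unlike plain MSE, ESR is normalized by the target's energy, so the < 0.1 threshold means the same thing for quiet and loud test passages.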
Deliverables: `models/storm_tcn_best.pt`, `training_log.csv`, and `results/metrics.txt`. The trained model is ready for export.

Export the trained PyTorch model to ONNX format and write a C++ inference class that JUCE can call at audio rate.
```python
# export_onnx.py
import torch

torch.onnx.export(
    model,
    dummy_input,
    "models/storm_tcn.onnx",
    opset_version=14,
    input_names=['input'],    # names must match the dynamic_axes keys
    output_names=['output'],
    dynamic_axes={
        'input': {0: 'batch', 2: 'time'},   # dynamic batch & time
        'output': {0: 'batch', 2: 'time'},
    },
)

# Validate: test inference with the ONNX Runtime Python API
# Verify output shapes & values match the PyTorch model
```
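The validation step boils down to a shape and tolerance comparison between the PyTorch and ONNX Runtime outputs. A sketch of that comparison logic (the array values and tolerances here are illustrative):

```python
import numpy as np

def outputs_match(torch_out, onnx_out, rtol=1e-4, atol=1e-5):
    """Check that ONNX Runtime output matches the PyTorch reference."""
    torch_out = np.asarray(torch_out, dtype=np.float32)
    onnx_out = np.asarray(onnx_out, dtype=np.float32)
    return bool(
        torch_out.shape == onnx_out.shape
        and np.allclose(torch_out, onnx_out, rtol=rtol, atol=atol)
    )

ref = np.array([[0.1, -0.2, 0.3]], dtype=np.float32)
print(outputs_match(ref, ref + 1e-6))  # True
print(outputs_match(ref, ref + 0.1))   # False
```

Exact bit equality is not expected after export; small float differences from operator reimplementation are normal, which is why a tolerance check is used rather than `==`.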
```cpp
// storm_inference.h
#include <string>
#include <vector>
#include <onnxruntime_cxx_api.h>

class StormInference {
public:
    explicit StormInference(const std::string& model_path);

    // Process one audio chunk with parameter values
    std::vector<float> process(
        const std::vector<float>& audio_chunk,
        float drive,   // 0.0 – 1.0
        float tone,    // 0.0 – 1.0
        float volume   // 0.0 – 1.0
    );

private:
    Ort::Env env_;          // declared before session_ so it is constructed first
    Ort::Session session_;
};
```
Integrate the C++ inference class into a JUCE AudioProcessor and build a minimal but functional VST3 plugin.
```cpp
// PluginProcessor.h
#include <JuceHeader.h>
#include "storm_inference.h"

class StormAudioProcessor : public juce::AudioProcessor {
public:
    void processBlock(juce::AudioBuffer<float>&, juce::MidiBuffer&) override {
        // For each buffer:
        // 1. Extract audio samples
        // 2. Get current drive/tone/volume parameter values
        // 3. Call inference_->process(chunk, drive, tone, volume)
        // 4. Write the output back to the buffer
    }

private:
    std::unique_ptr<StormInference> inference_;
    juce::AudioProcessorValueTreeState apvts;
};
```
| Parameter | Range | Behavior |
|---|---|---|
| `drive` | 0.0 – 1.0 | Maps to Storm Drive knob. Responds to parameter changes in real time. |
| `tone` | 0.0 – 1.0 | Maps to Storm Tone knob. Dark (0) → bright (1). |
| `volume` | 0.0 – 1.0 | Maps to Storm Volume knob. Affects output level and character. |
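Since the hardware was recorded at knob positions 0–100 but the plugin exposes 0.0–1.0 parameters, a small normalization helper maps between the two ranges. A sketch, assuming a simple linear mapping:

```python
def knob_to_param(knob_value, knob_max=100.0):
    """Map a hardware knob position (0-100) to a plugin parameter (0.0-1.0)."""
    return min(max(knob_value / knob_max, 0.0), 1.0)

print(knob_to_param(0))    # 0.0
print(knob_to_param(50))   # 0.5
print(knob_to_param(100))  # 1.0
```

With this mapping, the 27 recorded conditions land on exact conditioning values (e.g. Drive 50 → 0.5), and intermediate knob positions rely on the conditioned model interpolating between them.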