
Deep Learning with LabVIEW. Tutorial #2.2: Waveform Signal Regression

Updated: Nov 23, 2023


In our previous post, we spoke about waveform signal classification, where the goal was to categorize various waveform signals based on their time domain representation. Today, we shift our focus to waveform signal regression.

Regression involves predicting or modeling the underlying patterns, characteristics, or properties of signals. Examples include predicting a signal's main frequency, amplitude, phase, or other characteristic parameters. DNNs (Deep Neural Networks) can serve as a powerful tool for extracting these kinds of characteristic properties from input signals.

In this specific example, our focus is on predicting signal frequency using DNNs. While this problem is artificially constructed, it provides a valuable reference point for tackling more complex real-world scenarios. The approach taken here is analogous to performing a Fast Fourier Transform (FFT) on the signal and then extracting the location (argument) of its maximum value, which represents the signal's primary frequency or harmonic.
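For reference, the classical FFT-based approach the network is meant to mimic can be sketched in a few lines. This is a minimal Python/NumPy illustration, not part of the LabVIEW example itself; the sample rate and signal length are arbitrary assumptions.

```python
import numpy as np

def dominant_frequency(signal, sample_rate):
    """Estimate the dominant frequency of a real-valued signal via the FFT."""
    spectrum = np.abs(np.fft.rfft(signal))            # magnitude spectrum (positive frequencies)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[0] = 0.0                                 # ignore the DC component
    return freqs[np.argmax(spectrum)]                 # frequency of the strongest bin

# Example: a 50 Hz sine sampled at 1 kHz
fs = 1000
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 50 * t)
print(dominant_frequency(x, fs))                      # close to 50 Hz (limited by bin resolution)
```

Note that this estimator is quantized to the FFT bin resolution (sample_rate / N), whereas a trained regression network outputs a continuous value.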

This signal regression example shares similarities with our previous signal classification example in terms of dataset generation, network training, and deployment. Therefore, the blog post about Waveform Signal Classification may help you in better understanding the topic.
This and other DeepLTK based examples can be found in Ngene's GitHub repository.


Waveform Dataset Generation

In this task, our goal is to train a model that predicts signal frequency regardless of other properties such as amplitude, DC offset, phase, signal type, or noise level. (While it is possible to predict multiple properties simultaneously, we will cover that later in this post.) To achieve this, we generate a variety of signals by varying all characteristic properties, but record only frequency as the target value. This results in a dataset where signals may share the same frequency while differing in other aspects such as noise level, amplitude, or phase. This step is necessary to ensure that the frequency prediction remains stable even when the other parameters vary.
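The dataset generation idea above can be sketched as follows. The actual example builds the dataset in LabVIEW; this Python/NumPy sketch only illustrates the principle, and the parameter ranges, signal length, and sample rate are assumptions, not the values used in the DeepLTK example.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sample(n_points=256, sample_rate=1000.0):
    """Generate one noisy sine with randomized properties; only frequency is the target."""
    freq      = rng.uniform(5.0, 100.0)     # target value (Hz)
    amplitude = rng.uniform(0.5, 2.0)       # the remaining properties are nuisance parameters
    phase     = rng.uniform(0.0, 2 * np.pi)
    dc_offset = rng.uniform(-1.0, 1.0)
    noise_amp = rng.uniform(0.0, 0.3)
    t = np.arange(n_points) / sample_rate
    x = dc_offset + amplitude * np.sin(2 * np.pi * freq * t + phase)
    x += noise_amp * rng.standard_normal(n_points)
    return x, freq                          # (input signal, regression target)

# Build a small dataset: inputs of shape (N, 256), frequency targets of shape (N,)
samples = [make_sample() for _ in range(1000)]
X = np.stack([s for s, _ in samples])
y = np.array([f for _, f in samples])
```

Because amplitude, phase, offset, and noise are randomized independently of frequency, the network cannot rely on them and must learn a frequency estimate that is invariant to these variations.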


Training The Network

The network architecture employed here mirrors the one used in waveform classification, differing mainly in the choice of the loss function. In this context, Mean Squared Error (MSE) is used to measure the continuous difference between predicted and ground truth values.
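For readers unfamiliar with the loss function, MSE is simply the average of the squared differences between predictions and ground truth. A minimal sketch (the values below are made up for illustration):

```python
import numpy as np

def mse_loss(predicted, target):
    """Mean Squared Error: average of squared prediction errors."""
    return np.mean((np.asarray(predicted) - np.asarray(target)) ** 2)

# Two frequency predictions vs. their ground-truth values (Hz)
print(mse_loss([49.5, 60.2], [50.0, 60.0]))   # (0.25 + 0.04) / 2 = 0.145
```

Unlike the cross-entropy loss typically used for classification, MSE penalizes a prediction in proportion to how far it lands from the continuous target value.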

As for performance evaluation, we rely on the loss value rather than the error rate, since the latter is suited to classification tasks.

The training process is presented in the following video.


During the inference phase, we can simulate signals with diverse properties and evaluate the model's performance under various conditions. This includes altering the signal's frequency and comparing the predicted outcome with the reference data displayed in the "Frequency Chart." Additionally, we can modify other characteristics of the simulated waveform, such as its type, phase, noise level, amplitude, and DC offset, to evaluate the model's indifference to variation in these parameters.

As can be seen from the recorded video, the model shows decent performance even at moderate noise levels. However, once the noise magnitude surpasses a certain threshold, the model's accuracy drops significantly.


Multiple Output Waveform Signal Regression

So far we have discussed single output waveform signal regression, where our focus was on predicting the waveform's frequency. In this section, our objective extends beyond frequency prediction to include amplitude, phase, and DC offset.

Dataset Normalization

As we want to predict multiple characteristics of the waveform, and these characteristics span different value ranges, we normalize the outputs. This normalization ensures that all target values are confined to the [-1;1] range, as can be seen in the snapshot on the right. Otherwise, the process is identical to single output waveform regression.
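The normalization is a simple linear mapping from each property's generation range to [-1;1]. A minimal sketch, assuming a hypothetical frequency range of [5, 100] Hz (the actual ranges are set in the LabVIEW dataset generator):

```python
import numpy as np

def normalize(values, lo, hi):
    """Linearly map values from [lo, hi] to [-1, 1]."""
    return 2.0 * (np.asarray(values) - lo) / (hi - lo) - 1.0

# e.g. frequency targets generated in an assumed [5, 100] Hz range
freqs = np.array([5.0, 52.5, 100.0])
print(normalize(freqs, 5.0, 100.0))   # → [-1.  0.  1.]
```

The same mapping is applied per property (amplitude, phase, DC offset), each with its own [lo, hi] range, so no single output dominates the MSE loss.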


The deployment phase reverses the normalization applied to the outputs during training. Since we provided the neural network with normalized outputs in the [-1;1] range, it predicts values within this interval. Subsequently, we reconstruct the outputs to their original value ranges to obtain the final predictions.
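The rescaling step is just the inverse of the linear normalization used during training. A sketch, again assuming a hypothetical [5, 100] Hz training range for the frequency output:

```python
import numpy as np

def denormalize(values, lo, hi):
    """Map network outputs from [-1, 1] back to the original [lo, hi] range."""
    return (np.asarray(values) + 1.0) / 2.0 * (hi - lo) + lo

# A raw network output of 0.0 maps to the midpoint of the assumed range
print(denormalize(0.0, 5.0, 100.0))   # → 52.5
```

Each of the four outputs (frequency, amplitude, phase, DC offset) is denormalized with the same [lo, hi] bounds that were used to normalize its targets during dataset generation.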

The Front Panel of the Inference VI offers a dynamic view where we can manipulate the waveform's properties, such as frequency and DC offset. By making these adjustments, we can observe the corresponding predicted and reference values on four distinct charts, each dedicated to one specific property.


In this tutorial, we explored waveform signal regression, both single output and multiple output, using Deep Neural Networks (DNNs). Unlike signal classification, which identifies signal types, signal regression predicts specific signal characteristics. In the first phase we concentrated on predicting signal frequency; in the second phase we demonstrated how to predict multiple characteristics of the waveform with a single network.

We started by explaining how to generate datasets, creating waveform signals with various parameters like frequency and amplitude. Next, we demonstrated the ease of implementing the training and inference processes using DeepLTK in LabVIEW.

This example provides a foundational starting point for real-world applications. By refining the dataset reader and potentially adjusting the network topology for more complex tasks, developers can use this tutorial as a solid base for building sophisticated applications.


