Updated: Oct 4
We began our series of blog posts with the simplest examples achievable using Deep Neural Networks, serving as an introduction to integrating these networks within LabVIEW. That initial phase allowed us to understand the fundamental features of DeepLTK and establish the core concepts needed for this section.
Now we're starting a new series of blog posts focused on projects related to real-life applications. Building on the knowledge gained from our previous blog posts, we'll discuss practical scenarios that can be effectively addressed using the DeepLTK toolkit.
To understand the basics of Deep Learning projects using DeepLTK, read our recent blog posts and examine the simple examples from Ngene's GitHub repository.
In today's discussion, we'll explore waveform signal classification. We'll be working with a variety of waveform signals, namely sine, square, triangle, sawtooth, and noise, as depicted in the image below. The objective is to accurately classify the type of each waveform signal using Deep Neural Networks.
Waveform Dataset Generation
For this demo we will be using a synthetic dataset containing 5 different types of waveforms. The dataset generation process is implemented in "WaveFormDataGenerator.vi". The block diagram of this VI is provided below.
The idea is to generate a sufficiently large number of waveform samples of each type, with wide variability in features such as frequency, amplitude, and phase, since we aim to obtain a model that can predict the type/class of a waveform under these varying conditions.
The VI provides means for specifying the sampling information, i.e. the sample rate and the number of samples in a single waveform.
The ranges of feature variability are defined with the help of constants.
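To make the generation step concrete, here is a minimal Python sketch of the same idea (the actual implementation is the LabVIEW "WaveFormDataGenerator.vi"; the function name, class list, and parameter ranges below are illustrative assumptions, not DeepLTK's):

```python
import numpy as np

CLASSES = ["sine", "square", "triangle", "sawtooth", "noise"]

def generate_waveform(cls, n_samples=128, fs=1000.0, rng=np.random):
    """Generate one waveform of the given class with randomized
    frequency, amplitude, and phase (ranges are assumed for illustration)."""
    t = np.arange(n_samples) / fs
    freq = rng.uniform(5.0, 50.0)        # Hz
    amp = rng.uniform(0.5, 2.0)
    phase = rng.uniform(0.0, 2 * np.pi)
    arg = 2 * np.pi * freq * t + phase
    if cls == "sine":
        return amp * np.sin(arg)
    if cls == "square":
        return amp * np.sign(np.sin(arg))
    if cls == "sawtooth":
        return amp * (2 * ((arg / (2 * np.pi)) % 1.0) - 1.0)
    if cls == "triangle":
        saw = 2 * ((arg / (2 * np.pi)) % 1.0) - 1.0
        return amp * (2 * np.abs(saw) - 1.0)
    if cls == "noise":
        return amp * rng.standard_normal(n_samples)
    raise ValueError(f"unknown class: {cls}")

# Build a tiny dataset: one waveform and one integer label per class
rng = np.random.default_rng(0)
X = np.stack([generate_waveform(c, rng=rng) for c in CLASSES])
y = np.arange(len(CLASSES))
```

In a real dataset many waveforms per class would be drawn, so the randomized frequency, amplitude, and phase cover the intended training ranges.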
Once all the waveforms in the dataset are generated together with their class indices/labels, the labels are converted from categorical format to one-hot encoding. This is required because the model's predictions are expressed as class probabilities, and the loss is calculated by comparing those probabilities with labels represented in one-hot encoded format.
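The categorical-to-one-hot conversion can be sketched as follows (DeepLTK performs this step on the block diagram; the helper name here is hypothetical):

```python
import numpy as np

def one_hot(labels, num_classes):
    """Convert integer class indices to one-hot encoded rows."""
    encoded = np.zeros((len(labels), num_classes))
    encoded[np.arange(len(labels)), labels] = 1.0
    return encoded

labels = np.array([0, 3, 1])            # e.g. sine, sawtooth, square
targets = one_hot(labels, num_classes=5)
# Each row places a probability of 1 on its class, 0 elsewhere,
# matching the format of the model's softmax output.
```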
DeepLTK accepts different types of datasets, which need to be presented in a specific format. The format assumes providing DVRs (Data Value References) to the inputs and outputs of the dataset, as well as the describing information. The higher-level "Generate_WF_Class_Dataset.vi" accepts the inputs from two instances of "WaveFormDataGenerator.vi" (one for the Training and another for the Test dataset) and represents them in the conforming format (NN_DataSet(In1D_Out1D).ctl).
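Conceptually, this wrapper pairs each dataset's input waveforms with their one-hot targets plus describing information. A loose Python analogue is shown below (the real VI passes DVRs and the NN_DataSet(In1D_Out1D).ctl cluster; the record fields here are hypothetical and only convey the structure):

```python
import numpy as np

def make_dataset(inputs, targets):
    """Bundle inputs, one-hot targets, and describing info into one
    record (a rough analogue of the dataset cluster; field names
    are illustrative assumptions)."""
    assert len(inputs) == len(targets)
    return {
        "inputs": inputs,                 # shape: (N, samples)
        "targets": targets,               # shape: (N, num_classes)
        "num_examples": len(inputs),
        "input_size": inputs.shape[1],
        "num_classes": targets.shape[1],
    }

# Two instances, mirroring the Training and Test dataset generators
train = make_dataset(np.zeros((100, 128)), np.zeros((100, 5)))
test = make_dataset(np.zeros((20, 128)), np.zeros((20, 5)))
```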
Main Training VI
Once the datasets are generated, we can proceed to the standard steps of the training process, namely model creation, loss and training process configuration, training, evaluation, etc. These steps are described in more detail in our previous blog posts.
The block diagram of the training VI is presented below.
The training process is presented in the following screen recording video.
Moving on to the inference phase of the project, we can experiment by simulating different waveforms and observing how the model performs.
The UI allows controlling the variables within specified ranges, and by changing these variables we can see how the model predicts the waveform's class. The model performs quite accurately when the variables are chosen from the ranges used during training dataset generation, but when a variable goes outside the training ranges, the model's accuracy decreases. One can hope the model generalizes to samples outside the training distribution, but this is not guaranteed. So for real-world applications, the more reliable approach is to augment the dataset with more diverse samples, i.e. with more variability.
In this blog post we covered a waveform classification problem and demonstrated how to train a model to predict a 1-dimensional waveform's class from its time-domain samples.
This is our first discussion of a more or less real-world problem, and more real-world examples are coming next. So stay tuned!