We recently released a new version of the Deep Learning Toolkit which, among other new features, supports accelerating training and inference on Nvidia GPUs. This is a feature our customers have long been waiting for, as training usually takes too much time when executed on CPUs. Some tasks, such as image recognition of real-world objects, are practically infeasible to solve on CPUs, as it would take years to obtain reasonably acceptable results. Not to mention that it would require several iterations of training before an appropriate network architecture and hyperparameters could be found.
We have simplified the process of enabling GPU acceleration.
In this blog post, we will describe all the necessary details for getting a LabVIEW project ready to execute the Deep Learning Toolkit on GPUs. Please note that these instructions can also be found in the toolkit manual.
CUDA drivers and installation
To start, please make sure that CUDA and cuDNN (from Nvidia) are installed on the development machine. These are the GPU drivers and prebuilt libraries the toolkit integrates and runs to accelerate deep learning operations. At the time of publishing this blog post, the required versions of CUDA and cuDNN are the following:
For DeepLTK v5.1
For DeepLTK v4.0
For DeepLTK v3.0
Note: cuDNN installation instructions can be found on Nvidia's website:
Note: It is important to restart LabVIEW after installing the GPU drivers.
We are continuously updating the toolkit to support the latest versions of CUDA and cuDNN. Please check the help of the currently installed version of the toolkit to confirm which driver versions you need.
Checking GPU driver installation
In order to check whether the CUDA and cuDNN drivers are correctly installed and the LabVIEW project sees the correct versions of the drivers, DeepLTK provides a dedicated API VI. Navigate to "Addons>>Ngene>>Deep Learning>>GPU" and place "NN_Check_GPU_Drivers.vi" in an empty VI.
Note: The VI can also be found via Quick Drop by pressing "Ctrl+Space" and typing "NN_Check_GPU_Drivers.vi".
Open the front panel of the VI and press the Run button. It should display the versions of all GPU drivers installed on the system, as well as the versions loaded by LabVIEW.
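Outside LabVIEW, a quick way to sanity-check the CUDA installation is to query `nvcc`, the CUDA compiler that ships with the toolkit. The sketch below is a hypothetical helper, not part of DeepLTK; the function names `parse_cuda_version` and `installed_cuda_version` are our own, for illustration only, and it assumes `nvcc` is on the system PATH:

```python
import re
import subprocess

def parse_cuda_version(nvcc_output: str) -> str:
    """Extract the CUDA release number (e.g. "11.2") from `nvcc --version` output."""
    match = re.search(r"release\s+(\d+\.\d+)", nvcc_output)
    if not match:
        raise ValueError("Could not find a CUDA release number in the output")
    return match.group(1)

def installed_cuda_version() -> str:
    """Query the locally installed CUDA toolkit version by running nvcc."""
    output = subprocess.run(
        ["nvcc", "--version"], capture_output=True, text=True, check=True
    ).stdout
    return parse_cuda_version(output)
```

The version this reports should match the one shown by "NN_Check_GPU_Drivers.vi"; if they differ, LabVIEW is likely picking up a different driver installation than the one on your PATH.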
Enabling GPU acceleration in LabVIEW projects
By default, GPU acceleration is disabled in the toolkit; it should be enabled via the Conditional Disable Symbols in the Project or target (My Computer) properties.
As shown in the picture below, you just need to add the symbol "NNG" with the value "TRUE". Set it to "FALSE" to disable GPU acceleration.
Selecting GPU for execution from Toolkit's API
Now that GPU acceleration is enabled, all that remains is to select the appropriate hardware for execution through the toolkit's API.
This step is quite simple: just select the proper "NN_Device" when creating the network. And that's it. Run your model as you would on the CPU, and the toolkit takes care of everything else.
Now it's time to see whether it is really worth it. To quantify the benefit of GPU acceleration, simply measure the time spent on network training or inference before and after enabling it.
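The comparison itself is straightforward: average the wall-clock time of each run, then divide the CPU time by the GPU time. The sketch below is a generic timing helper in Python, not part of the toolkit, and the 50 s / 2 s figures in the usage example are made-up placeholders rather than benchmark results:

```python
import time

def average_runtime(fn, repeats: int = 5) -> float:
    """Average wall-clock seconds over several runs of fn()."""
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) / repeats

def speedup(cpu_seconds: float, gpu_seconds: float) -> float:
    """Ratio of CPU time to GPU time for the same workload."""
    if gpu_seconds <= 0:
        raise ValueError("GPU time must be positive")
    return cpu_seconds / gpu_seconds

# Placeholder numbers: a run taking 50 s on the CPU and 2 s on the GPU
print(speedup(50.0, 2.0))  # 25.0
```

Averaging over several repeats matters, since the first GPU run typically includes one-time initialization overhead that should not be counted against steady-state performance.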
To give you a solid idea of how much the results can improve, we present below a training time comparison for the MNIST handwritten digit recognition problem.
As you can see, GPUs can speed up the process by up to 25 times, depending on the CPU and GPU models.
GPU Enabled Examples
The toolkit also includes several examples designed to run on a GPU, so you can easily start running them and testing the performance.