
DeepLTK

Deep Learning Toolkit for LabVIEW

DeepLTK is an award-winning product designed to empower researchers and engineers with intuitive and powerful tools to develop, validate, and deploy deep learning-based systems in the LabVIEW development environment.

 

DeepLTK was developed entirely in LabVIEW, which makes it unique on the market and greatly simplifies the process of integrating machine learning technologies into LabVIEW applications.

FEATURES & FUNCTIONALITY

Build, configure, train, visualize, and deploy deep neural networks.

Deep Neural Networks in LabVIEW

FEATURE HIGHLIGHTS:
  • Create, configure, train, and deploy deep neural networks (DNNs) in LabVIEW

  • Accelerate training and deployment of DNNs on GPUs

  • Save trained networks and load them for deployment

  • Visualize network topology and common metrics (memory footprint, computational complexity)

  • Deploy pre-trained networks on NI's LabVIEW Real-Time target for inference

  • Speed up pre-trained networks by employing network graph optimization utilities

  • Analyze and evaluate network performance

  • Start with ready-to-run real-world examples

  • Accelerate inference on FPGAs (with the help of the DeepLTK FPGA Add-on)

SUPPORTED LAYERS
 

The toolkit supports the set of layers required to implement deep neural network architectures for common machine learning applications such as image classification, object detection, instance segmentation, and voice recognition (a conceptual sketch of how these layers compose follows the list):

  • Input (1D, 3D)

  • Data Augmentation

  • Convolutional

  • Fully Connected or Dense

  • Batch Normalization (1D, 3D)

  • Pool (maximum, average)

  • Upsampling

  • ShortCut

  • Concatenation

  • Dropout (1D, 3D)

  • SoftMax

  • Object Detection
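
DeepLTK layers are created and wired graphically in LabVIEW, so there is no textual DeepLTK API to quote here. Purely as a conceptual illustration of how the layer types listed above typically compose into a small image classifier, here is a rough equivalent sketched in PyTorch (all layer sizes and names are illustrative assumptions, not DeepLTK code):

import torch
import torch.nn as nn

# Conceptual sketch only: stacks the same kinds of layers DeepLTK provides.
classifier = nn.Sequential(
    nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3, padding=1),  # Convolutional
    nn.BatchNorm2d(16),                                                   # Batch Normalization (3D)
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),                                          # Pool (maximum)
    nn.Flatten(),
    nn.Dropout(p=0.5),                                                    # Dropout (1D)
    nn.Linear(16 * 14 * 14, 10),                                          # Fully Connected / Dense
    nn.Softmax(dim=1),                                                    # SoftMax
)

# One forward pass on a dummy 28x28 grayscale image (MNIST-sized input).
probabilities = classifier(torch.randn(1, 1, 28, 28))
print(probabilities.shape)  # torch.Size([1, 10])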

SUPPORTED NETWORK ARCHITECTURES

  • MLP - Multilayer Perceptron

  • CNN - Convolutional Neural Networks

  • FCN - Fully Convolutional Network

  • ResNet - Deep Residual Learning for Image Recognition (see the residual-block sketch after this list)

  • YOLO v2 - You Only Look Once for object detection

  • U-Net - Semantic Segmentation
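
The ShortCut layer listed earlier is what makes ResNet-style residual connections possible: the input of a block is added back onto its output before the next activation. A minimal sketch of such a residual block, written in PyTorch rather than DeepLTK and with illustrative channel sizes, might look like this:

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two convolutions plus a shortcut (skip) connection, as in ResNet."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        y = torch.relu(self.bn1(self.conv1(x)))
        y = self.bn2(self.conv2(y))
        return torch.relu(y + x)  # the shortcut: add the block input to its output

block = ResidualBlock(channels=16)
print(block(torch.randn(1, 16, 32, 32)).shape)  # torch.Size([1, 16, 32, 32])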

REFERENCE EXAMPLES

Reference examples are installed as part of the toolkit and can be found at the following path:

LabVIEW install path\examples\Ngene\Deep Learning Toolkit

  • MNIST_Classifier_MLP(Train_1D).vi and MNIST_Classifier_MLP(Train_3D).vi - demonstrate the process of programmatically building and training a deep neural network for the image classification task of handwritten digit recognition (based on the MNIST dataset) using an MLP (Multilayer Perceptron) architecture.

  • MNIST_Classifier_CNN(Train).vi - demonstrates the process of programmatically building and training a deep neural network for the image classification task of handwritten digit recognition (based on the MNIST dataset) using a CNN (Convolutional Neural Network) architecture.

  • MNIST_Classifier(Deploy).vi - demonstrates the process of deploying a pretrained network by automatically loading the network configuration and weights files generated by the training examples above (this train-save-deploy pattern is sketched after the list).

  • MNIST(RT_Deployment) (project) - demonstrates deploying a pretrained model on NI Real-Time targets.

  • MNIST_CNN_GPU (project) - demonstrates the process of accelerating training and deployment on GPUs.

  • YOLO_Object_Detection(Cam).vi - demonstrates the process of deploying a pretrained network for object detection based on the YOLO (You Only Look Once) architecture.

  • YOLO_GPU (project) - demonstrates the process of accelerating a YOLO object detection network for deployment on GPUs.

  • Object_Detection (project) - demonstrates training a neural network for object detection on a simple dataset.
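
Taken together, the MNIST examples walk through a train, save, and redeploy pattern: a network is built and trained, its configuration and weights are saved to disk, and the deployment example reloads them for inference. The sketch below illustrates that general pattern in PyTorch for orientation only; the actual examples implement it with DeepLTK VIs in LabVIEW, and every name and hyperparameter here is an illustrative assumption:

import torch
import torch.nn as nn

# A small MLP for 28x28 grayscale digits (MNIST-sized inputs).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# "Training" on dummy data, just to produce weights worth saving.
for _ in range(10):
    images, labels = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
    optimizer.zero_grad()
    loss_fn(model(images), labels).backward()
    optimizer.step()

# Save the trained weights, then reload them into a fresh model for deployment.
torch.save(model.state_dict(), "mnist_mlp.pt")
deployed = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
deployed.load_state_dict(torch.load("mnist_mlp.pt"))
deployed.eval()

with torch.no_grad():
    print(deployed(torch.randn(1, 1, 28, 28)).argmax(dim=1))  # predicted digit class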

INSTALLATION AND SYSTEM REQUIREMENTS

The toolkit comes as a VIPM (VI Package Manager) installer, which includes the toolkit itself, the documentation, and the reference examples.

DEVELOPMENT SYSTEM REQUIREMENTS

  • LabVIEW 2016 (32-bit and 64-bit) and above (64-bit version of LabVIEW is recommended)

  • Windows 10

