DeepLTK - Deep Learning Toolkit for LabVIEW
Release Notes

V6.2.1 
Backward Compatibility

This is a minor update which does not break backward compatibility with v6.x.x versions of the toolkit.

Features

  • Added ReLU6 activation function.

  • Removed the requirement for using Conditional Disable Symbol ("NNG") for enabling GPU acceleration.

  • Removed GPU-specific examples.
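For reference, ReLU6 is the standard ReLU activation clipped at an upper bound of 6, i.e. y = min(max(x, 0), 6). A minimal Python sketch of the math (illustrative only; in DeepLTK the activation is configured on the layer, not called as code):

```python
def relu6(x):
    # ReLU clipped at 6: zero for negative inputs, identity up to 6, then saturates
    return min(max(x, 0.0), 6.0)

# relu6(-2.0) -> 0.0, relu6(3.0) -> 3.0, relu6(8.0) -> 6.0
```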

Bug Fixes
  • Fixed a bug in batch normalization.

  • Fixed a bug in the calculation of the MSE loss.

  • Fixed a bug in mAP metric calculations for object detection.

  • Increased the maximum number of layers in the network from 100 to 500.

  • Other minor bug fixes.

Other Changes
  • Updated help file.

  • Added a link to the GitHub examples in the example instructions.

V6.1.1 
Backward Compatibility

This is a major update which does not break backward compatibility with v5.x.x versions of the toolkit.

 
Features
  • Added support for the NVIDIA RTX 3xxx series of GPUs by upgrading the CUDA libraries.

  • CUDA libraries are now bundled with the toolkit installer, eliminating the need to install them separately.

  • All augmentation operations are now accelerated on the GPU, which greatly speeds up training when augmentations are enabled.

  • Support for older versions of LabVIEW is deprecated. LabVIEW 2020 and newer are supported starting with this release.

 

Enhancements

  • Improved DeepLTK library loading time in LabVIEW.

V5.1.1 
Backward Compatibility

Important: This is a major update of the toolkit, which breaks backward compatibility with previous (pre-v5.x.x) versions of the toolkit.

 
Features
  • Redesigned the process for specifying and configuring the loss function. Setting the loss function and configuring the training process are now separate, and a new API for setting the loss function (NN_Set_Loss.vi) has been added.

  • Modified “NN_Train_Params.ctl”.

    • Loss-function-related parameters have been removed from “NN_Train_Params.ctl”.

    • “Momentum” is replaced with “Beta_1” and “Beta_2” parameters for specifying first and second order momentum coefficients.

    • “Weight_Decay” is replaced with “Weight_Decay(L1)” and “Weight_Decay(L2)” for specifying L1 and L2 weight regularization.

  • “NN_Eval_Test_Error_Loss.vi” is deprecated. Its functionality is now split between “NN_Predict.vi” and “NN_Eval.vi”.

  • Added support for Adam optimizer.

  • Added support for Swish and Mish activation functions.

  • Conv3D layer is now renamed to Conv2D.

  • Added advanced Conv2D layer (Conv2D_Adv), which supports:

    • dilation

    • grouped convolution

    • non-square kernel window dimensions

    • non-square stride sizes

    • different vertical and horizontal padding sizes

  • Modified Upsample layer configuration control (Upsample_cfg.ctl) to separate vertical and horizontal strides.

  • Added new network performance evaluation API (NN_Eval.vi).

  • Label_Idx is removed from “NN_Predict.vi”. Classification predictions can now be converted to categorical/binary labels with the help of “NN_to_Categorical.vi”.

  • Added new API for converting floating point predictions from network to categorical/binary (NN_to_Categorical.vi).

  • Added new API for converting categorical/binary labels to one-hot-encoded format (NN_to_OneHotEncoded.vi).

  • Now MaxPool and AvgPool layers support non-square window dimensions.

  • Added new API control for 3D dimension representation (NN_Dims(C,H,W).ctl).

  • Region layer is now renamed to YOLO_v2.

    • Removed loss related configuration parameters (moved to NN_Set_Loss.vi configuration control).

    • Anchor dimensions in the YOLO_v2 layer should now be provided relative to the input image dimensions.

    • The YOLO_v2 layer can automatically create the last/preceding Conv2D layer to match the required number of classes and anchors.

  • Added support for Channel-Wise Cross-Entropy loss function for 3D output type networks with channel wise SoftMax output layer.

  • Added "Train?" control to “NN_Forward.vi” to take into account whether the network is in the training state or not.

  • “NN_Calc_Confusion_Matrix.vi” has been converted to a polymorphic VI whose instance is chosen based on the dataset provided at the input.

  • Optimized “NN_Draw_Rectangle.vi” for speed.

  • Increased Confusion Matrix table display precision from 3 to 4 digits.

  • Updated reference examples to make them compatible with latest changes.

  • Now DeepLTK supports CUDA v10.2 and CUDNN v7.5.x.

  • Configuration file format is updated to address feature changes.

  • Help file renamed to “DeepLTK_Help.chm”.

  • Help file updated to reflect recent changes.
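The split between Beta_1/Beta_2 and Weight_Decay(L1)/Weight_Decay(L2) maps onto the standard Adam update with L1/L2 regularization added to the gradient. A NumPy sketch of one update step, with parameter names chosen to mirror the controls above (illustrative only, not the toolkit's implementation):

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, beta_1=0.9, beta_2=0.999,
              wd_l1=0.0, wd_l2=0.0, eps=1e-8):
    """One Adam update; wd_l1/wd_l2 correspond to Weight_Decay(L1)/(L2)."""
    # L1/L2 regularization contributes to the effective gradient
    g = grad + wd_l1 * np.sign(w) + wd_l2 * w
    m = beta_1 * m + (1 - beta_1) * g          # first-order momentum (Beta_1)
    v = beta_2 * v + (1 - beta_2) * g * g      # second-order momentum (Beta_2)
    m_hat = m / (1 - beta_1 ** t)              # bias correction using step count t
    v_hat = v / (1 - beta_2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```

Here beta_1 and beta_2 are the first- and second-order momentum coefficients; setting beta_2 momentum aside recovers plain momentum SGD behavior.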

Bug Fixes

  • Fixed a bug when MaxPool and AvgPool layers were incorrectly calculating output values on the edges.

  • Fixed a bug related to deployment licensing on RT targets.

  • Fixed a bug where receptive field calculation algorithm did not take into account dilation factor.

  • Corrected accuracy metrics calculation in “NN_Eval(In3D_OutBBox).vi”.

  • Fixed typos in API VI descriptions and control/indicators.

  • Fixed incorrect receptive field calculation for networks containing upsampling layer(s).

  • Fixed incorrect text in error messages.

V4.0.0 
Features
  • General performance improvements.

  • Added support for ShortCut (Residual) layer. Now ResNet architectures can be trained. 

  • Added support for Concatenation layer.

  • Updated layer creation API to obtain layer's reference at creation.

  • Added API to calculate the network's predictions over a dataset.

  • Added utility VI for Bounding Box format conversion.

  • Updated dataset data-type (cluster) to include file paths array of data samples.

  • Updated dataset data-type (cluster) to include labels as an array of strings.

  • Added possibility to set custom image dimensions (network's input resolution) when creating network topology from configuration file.

  • Added possibility to set custom mini batch size when creating network from configuration file.

  • Added utility VI to split large datasets into smaller portions (e.g. split training dataset into train and validation).

  • Added API to calculate and render a confusion matrix based on the network's predictions.

  • Added API to get detections over a batch of input samples.

  • Added API for mAP (mean Average Precision) evaluation for object detection tasks.

  • Added WarmUp feature into Learning Rate update policy.

  • Added API to get weights (values and references) from the layer.

  • Updated CUDA and CUDNN support to versions CUDA 10.1 and CUDNN 7.5.6.

  • Deprecated some configuration parameters in the layer creation API.

  • Updated examples to comply with the latest version of the toolkit.

  • Updated some API VI icons.

  • Changed data-flow wiring in SVG diagram for ShortCut, Concat layers and updated colors.

  • Deprecated Detection layer.

  • Sped up training and inference on GPU.

  • Added dependency requirements checking functionality during toolkit’s installation.
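As background for the mAP evaluation API: average precision is computed by matching predicted boxes to ground truth via Intersection-over-Union (IoU). A minimal Python sketch using (x_min, y_min, x_max, y_max) boxes (an assumed format for illustration; see the Bounding Box conversion utility for the toolkit's actual formats):

```python
def iou(box_a, box_b):
    # Boxes as (x_min, y_min, x_max, y_max); returns overlap / union area
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)   # intersection area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.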

Bug Fixes

  • Fixed a bug preventing the use of more than one anchor box.

  • Fixed a bug that caused a "missing nng32.dll" error in the 32-bit version of LabVIEW.

  • Fixed a bug causing LabVIEW to crash in LabVIEW 2017 and LabVIEW 2018.

  • Fixed a bug causing LabVIEW to crash when deploying networks with DropOut and/or DropOut3D layers.

  • Fixed a bug that rarely appeared when training networks with LReLU activation on GPU.

  • Other bug fixes.

V3.1.0 
Features
  • Added support for training with batch normalization.

  • Added utility to merge/fuse batch normalization into Conv3D or FC layers.

  • Added API to PULL/PUSH data from/to GPU.

  • Added utility to check GPU driver installation.
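The merge/fuse utility is based on the standard batch-norm folding identity: a trained batch normalization (gamma, beta, running mean and variance) can be absorbed into the preceding layer's weights and bias, removing one layer at inference time. A NumPy sketch for a fully connected layer (illustrative only; the toolkit ships this as a VI):

```python
import numpy as np

def fuse_bn_fc(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold y = gamma * (W @ x + b - mean) / sqrt(var + eps) + beta
    into fused weights/bias so that y = W_f @ x + b_f."""
    scale = gamma / np.sqrt(var + eps)   # per-output-channel scale factor
    W_f = W * scale[:, None]             # W has shape (out_features, in_features)
    b_f = (b - mean) * scale + beta
    return W_f, b_f
```

After fusion, W_f @ x + b_f reproduces the output of the FC layer followed by batch normalization.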

 
Enhancements
  • Fixed issue with asynchronous computations on GPU.

  • Fixed dataset's size element type representation in In3D_Out3D dataset control.

  • Added missing API for dataset datatypes in front panel's function controls.

  • Fixed help links in the API.

V3.0.1 
Features
  • Added support for training networks for object detection.

  • Added VIs for anchor box calculation based on annotated dataset.

  • Added VIs for calculating mAP (Mean Average Precision) for evaluating networks for object detection.

  • Added reference example for object detection.

  • Now, when initializing weights, the number of first layers to initialize can be specified, which is suitable for transfer learning.

  • Added API to Set/Get DVR values (Polymorphic VIs for 1D, 2D, 3D and 4D SGL Arrays).

  • Added new type of dataset memory for object detection.

  • Added UpSample layer.

  • Added support for online deployment license activation.

  • Updated the help file to reflect the changes.

 
Enhancements
  • Fixed GPU memory leakage issue.

V2.0.1 
Features
  • Added support for acceleration on GPUs.

  • Added GPU related examples.

  • Restructured help document.

  • Added instructions for GPU toolkit installation.

  • Added description for new examples.

  • Updated GPU related API descriptions.

 
Enhancements
  • Bug fixes and performance improvements.

 
V1.3.3 
Features
  • Removed Augmentation layer.

  • Added augmentation functionality into Input3D layer.

  • Added training support for datasets with different dimensionality:

    • 1-dimensional input -> 1-dimensional output

    • 3-dimensional input -> 1-dimensional output

    • 3-dimensional input -> 3-dimensional output

  • Added API for checking dataset compliance (i.e. input and output dimensions) with a built network.

  • Conv3D now supports DropOut3D as the input layer.

  • MaxPool and AvgPool layers now support non-square inputs.

  • Added global MaxPool and AvgPool functionality.

  • YOLO example: detected bounding boxes are now provided in a more convenient way for further processing.

  • YOLO example: custom labels can now be provided to be shown on the display.

 
Enhancements
  • Performance Improvements.

  • Improved Error Handling at SoftMax Layer creation.

  • Fixed metrics calculation for FC layer. The number of parameters now includes the biases as well.

  • Improved Error Handling for checking dataset compliance with the built network.

  • Fixed a bug when writing a Region layer into configuration file.

V1.2.0 
Features
  • Added support for deployment on NI’s RT targets.

  • Added API to get/set layer weights.

  • Added API to get layer outputs/activations.

  • Added API to get the next layer.

  • Optimized the weight initialization process.

  • Error Handling: Check input layer type at layer creation.

  • Error Handling: Check input dimensions when creating Conv3D layer.

  • Error Handling: Check input dimensions when creating Pool layer.

  • Error Handling: Check input data dimensions when setting Input3D layer outputs.

 
Enhancements
  • Fixed a bug occurring when a neural network was trained on non-square images.

  • Fixed a bug in get_next_layer.vi.

  • Added a warning in get layer data when an improper layer type is routed to the input.

  • Fixed a bug in dataset indexing when getting a new mini-batch with random sampling enabled.

  • Updated the instructions in the MNIST training example.

V1.1.0 
Features
  • Added new examples - MNIST_Classifier_CNN(Train).vi, MNIST_Classifier(Deploy).vi.

  • Added deployment licensing functionality.

Enhancements
  • Updated help file.

  • Fixed help file location in the installer.

  • Corrected toolkit VIs’ default values.

  • Fixed bug when creating some layers (SoftMax, DropOut, DropOut3D) from configuration file.

  • Fixed errors generated by NN_Destroy.vi when an empty network is provided at the input.

  • The probability set at creation of DropOut layers is now coerced into the (0..1) range, and a warning is generated.

  • Fixed an issue with propagating warnings through the toolkit VIs.

  • Other bug fixes.

V1.0.3
  • Initial Release.
