DEEP LEARNING TOOLKIT FOR LabVIEW

Release Notes

V4.0.0 
Features
  • General performance improvements.

  • Added support for the ShortCut (Residual) layer; ResNet architectures can now be trained.

  • Added support for Concatenation layer.

  • Updated the layer creation API to return the layer's reference at creation.

  • Added API to calculate network predictions over a dataset.

  • Added utility VI for Bounding Box format conversion.

  • Updated dataset data-type (cluster) to include an array of file paths for the data samples.

  • Updated dataset data-type (cluster) to include labels as an array of strings.

  • Added the ability to set custom image dimensions (the network's input resolution) when creating a network topology from a configuration file.

  • Added the ability to set a custom mini-batch size when creating a network from a configuration file.

  • Added utility VI to split large datasets into smaller portions (e.g., split a training dataset into train and validation sets).

  • Added API to calculate and render a confusion matrix based on network predictions.

  • Added API to get detections over a batch of input samples.

  • Added API for mAP (mean Average Precision) evaluation for object detection tasks.

  • Added a WarmUp feature to the learning-rate update policy (see the sketch after this list).

  • Added API to get weights (values and references) from a layer.

  • Updated CUDA and cuDNN support to CUDA 10.1 and cuDNN 7.5.6.

  • Deprecated some configuration parameters in the layer creation API.

  • Updated examples to comply with the latest version of the toolkit.

  • Updated some API VI icons.

  • Changed data-flow wiring in the SVG diagram for ShortCut and Concat layers, and updated colors.

  • Deprecated Detection layer.

  • Sped up training and inference on GPU.

  • Added dependency-requirement checking during toolkit installation.
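
A warm-up policy starts training with a reduced learning rate and ramps it up to the base rate over the first iterations, after which the regular update policy takes over. A minimal Python sketch of the idea; the function name, the polynomial ramp, and the parameter values are illustrative assumptions, not the toolkit's exact implementation:

    def warmup_learning_rate(iteration, base_lr=0.001, warmup_iters=1000, power=4):
        """Ramp the learning rate from ~0 up to base_lr during warm-up."""
        if iteration < warmup_iters:
            # polynomial ramp over the first warmup_iters iterations
            return base_lr * (iteration / warmup_iters) ** power
        return base_lr  # afterwards the configured LR update policy applies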

Enhancements

  • Fixed a bug preventing the use of more than one anchor box.

  • Fixed a bug causing a "missing nng32.dll" error in the 32-bit version of LabVIEW.

  • Fixed a bug causing LabVIEW to crash in LabVIEW 2017 and LabVIEW 2018.

  • Fixed a bug causing LabVIEW to crash when deploying networks with DropOut and/or DropOut3D layers.

  • Fixed a rare bug when training a network with LReLU activation on GPU.

  • Other bug fixes.

V3.1.0 
Features
  • Added support for training with batch normalization.

  • Added utility to merge/fuse batch normalization into a Conv3D or FC layer (see the sketch after this list).

  • Added API to PULL/PUSH data from/to GPU.

  • Added utility to check GPU driver installation.
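
Fusing batch normalization into the preceding Conv3D or FC layer folds the learned scale/shift and the running statistics into that layer's weights and bias, so the separate BN step disappears at inference time. A minimal NumPy sketch of the standard folding; the function and argument names are illustrative, not the toolkit's VI interface:

    import numpy as np

    def fuse_batchnorm(W, b, gamma, beta, mean, var, eps=1e-5):
        """Fold BN parameters into the weights W and bias b of the layer before it."""
        scale = gamma / np.sqrt(var + eps)                      # per-output-channel scale
        W_fused = W * scale.reshape(-1, *([1] * (W.ndim - 1)))  # scale each output filter/row
        b_fused = (b - mean) * scale + beta                     # shift the bias accordingly
        return W_fused, b_fused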

 
Enhancements
  • Fixed issue with asynchronous computations on GPU.

  • Fixed the dataset's size element type representation in the In3D_Out3D dataset control.

  • Added missing API for dataset datatypes in front panel's function controls.

  • Fixed help links in the API.

V3.0.1 
Features
  • Added support for training networks for object detection.

  • Added VIs for anchor box calculation based on an annotated dataset.

  • Added VIs for calculating mAP (mean Average Precision) for evaluating object detection networks (see the IoU sketch after this list).

  • Added reference example for object detection.

  • Now, when initializing weights, the number of first layers can be specified.
    This is suitable for transfer learning.

  • Added API to Set/Get DVR values (Polymorphic VIs for 1D, 2D, 3D and 4D SGL Arrays).

  • Added new type of dataset memory for object detection.

  • Added UpSample layer.

  • Added support for online deployment license activation.

  • Updated help file to reflect the changes.
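
Both the anchor box calculation and the mAP evaluation are built on the intersection-over-union (IoU) overlap between two boxes. A minimal Python sketch, assuming boxes given as (x_min, y_min, x_max, y_max); the function name and box format are illustrative assumptions, not the toolkit's VI signature:

    def iou(box_a, box_b):
        """Intersection-over-union of two axis-aligned boxes (x_min, y_min, x_max, y_max)."""
        ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter)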

 
Enhancements
  • Fixed GPU memory leakage issue.

V2.0.1 
Features
  • Added support for acceleration on GPUs.

  • Added GPU related examples.

  • Restructured help document.

  • Added instructions for GPU toolkit installation.

  • Added description for new examples.

  • Updated GPU related API descriptions.

 
Enhancements
  • Bug fixes and performance improvements.

 
V1.3.3 
Features
  • Removed Augmentation layer.

  • Added augmentation functionality into Input3D layer.

  • Added training support for datasets with different dimensionality:

    • 1-dimensional input -> 1-dimensional output

    • 3-dimensional input -> 1-dimensional output

    • 3-dimensional input -> 3-dimensional output

  • Added API for checking dataset compliance (i.e. input and output dimensions) with a built network.

  • Conv3D now supports DropOut3D as the input layer.

  • MaxPool and AvgPool layers now support non-square inputs.

  • Added global MaxPool and AvgPool functionality.

  • YOLO Example: detected bounding boxes are now provided in a more convenient form for further processing.

  • YOLO Example: custom labels can now be provided to be shown on the display.

 
Enhancements
  • Performance Improvements.

  • Improved Error Handling at SoftMax Layer creation.

  • Fixed metrics calculation for the FC layer; the parameter count now includes biases as well.

  • Improved Error Handling for checking dataset compliance with the built network.

  • Fixed a bug when writing a Region layer into configuration file.

V1.2.0 
Features
  • Added support for deployment on NI’s RT targets.

  • Added API to get/set layer weights.

  • Added API to get layer outputs/activations.

  • Added API to get the next layer.

  • Optimized weight initialization process.

  • Error Handling: Check input layer type at layer creation.

  • Error Handling: Check input dimensions when creating a Conv3D layer.

  • Error Handling: Check input dimensions when creating a Pool layer.

  • Error Handling: Check input data dimensions when setting Input3D layer outputs.

 
Enhancements
  • Fixed a bug occurring when a neural network was trained on non-square images.

  • Fixed a bug in get_next_layer.vi.

  • Added a warning to get layer data when an improper layer type is routed to the input.

  • Fixed a bug in get new minibatch with dataset indexing when set to random sampling.

  • Updated the instructions in the MNIST training example.

V1.1.0 
Features
  • Added new examples: MNIST_Classifier_CNN(Train).vi and MNIST_Classifier(Deploy).vi.

  • Added deployment licensing functionality.

Enhancements
  • Updated help file.

  • Fixed help file location in the installer.

  • Corrected toolkit VIs’ default values.

  • Fixed a bug when creating some layers (SoftMax, DropOut, DropOut3D) from a configuration file.

  • Fixed errors generated by NN_Destroy.vi when an empty network is provided at the input.

  • The probability set at creation of DropOut layers is now coerced into the (0..1) range, and a warning is generated.

  • Fixed an issue with propagating warnings through the toolkit VIs.

  • Other bug fixes.

V1.0.3
  • Initial Release.
