NNtrainer


NNTrainer is a software framework for training neural network models on devices.

Overview

NNTrainer is an open source project. Its aim is to develop a software framework for training neural network models on embedded devices with relatively limited resources. Rather than training all layers of a network from scratch, NNTrainer fine-tunes a neural network model on the device with user data for personalization.

Although NNTrainer runs on devices, it provides full functionality for training models while using limited device resources efficiently. NNTrainer is able to train various machine learning algorithms such as k-Nearest Neighbor (k-NN), neural networks, logistic regression, reinforcement learning algorithms, recurrent networks, and more. We also provide examples for various tasks such as few-shot learning, ResNet, VGG, and product rating, and more will be added. All of these have been tested on Samsung Galaxy smartphones with Android and on PCs (Ubuntu 18.04/20.04).

  • NNTrainer: Towards the on-device learning for personalization, Samsung Software Developer Conference 2021 (Korean)
  • NNTrainer: Personalize neural networks on devices!, Samsung Developer Conference 2021
  • NNTrainer: "On-device learning", Samsung AI Forum 2021

Official Releases

|         | Tizen            | Ubuntu       | Android/NDK Build |
|:--------|:-----------------|:-------------|:------------------|
|         | 6.0 M2 and later | 18.04        | 9/P               |
| arm     | armv7l badge     | Available    | Ready             |
| arm64   | aarch64 badge    | Available    | android badge     |
| x64     | x64 badge        | ubuntu badge | Ready             |
| x86     | x86 badge        | N/A          | N/A               |
| Publish | Tizen Repo       | PPA          |                   |
| API     | C (Official)     | C/C++        | C/C++             |
  • Ready: the CI system ensures buildability and unit testing. Users can easily build and execute, but we do not have an automated release & deployment system for these targets.
  • Available: binary packages are released and deployed automatically and periodically along with CI tests.
  • Daily Release
  • SDK Support: Tizen Studio (6.0 M2)

Maintainer

Reviewers

Components

Supported Layers

This component defines the layers that compose a neural network model. Each layer has its own properties that can be set.

| Keyword | Layer Class Name | Description |
|:--------|:-----------------|:------------|
| conv1d | Conv1DLayer | 1-dimensional convolution layer |
| conv2d | Conv2DLayer | 2-dimensional convolution layer |
| pooling2d | Pooling2DLayer | 2-dimensional pooling layer. Supports average / max / global average / global max pooling |
| flatten | FlattenLayer | Flatten layer |
| fully_connected | FullyConnectedLayer | Fully connected layer |
| input | InputLayer | Input layer. This is not always required. |
| batch_normalization | BatchNormalizationLayer | Batch normalization layer |
| activation | ActivationLayer | Set by layer property |
| addition | AdditionLayer | Add input layers |
| attention | AttentionLayer | Attention layer |
| centroid_knn | CentroidKNN | Centroid k-nearest neighbor layer |
| concat | ConcatLayer | Concatenate input layers |
| multiout | MultiOutLayer | Multi-output layer |
| backbone_nnstreamer | NNStreamerLayer | Encapsulate NNStreamer as a layer |
| backbone_tflite | TfLiteLayer | Encapsulate TFLite as a layer |
| permute | PermuteLayer | Permute layer for transpose |
| preprocess_flip | PreprocessFlipLayer | Preprocess random flip layer |
| preprocess_l2norm | PreprocessL2NormLayer | Preprocess simple L2-norm layer to normalize |
| preprocess_translate | PreprocessTranslateLayer | Preprocess translate layer |
| reshape | ReshapeLayer | Reshape tensor dimension layer |
| split | SplitLayer | Split layer |
| dropout | DropOutLayer | Dropout layer |
| embedding | EmbeddingLayer | Embedding layer |
| rnn | RNNLayer | Recurrent layer |
| gru | GRULayer | Gated recurrent unit layer |
| lstm | LSTMLayer | Long short-term memory layer |
| lstmcell | LSTMCellLayer | Long short-term memory cell layer |
| time_dist | TimeDistLayer | Time distributed layer |
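
As a rough illustration of how these keywords and their properties are used in practice, the sketch below builds an input layer and a fully connected layer through the Tizen Machine Learning Training C API. This is a hedged example: the header name and the exact property strings are assumptions and may differ by platform and version.

```c
#include <nntrainer.h> /* Tizen Machine Learning Training C API; header name may differ by packaging */

/* Sketch: create layers by type and configure them through key=value properties.
 * Error checks are omitted for brevity. */
static void build_layers(ml_train_model_h model) {
    ml_train_layer_h input_layer, fc_layer;

    /* "input" layer; input_shape is channel:height:width */
    ml_train_layer_create(&input_layer, ML_TRAIN_LAYER_TYPE_INPUT);
    ml_train_layer_set_property(input_layer, "input_shape=1:1:62720",
                                "normalization=true", NULL);

    /* "fully_connected" layer; the activation is set as a layer property */
    ml_train_layer_create(&fc_layer, ML_TRAIN_LAYER_TYPE_FC);
    ml_train_layer_set_property(fc_layer, "unit=10", "activation=softmax", NULL);

    ml_train_model_add_layer(model, input_layer);
    ml_train_model_add_layer(model, fc_layer);
}
```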

Supported Optimizers

NNTrainer provides

| Keyword | Optimizer Name | Description |
|:--------|:---------------|:------------|
| sgd  | Stochastic Gradient Descent | - |
| adam | Adaptive Moment Estimation  | - |
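
An optimizer is likewise created by keyword and attached to a model through the same C API. A minimal sketch follows; the property names below, such as learning_rate and the Adam beta values, are common NNTrainer optimizer properties but may differ by version.

```c
#include <nntrainer.h> /* header name may differ by packaging */

/* Sketch: create an Adam optimizer and attach it to an existing model handle. */
static int set_adam_optimizer(ml_train_model_h model) {
    ml_train_optimizer_h optimizer;

    ml_train_optimizer_create(&optimizer, ML_TRAIN_OPTIMIZER_TYPE_ADAM);
    ml_train_optimizer_set_property(optimizer, "learning_rate=0.0001",
                                    "beta1=0.9", "beta2=0.999", "epsilon=1e-7",
                                    NULL);
    return ml_train_model_set_optimizer(model, optimizer);
}
```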

Supported Loss Functions

NNTrainer provides

| Keyword | Class Name | Description |
|:--------|:-----------|:------------|
| cross_sigmoid | CrossEntropySigmoidLossLayer | Cross entropy sigmoid loss layer |
| cross_softmax | CrossEntropySoftmaxLossLayer | Cross entropy softmax loss layer |
| constant_derivative | ConstantDerivativeLossLayer | Constant derivative loss layer |
| mse | MSELossLayer | Mean squared error loss layer |

Supported Activation Functions

NNTrainer provides

| Keyword | Function Name | Description |
|:--------|:--------------|:------------|
| tanh    | tanh function    | set as layer property |
| sigmoid | sigmoid function | set as layer property |
| relu    | relu function    | set as layer property |
| softmax | softmax function | set as layer property |

Tensor

Tensor is responsible for the calculations of a layer. It executes several operations such as addition, division, multiplication, dot product, data averaging, and so on. To accelerate calculation speed, CBLAS (C Basic Linear Algebra Subprograms, CPU) and cuBLAS (CUDA Basic Linear Algebra Subprograms) are implemented for some of the operations on PCs (especially with NVIDIA GPUs). These calculations will be optimized further later. Currently, we support a lazy calculation mode to reduce the overhead of copying tensors during calculations.

| Keyword | Description |
|:--------|:------------|
| 4D Tensor | B, C, H, W |
| Add/sub/mul/div | - |
| sum, average, argmax | - |
| Dot, Transpose | - |
| normalization, standardization | - |
| save, read | - |
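
For reference, a 4D tensor in B, C, H, W order maps onto a flat buffer as in the sketch below. This is a generic illustration of the memory layout, not NNTrainer's internal Tensor API.

```c
#include <stddef.h>

/* Compute the flat-buffer offset of element (b, c, h, w) in a tensor stored
 * in B, C, H, W order with dimensions (B, C, H, W). */
static size_t bchw_index(size_t b, size_t c, size_t h, size_t w,
                         size_t C, size_t H, size_t W) {
    return ((b * C + c) * H + h) * W + w;
}
```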

Others

NNTrainer provides

| Keyword | Feature | Description |
|:--------|:--------|:------------|
| weight_initializer | Weight initialization | Xavier (Normal/Uniform), LeCun (Normal/Uniform), HE (Normal/Uniform) |
| weight_regularizer | Weight decay (L2Norm only) | weight_regularizer_constant & type must be set |
| learning_rate_decay | Learning rate decay | decay step must be set |
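
These features are typically configured as layer properties as well; below is a hedged sketch using the same C API, where the exact value spellings (e.g. l2norm, lecun_normal) are assumptions.

```c
#include <nntrainer.h> /* header name may differ by packaging */

/* Sketch: enable L2-norm weight decay and pick a weight initializer on a layer. */
static int set_regularization(ml_train_layer_h layer) {
    return ml_train_layer_set_property(layer,
                                       "weight_initializer=lecun_normal",
                                       "weight_regularizer=l2norm",
                                       "weight_regularizer_constant=0.0001",
                                       NULL);
}
```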

APIs

Currently, we provide C APIs for Tizen. C++ APIs are also provided for other platforms. Java & C# APIs will be provided soon.
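
A minimal end-to-end sketch of the C API flow is shown below. Dataset setup, data callbacks, and error handling are omitted, and the compile/run property strings such as loss=cross, epochs, and batch_size are assumptions that may differ by version.

```c
#include <nntrainer.h> /* header name may differ by packaging */

/* Sketch: construct a model, add a layer, set an optimizer, compile, and train. */
int train_minimal(void) {
    ml_train_model_h model;
    ml_train_layer_h fc;
    ml_train_optimizer_h opt;

    ml_train_model_construct(&model);

    ml_train_layer_create(&fc, ML_TRAIN_LAYER_TYPE_FC);
    ml_train_layer_set_property(fc, "unit=2", "input_shape=1:1:4",
                                "activation=softmax", NULL);
    ml_train_model_add_layer(model, fc);

    ml_train_optimizer_create(&opt, ML_TRAIN_OPTIMIZER_TYPE_SGD);
    ml_train_optimizer_set_property(opt, "learning_rate=0.01", NULL);
    ml_train_model_set_optimizer(model, opt);

    ml_train_model_compile(model, "loss=cross", NULL);           /* loss chosen by keyword */
    ml_train_model_run(model, "epochs=1", "batch_size=1", NULL); /* dataset omitted */

    ml_train_model_destroy(model);
    return 0;
}
```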

Examples for NNTrainer

A demo application which enables user-defined custom shortcuts on a Galaxy Watch.

An example that trains on the MNIST dataset. It consists of two 2D convolution layers, two 2D pooling layers, a flatten layer, and a fully connected layer.

A reinforcement learning example with the CartPole game, using the deep Q-learning (DQN) algorithm.

Transfer learning examples for image classification using the CIFAR-10 dataset and for OCR. TFLite is used as the feature extractor, and the last layer (fully connected layer) of the network is modified and trained.

An example to train a ResNet-18 network.

An example to train a VGG-16 network.

This application contains a simple embedding-based model that predicts ratings given a user and a product.

An example to demonstrate few-shot learning: SimpleShot.

An example to demonstrate how to create custom layers, optimizers or other supported objects.

A transfer learning example for image classification using the CIFAR-10 dataset. TFLite is used as the feature extractor and compared with k-NN.

A logistic regression example using NNTrainer.

Instructions for installing NNTrainer.

Instructions for preparing NNTrainer for execution.

Open Source License

NNTrainer is an open source project released under the terms of the Apache License, version 2.0.

Contributing

Contributions are welcome! Please see our Contributing Guide for more details.
