TensorFlow 2.0 HelloWorld using Google Colab.

In this article, we use the most popular deep learning framework, TensorFlow, and walk through a basic hello world example. To follow along you do not need to set up a local environment on your machine.

(Image: tensorflow.org)

We are using Google Colab. If you are not aware of what it is, check out my getting-started article on Colab:
Train deep neural network free using google colaboratory.medium.com

Now visit https://colab.research.google.com/ and you will see the Colab welcome screen.

Brief About Colab:

Open Colab; you should already be logged in to your Google (Gmail) account.

Google Colab is available with zero configuration, gives free access to GPUs, and, best of all, notebooks are shareable. Google Colaboratory is a free service for developers to try TensorFlow on CPU and GPU over cloud instances from Google. The service is completely free for improving Python programming skills; developers can log in with their Google account and connect to it. Here developers can try deep learning applications using popular machine learning libraries such as Keras, TensorFlow, PyTorch, OpenCV, and others.

Sign in to Google Colab and create a new notebook for our HelloWorld example.

Go to File → New Notebook (Google sign-in is required).

The new notebook is ready. We want to use TF 2.0.0 for our example, and TensorFlow 2.0.0 has already been released as a production version, so let us install it first. To install TensorFlow 2.0.0, run the following command.

!pip install tensorflow==2.0.0

After a successful installation, we can verify the installed version.

import tensorflow as tf
print(tf.__version__)

Helloworld example:

Now everything is ready and looking promising. We have installed TensorFlow and verified the version too. Let us take a helicopter view and create a hello world example.

To change the runtime: click Runtime → Change Runtime Type → a popup will open; choose the particular runtime and hardware accelerator, such as GPU or TPU.
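
Once you switch to a GPU runtime, you can quickly confirm that TensorFlow actually sees the accelerator. This is just a small sanity-check snippet (not part of the original setup steps):

import tensorflow as tf

# List the physical GPUs visible to TensorFlow; an empty list means the
# notebook is still running on the CPU runtime.
gpus = tf.config.experimental.list_physical_devices('GPU')
print("GPUs available:", gpus)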

There are a lot of changes between TF 1.x and TF 2.0.0. TF 2.0.0 comes with ease of development and needs less coding. TensorFlow 2.0.0 was developed to remove the issues and complexity of previous versions.

In TF 2.0, eager execution is enabled by default.

Eager execution evaluates operations immediately, without building a graph. An operation returns a concrete value directly instead of constructing a computational graph to run later.
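
For instance, here is a tiny illustration of eager execution (a minimal sketch, not from the original article):

import tensorflow as tf

# Operations run immediately and return concrete values; no Session needed.
a = tf.constant(2)
b = tf.constant(3)
c = tf.add(a, b)
print(c)          # tf.Tensor(5, shape=(), dtype=int32)
print(c.numpy())  # 5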

We will use the same hello world code from the TensorFlow 1.x version and observe the output.

#This code snippet is from tensorflow 1.X version
import tensorflow as tf

msg = tf.constant('Hello and welcome to Tensorflow world')

#session
sess = tf.Session()

#print the message
print(sess.run(msg))

In this example, we are using TensorFlow 1.x code to print the message, but Session has been removed in TF 2.0.0, so this code raises an exception:

AttributeError: module 'tensorflow' has no attribute 'Session'

We will use the same code snippet with the Session removed:

import tensorflow as tf

msg = tf.constant('Hello and welcome to Tensorflow world')

#print the message
print(msg)

#print using tf.print()
tf.print(msg)

Here we have two print statements; observe the output of each:

  1. tf.Tensor(b’Hello and welcome to Tensorflow world’, shape=(), dtype=string) 
  2. Hello and welcome to Tensorflow world.
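
As a side note, since eager execution is on you can also pull the raw Python value out of the tensor with .numpy() (a small extra snippet, not part of the original example):

#get the underlying value from the tensor
print(msg.numpy())  # b'Hello and welcome to Tensorflow world'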

That is it for now; we will start exploring the different TF APIs in the next article.

Code: 

Code is available over github you can directly import that in colab and run it.

https://github.com/maheshwarLigade/GoogleColab/blob/master/HelloWorldTF2_0.ipynb

More articles on TensorFlow:

https://medium.com/analytics-vidhya/optimization-techniques-tflite-5f6d9ae676d5

https://medium.com/analytics-vidhya/tensorflow-lite-converter-dl-example-febe804b8673

https://medium.com/techwasti/tensorflow-lite-machine-learning-at-the-edge-26e8421ae661

https://medium.com/techwasti/dynamic-computation-graphs-dcg-with-tensorflow-fold-33638b2d5754

https://medium.com/techwasti/tensorflow-lite-deployment-523eec79c017

Tensorflow Lite model inferencing fast and lean!!

This article talks about how TFLite achieves inference across all the different types of edge devices in a fast and lean way.


We have different kinds of edge devices such as IoT devices, mobile devices, and embedded devices. How does TFLite run inference on them in a seamless and elegant way? To understand this, let us jump in.

What is an interpreter?

As we know, TFLite is a set of tools, and it has two core components:

  1. Converter
  2. Interpreter

The converter helps us convert deep learning models into the TFLite format, and the interpreter makes our life easier when running inference.

The TensorFlow Lite interpreter runs specially optimized models on many different hardware types, including mobile phones, embedded Linux devices, and microcontrollers.

People often refer to the TFLite interpreter and inferencing interchangeably. The term inference refers to the process of executing a TensorFlow Lite model on an edge device in order to make predictions based on input data. To perform inference with a TensorFlow Lite model, you must run it through the interpreter.

The TFLite interpreter is designed to be lean and fast. To achieve this, it uses a static graph ordering and a custom memory allocator to ensure minimal load, initialization, and execution latency.

Step of inferencing:

TensorFlow Lite inference APIs are provided for most common mobile/embedded platforms such as Android, iOS, and Linux, in multiple programming languages. Across all libraries, the TensorFlow Lite API enables you to load models, feed inputs, and retrieve inference outputs.

In general, the TFLite interpreter follows these steps:

  1. Loading a model:- 

The first and mandatory step is to load the .tflite model into memory; it contains the model's execution graph.

2. Transforming data:- 

The model does not understand raw input data. To make raw data compatible with the model, you need to transform it into a model-understandable format. For example, for a computer vision model you need to resize the input image before providing it to the model.

3. Running inference:- 

Now that the model is in memory and the data is in the required format, let us run inference. It involves a few steps such as building the interpreter and allocating tensors.

4. Interpreting output:-

After the third step, we get some output from inference, but the end-user will not understand it directly. Model results are most often probabilities or approximate values. We have to interpret this result into meaningful output.


Example:-

Let us walk through model inferencing using Python:

import numpy as np
import tensorflow as tf

# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Test model on random input data.
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)

interpreter.invoke()

# The function `get_tensor()` returns a copy of the tensor data.
# Use `tensor()` in order to get a pointer to the tensor.
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
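
To round off the last step, interpreting output, here is a minimal sketch for a hypothetical classification model; the labels list below is purely illustrative and would normally come with your own model:

# Hypothetical label list, for illustration only.
labels = ["cat", "dog", "bird"]

# For a classifier, output_data is typically a batch of class scores/probabilities.
scores = output_data[0]
top = int(np.argmax(scores))
print("Predicted:", labels[top], "score:", scores[top])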

Here is the same example in C++; even though the language or the underlying platform changes, the steps remain the same:

// Load the model
std::unique_ptr<tflite::FlatBufferModel> model =
    tflite::FlatBufferModel::BuildFromFile(filename);

// Build the interpreter
tflite::ops::builtin::BuiltinOpResolver resolver;
std::unique_ptr<tflite::Interpreter> interpreter;
tflite::InterpreterBuilder(*model, resolver)(&interpreter);

// Resize input tensors, if desired.
interpreter->AllocateTensors();

float* input = interpreter->typed_input_tensor<float>(0);
// Fill `input`.

interpreter->Invoke();
//output data
float* output = interpreter->typed_output_tensor<float>(0);

Conclusion:- 

In this article, we explored the TFLite interpreter, the steps involved in TFLite inferencing, and how to perform it.

Reference:-

https://www.tensorflow.org/lite/guide/inference

Fast Inference: TFLite GPU Delegate!!

Running inference on edge devices, especially on mobile devices, is very demanding. When you have a really big machine learning model, taking inference with limited resources is a crucial task.

Many edge devices, especially mobile devices, have hardware accelerators such as a GPU. A TensorFlow Lite delegate is useful to optimize our trained model and leverage the benefits of hardware acceleration.

What is Tensorflow Lite Delegate?

A delegate's job, in general, is to delegate or transfer your work to someone else. TensorFlow Lite supports several hardware accelerators.

A TensorFlow Lite delegate is a way to delegate part or all of graph execution to another executor.

Why should you use delegates?

Running inference on compute-heavy deep learning models on edge devices is resource-demanding due to the devices' limited processing, memory, and power. Instead of relying on the device CPU, some devices have hardware accelerators, such as a GPU or DSP (Digital Signal Processor), that allow for better performance and higher energy efficiency.

How does a TFLite delegate work?

How a TFLite delegate works (tensorflow.org)

Let us consider the graph on the left side. It has an input node where we receive input for inference. The input goes through a convolution operation and a mean operation, and the graph uses the output of these two operations to compute a SquaredDifference.

Let us assume we have a hardware accelerator that can perform the conv2d and mean operations very fast and efficiently; the graph above will then look like this:

In this case, we delegate these two operations, conv2d and mean, to a specialized hardware accelerator using a TFLite delegate.

The TFLite GPU delegate will offload these operations to the GPU if one is available.

TFLite allows us to provide delegates for specific operations, in which case the graph is split into multiple subgraphs, where each subgraph is handled by a delegate. Every subgraph that is handled by a delegate is replaced with a node that evaluates the subgraph when it is invoked. Depending on the model, the final graph can end up with one node, meaning all of the graph was delegated, or with many nodes, each handling a subgraph. In general, you do not want multiple subgraphs handled by the delegate, since each time you switch from the delegate to the main graph there is an overhead for passing the results from the subgraph to the main graph.

It’s not always safe to share memory.

How to add a delegate?

  1. Define a kernel node that is responsible for evaluating the delegate subgraph.
  2. Create an instance of TfLiteDelegate, which will register the kernel and claim the nodes that the delegate can execute.

Android:

TensorFlow provides a demo app for Android.

In your application, add the AAR as above, import the org.tensorflow.lite.gpu.GpuDelegate module, and use the addDelegate function to register the GPU delegate with the interpreter:

import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.gpu.GpuDelegate;

// Initialize interpreter with GPU delegate
GpuDelegate delegate = new GpuDelegate();
Interpreter.Options options = (new Interpreter.Options()).addDelegate(delegate);
Interpreter interpreter = new Interpreter(model, options);

// Run inference
while (true) {
  writeToInput(input);
  interpreter.run(input, output);
  readFromOutput(output);
}

// Clean up
delegate.close();

iOS:

Include the GPU delegate header and call the Interpreter::ModifyGraphWithDelegate function to register the GPU delegate to the interpreter:

#import "tensorflow/lite/delegates/gpu/metal_delegate.h"

// Initialize interpreter with GPU delegate
std::unique_ptr<Interpreter> interpreter;
InterpreterBuilder(*model, resolver)(&interpreter);
auto* delegate = NewGpuDelegate(nullptr);  // default config
if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk) return false;

// Run inference
while (true) {
  WriteToInputTensor(interpreter->typed_input_tensor<float>(0));
  if (interpreter->Invoke() != kTfLiteOk) return false;
  ReadFromOutputTensor(interpreter->typed_output_tensor<float>(0));
}

// Clean up
interpreter = nullptr;
DeleteGpuDelegate(delegate);

Note:-

Some operations that are trivial on the CPU may have a high cost for the GPU.

Reference Link:

https://www.tensorflow.org/lite/performance/gpu

For more such stories

Optimization techniques – TFLite!!

One of the most popular optimization techniques is called quantization.


Running a machine learning model and making inferences on mobile or embedded devices comes with certain challenges, such as a limited amount of resources (memory, power, and data storage), so it is crucial to deploy an optimized ML model on edge devices.

It is critical to deploy optimized machine learning models on mobile and embedded devices so that they can run efficiently. There are several optimization techniques, and one of them is quantization. In the last article, we saw how to use the TFLite converter to optimize the model for edge devices without any modification to weights and activation types.


What is Quantization?

Quantization is generally used in mathematics and digital signal processing. Below is the wiki definition.

Quantization, in mathematics and digital signal processing, is the process of mapping input values from a large set (often a continuous set) to output values in a (countable) smaller set, often with a finite number of elements. Rounding and truncation are typical examples of quantization processes.

Quantization refers to the process of reducing the number of bits that represent a number. In the context of deep learning, the dominant numerical format used for research and for deployment has so far been 32-bit floating point, or FP32. Quantization converts FP32 weights and output activations into the nearest 8-bit integers, and sometimes into 4/2/1-bit values as well.
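
To make the idea concrete, here is a simplified affine quantization sketch (an illustration of the general principle, not TFLite's exact scheme):

import numpy as np

# Map float values in [min_val, max_val] onto uint8 and back.
x = np.array([-1.0, -0.2, 0.0, 0.7, 1.0], dtype=np.float32)
min_val, max_val = float(x.min()), float(x.max())
scale = (max_val - min_val) / 255.0

q = np.round((x - min_val) / scale).astype(np.uint8)   # quantize to 8 bits
x_hat = q.astype(np.float32) * scale + min_val         # dequantize

print(q)      # integers in [0, 255]
print(x_hat)  # close to x, within quantization error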

Quantization optimizes the model by quantizing the weights and activation types. TFLite uses quantization to speed up inference on edge devices; the TFLite converter is the answer to whether we can run a deep learning model with lower precision. Now that you know what quantization is, let us dive deeper.

Quantization dramatically reduces both the memory requirement and computational cost of using neural networks.

Quantizing a deep learning model uses techniques that allow reduced-precision representations of weights and, optionally, activations for both storage and computation.

TFLite provides several levels of support for quantization:

  1. Post-training quantization
  2. Quantization aware training.

Below is a table that shows the benefits of model quantization for some CNN models. 

Benefits of model quantization for select CNN models. tensorflow.org

Post-training quantization:

As the name implies, this is a post-training technique, applied after your model is trained. Post-training quantization is a technique used to quantize weights and activation types. It can reduce the model size and also improve CPU and hardware-accelerator latency. There are different optimization options such as weight quantization, full integer quantization, and so on; we can choose based on our requirements.

TensorFlow.org provides a decision tree that can help us make this decision:

tensorflow.org

Weight Quantization:

The simplest post-training quantization quantizes only the weights, from floating point to 8-bit precision. This option is available in the TFLite converter. At inference time, weights are converted from 8-bit precision back to floating point and computed using floating-point kernels. This conversion is done once and cached to reduce latency. If you want to improve latency further, use hybrid operators.

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
tflite_quant_model = converter.convert()

At the time of conversion, set the optimizations flag to optimize for model size.

This optimization provides latencies close to fully fixed-point inference; however, the outputs are still stored using floating point.

Full integer quantization:

We can get further latency improvements, reductions in peak memory usage, and access to integer-only hardware accelerators by making sure all model math is quantized. For full integer quantization, you need to measure the dynamic range of activations and inputs by supplying a representative dataset, created with an input data generator.

import tensorflow as tf

def representative_dataset_gen():
  for _ in range(num_calibration_steps):
    yield [input]

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
tflite_quant_model = converter.convert()

The result of full integer quantization should be fully quantized; any ops that do not have a quantized implementation are left in floating point. Full integer-only execution gives a model with even lower latency, a smaller size, and compatibility with integer-only accelerators.

You can enforce full integer quantization for all ops and use integer input and output by adding the following lines before you convert. The converter throws an error if it encounters an operation it cannot currently quantize.

converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

Float 16 Quantization example:

Float16 is the IEEE standard for 16-bit floating-point numbers. We can reduce the size of a floating-point model by quantizing the weights to float16. This technique reduces the model size by half with minimal loss of accuracy compared to other techniques. A float16-quantized model will "dequantize" the weight values to float32 when run on the CPU.

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.lite.constants.FLOAT16]
tflite_quant_model = converter.convert()

We have now seen the different post-training quantization techniques. Float16 quantization may not be a good choice if you need maximum performance; full integer quantization to fixed-point math would be better in that case. Weight quantization is the most basic form. Since weights are quantized after training, there can be an accuracy loss, particularly for smaller networks.

Tensorflow Lite model accuracy

Quantization aware Training:

There can be an accuracy loss with post-training quantization; to avoid this, and if you do not want to compromise model accuracy, use quantization-aware training. As we have learned, post-training quantization happens after the model has been trained; quantization-aware training overcomes its drawbacks. This technique ensures that the forward pass matches precision for both training and inference. In this technique, TensorFlow created a flow wherein, while constructing the graph, you can insert fake quantization nodes in each layer to simulate the effect of quantization in the forward and backward passes and to learn ranges during training, for each layer separately.

There are two aspects of this technique

  • Operator fusion at inference time is accurately modeled at training time.
  • Quantization effects at inference are modeled at training time.
tf.quantization.quantize(
    input,
    min_range,
    max_range,
    T,
    mode='MIN_COMBINED',
    round_mode='HALF_AWAY_FROM_ZERO',
    name=None
)

out[i] = (in[i] - min_range) * range(T) / (max_range - min_range)
if T == qint8: out[i] -= (range(T) + 1) / 2.0

num_discrete_values = 1 << (# of bits in T)
range_adjust = num_discrete_values / (num_discrete_values - 1)
range = (range_max - range_min) * range_adjust
range_scale = num_discrete_values / range
quantized = round(input * range_scale) - round(range_min * range_scale) + numeric_limits<T>::min()
quantized = max(quantized, numeric_limits<T>::min())
quantized = min(quantized, numeric_limits<T>::max())
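
As a small illustration of the fake-node idea, TensorFlow exposes ops that simulate quantization in the forward pass; the snippet below is only a sketch, and the min/max/num_bits values are arbitrary:

import tensorflow as tf

# Simulate 8-bit quantization of a tensor: values are clipped to [min, max]
# and snapped to the nearest representable level.
x = tf.constant([-0.3, 0.1, 2.7, 7.5])
y = tf.quantization.fake_quant_with_min_max_args(x, min=-1.0, max=6.0, num_bits=8)
print(y)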

Check the complete example here:

References:-

https://www.tensorflow.org/lite/convert/quantization

https://github.com/tensorflow/tensorflow/tree/r1.13/tensorflow/contrib/quantize

Tensorflow Lite Converter Example!!

Let us deploy a deep learning TensorFlow model on edge devices using TF Lite.

There are three different ways we can use the TensorFlow Lite converter:

  1. Convert a TF SavedModel to TF Lite
  2. Convert a Keras prebuilt model to TF Lite
  3. Convert a concrete function to TF Lite

1. Convert a TF SavedModel to TF Lite:-

Let us create a simple model using TensorFlow and save it in the SavedModel format, using the TensorFlow API. In this example, we show how to convert a SavedModel into a TF Lite FlatBuffer.

# We will train a basic TF model and save it as a SavedModel.
import pathlib
import tensorflow as tf

# Construct a basic TF model.
root = tf.train.Checkpoint()
root.v1 = tf.Variable(3.)
root.v2 = tf.Variable(2.)
root.f = tf.function(lambda x: root.v1 * root.v2 * x)

# Save the model into a temp directory.
export_dir = "/tmp/test_saved_model"
input_data = tf.constant(1., shape=[1, 1])
to_save = root.f.get_concrete_function(input_data)
tf.saved_model.save(root, export_dir, to_save)

# Convert the model into TF Lite.
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
tflite_model = converter.convert()

# Save the TF Lite model.
tflite_model_file = pathlib.Path("/tmp/save_model_tflite.tflite")
tflite_model_file.write_bytes(tflite_model)

2. Convert Keras PreBuilt Model to TF Lite:-

In this section, we explore how to convert a prebuilt Keras model into a TF Lite model. We will convert a pre-trained tf.keras MobileNet model to TensorFlow Lite and load it with the interpreter.

import pathlib

import numpy as np
import tensorflow as tf

# Load the MobileNet Keras model.
# We create a tf.keras model by loading a model pretrained on the imagenet dataset.
model = tf.keras.applications.MobileNetV2(
    weights="imagenet", input_shape=(224, 224, 3))

# Since this is a pretrained Keras model, there is no need to use SavedModel;
# we pass the model directly to TFLiteConverter.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# If you want to save the TF Lite model, use the steps below; otherwise skip them.
tflite_model_file = pathlib.Path("/tmp/pretrainedmodel.tflite")
tflite_model_file.write_bytes(tflite_model)

# Load the TFLite model using the interpreter and allocate tensors.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

3. Concrete Function to TF Lite:- 

In order to convert TensorFlow 2.0 models to TensorFlow Lite, the model needs to be exported as a concrete function. If you have developed your model using TF 2.0, this is for you. We will convert a concrete function into a TF Lite model. In this section we again use the Keras MobileNet model.

import tensorflow as tf
# load mobilenet model of keras 
model = tf.keras.applications.MobileNetV2(weights="imagenet", input_shape=(224, 224, 3))

We will use tf.function to create a callable TensorFlow graph of our model.

import pathlib

# Get a callable graph from the model.
run_model = tf.function(lambda x: model(x))
# Get the concrete function from the callable graph.
concrete_func = run_model.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))

# Convert the concrete function into a TF Lite model using TFLiteConverter.
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
tflite_model = converter.convert()

# Save the model.
tflite_model_file = pathlib.Path("/tmp/concretefunc_model.tflite")
tflite_model_file.write_bytes(tflite_model)

CLI TF Lite Converter:-

Apart from the Python API, we can also use the command-line interface to convert a model, for example to convert a SavedModel to a TFLite model.

The TensorFlow Lite Converter has a command-line tool tflite_convert which supports basic models.

#!/usr/bin/env bash
tflite_convert \
  --saved_model_dir=/tmp/mobilenet_saved_model \
  --output_file=/tmp/mobilenet.tflite

--output_file. Type: string. Specifies the full path of the output file.

--saved_model_dir. Type: string. Specifies the full path to the directory containing the SavedModel generated in 1.X or 2.X.

--keras_model_file. Type: string. Specifies the full path of the HDF5 file containing the tf.keras model generated in 1.X or 2.X.

#!/usr/bin/env bash
tflite_convert \
  --keras_model_file=model.h5 \
  --output_file=/tmp/mobilenet_keras.tflite

The converter supports SavedModel directories, tf.keras models, and concrete functions.

For now, we will end with these options. In the next article, we will explore converting RNN models and quantized models.

Tensorflow Lite Model Deployment!

Here you go: Introduction Story of Tensorflow Lite

In the article above, we introduced TensorFlow Lite: what it is, what its purpose is, and what it is not.

In this article, we will dig deeper into the steps involved in TensorFlow Lite model deployment.

The diagram above shows the deployment flow of a TensorFlow Lite model on edge devices.

Let us go through the steps from the top of the diagram.

At a very high level, the diagram breaks down into two pieces of functionality: the first step is the converter, and the second is the interpreter, which runs inference on the model.

  1. Train Model:- 

Train your model using TensorFlow. You can train it with a high-level API such as Keras or with the low-level API, or you may already have a legacy TensorFlow model. You can develop your own model or use a built-in TensorFlow model.

If you have a model from another framework, you can convert it into TensorFlow using ONNX and use it. Once the model is ready, you have to save it. We can save the model in different formats depending on the API, such as HDF5, SavedModel, or FrozenGraphDef.
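
For example, a tf.keras model can be written out in either format before conversion (a minimal sketch; the tiny model here is just a placeholder):

import tensorflow as tf

# Placeholder model, for illustration only.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Save in HDF5 format.
model.save("/tmp/my_model.h5")

# Save in SavedModel format.
tf.saved_model.save(model, "/tmp/my_saved_model")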

2. Convert Model:- 

In this step, we use the TensorFlow Lite converter to convert the TensorFlow model into the TensorFlow Lite FlatBuffer format.

FlatBuffers is a data serialization format that is optimized for performance; the TensorFlow Lite FlatBuffer is also known as the TF Lite model. The TensorFlow Lite converter takes a TensorFlow model and generates a TensorFlow Lite FlatBuffer file (.tflite). The converter supports SavedModel directories, tf.keras models, and concrete functions. After this step, our TFLite model is ready.

You can convert a model using the Python API or the command-line tool; the CLI supports only very basic models.

Python API example:- 

# export_dir is the path where your TF model is saved.
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
tflite_model = converter.convert()

CLI example 

bazel run //tensorflow/lite/python:tflite_convert -- \
  --saved_model_dir=/tmp/mobilenet_saved_model \
  --output_file=/tmp/mobilenet.tflite

3. Deploy Model:-

Now our model is ready and we have a '.tflite' file. We can deploy it to IoT devices, embedded devices, or mobile devices.

4. Run Inference:-

To perform inference with a TensorFlow Lite model, you must run it through an interpreter; the model is served on the device by the TensorFlow Lite interpreter. The interpreter provides a wide range of interfaces and supports a wide range of devices, and it is designed to be lean and fast. We can run models locally on these devices using the interpreter. Once the model is loaded onto a device such as an embedded device, an Android device, or an iOS device, you can take inference.

Inference generally goes through the steps below.

a. Loading a model:- You must load the .tflite model file into memory.

b. Transforming data:- Raw input data generally does not match the input format expected by the model; you need to transform the data.

c. Running inference:- Execute inference over transformed data.

d. Interpreting output:- When you receive results from the model inference, you must interpret the tensors in a meaningful way that’s useful in your application.
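
Putting steps a to d together, here is a minimal sketch for a hypothetical image classifier; the model path, image file, and preprocessing are assumptions for illustration:

import numpy as np
import tensorflow as tf
from PIL import Image

# a. Load the .tflite model into memory (hypothetical path).
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# b. Transform raw input into the shape/dtype the model expects.
height, width = input_details[0]['shape'][1], input_details[0]['shape'][2]
image = Image.open("cat.jpg").resize((width, height))  # hypothetical image
input_data = np.expand_dims(np.array(image, dtype=np.float32) / 255.0, axis=0)

# c. Run inference over the transformed data.
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()

# d. Interpret the output, e.g. pick the highest-scoring class.
output = interpreter.get_tensor(output_details[0]['index'])[0]
print("Top class index:", int(np.argmax(output)))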

PreTrained Models

https://www.tensorflow.org/lite/models

https://www.tensorflow.org/lite/guide/roadmap

Some Examples

https://www.tensorflow.org/lite/examples

Tensorflow Lite- machine learning at the edge!!

TensorFlow created a buzz in AI and deep learning forums, and it is the most popular framework in the deep learning community.

tensorflow.org

Introduction:- 

As we know, training deep learning models needs compute power, and this is the age of computation. We are now moving toward edge computing alongside cloud computing. Edge computing is the need of today's world because of innovation in the IoT domain, and because compliance and data protection laws are pushing companies to do computation at the edge; computing the model in the cloud and sending the result back to the client device is now the legacy approach.

As TensorFlow is the most popular deep-learning framework, it comes with a lightweight version for edge computation. Nowadays mobile devices have good processing power, but other edge devices have much less.

Run a deep learning model in less than 100 KB.

The official definition of Tensorflow Lite:

“TensorFlow Lite is an open-source deep learning framework for on-device inference.”

Deploy machine learning models on mobile and IoT devices.

TensorFlow Lite is a set of tools to help developers run TensorFlow models on mobile, embedded, and IoT devices. It enables on-device machine learning inference with low latency and a small binary size.

TensorFlow Lite brings machine learning to edge devices.

Edge computing means computing locally, on the device.

Deep Dive:-

This diagram illustrates the standard flow for deploying the model using TensorFlow Lite.

Deploying model using TensorFlow Lite at the edge devices

TensorFlow Lite is not a separate deep learning framework; it provides a set of tools that help developers run TensorFlow models (or other deep learning models) on mobile, embedded, and IoT devices.

Steps:-

  1. Choose a model or develop your own model.
  2. Convert the model.
  3. Deploy the model.
  4. Run inference with the model.
  5. Optimize the model and repeat the above steps.

TensorFlow Lite consists of two main components:

  1. Converter:- The TensorFlow Lite converter converts a TensorFlow model into the TensorFlow Lite model format.
  2. Interpreter:- The interpreter supports a set of core operators that are optimized for on-device applications and has a small binary size. It is basically for running inference with the model.

Why Edge Computing?

Edge computing is really best used alongside cloud computing. Cloud computing has become hugely popular, but there are certain requirements where edge computation beats it. Why is edge computation important, and what advantages do you get from it?

  1. Privacy:- No data needs to leave the device; everything stays local.
  2. Latency:- There’s no back and forth request to a server.
  3. Connectivity:- No internet connection is required.
  4. Power Consumption:- Connecting to a network requires power.

TensorFlow Lite is the one-stop solution to convert your deep learning model, deploy it efficiently, and enjoy inferencing. TensorFlow Lite supports both mobile devices and microcontrollers.

Dynamic Computation Graphs(DCG) with Tensorflow Fold!!


Google has introduced a new tool under the TensorFlow umbrella: TensorFlow Fold.

If you are familiar with deep learning libraries such as TensorFlow, Chainer, Theano, Caffe, and many more, you know each has a unique approach to building graph-based computation. But somehow almost all machine learning/deep learning frameworks operate on static computation graphs and cannot handle dynamic computation graphs (PyTorch, DyNet, and Chainer are exceptions).

TensorFlow Fold is based on the paper Deep Learning with Dynamic Computation Graphs. What an idea!

Ref from research.googleblog.com

Why TensorFlow Fold?

We already have one beautiful tool suite, TensorFlow, which addresses some cool problems, but it has limitations in terms of dynamic graph computation: TensorFlow uses static graph computation. Batch processing of dynamic graphs is a very common technique for a variety of applications, such as computer vision and natural language processing. However, due to the variety of types and shapes between distinct data items, batch processing with a static graph over such data sets is almost impossible with the current TensorFlow framework.

TensorFlow Fold is not another deep-learning framework. It is an extension to TensorFlow that provides a TensorFlow implementation of the dynamic batching algorithm. Dynamic batching is an execution strategy for dynamic computation graphs.

Computation over data-flow graphs is a popular approach for deep learning with neural networks, especially in the fields of cheminformatics and natural language understanding. In most frameworks, such as TensorFlow, the graphs are static, which means batch processing is only available for data items with the same type and shape. However, in most real data sets, each item has its own type or shape, which is a problem because the neural network cannot batch such data with a static graph.

To overcome the above problem, TensorFlow Fold was introduced.

Getting started!!!

Fold runs under Linux; Python 2.7 and Python 3.3+ are recommended. Install using either virtualenv or pip.

Please note that Fold requires TensorFlow 1.0; it is not compatible with earlier versions due to breaking API changes.

First install Python, pip, and Virtualenv:

sudo apt-get install python-pip python-dev python-virtualenv
#create virtualenv
virtualenv foo             # for Python 2.7
virtualenv -p python3 foo  # for Python 3.3+
#Activate environment
source ./foo/bin/activate      # if using bash
source ./foo/bin/activate.csh  # if using csh
#  Install the pip package for TensorFlow. For Python 2.7 CPU-only, this will be:
pip install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc0-cp27-none-linux_x86_64.whl
#For Python 3.3+ and/or GPU, see here for the full list of available TF binaries.
#Check that TensorFlow can load:
python -c 'import tensorflow'
#  Now install tensorflow fold
#Install the pip package for Fold. For Python 2.7, this will be:
pip install https://storage.googleapis.com/tensorflow_fold/tensorflow_fold-0.0.1-cp27-none-linux_x86_64.whl
#for python 3.3
pip install https://storage.googleapis.com/tensorflow_fold/tensorflow_fold-0.0.1-py3-none-linux_x86_64.whl
#Test is installed successfully or not
python -c 'import tensorflow_fold'

If everything goes well, then test the example below.

Next steps:

  1. Quickstart notebook
  2. Tensorflow fold Documentation
  3. TensorFlow: Concepts, Tools, and Techniques

There are other libraries and frameworks that also support dynamic graph computation. TensorFlow Fold is TensorFlow-based and has its own approach to tackling this problem.

In this paper, Google introduced a new algorithm called 'dynamic batching' and developed a TensorFlow-based library called TensorFlow Fold, which addresses the DCG problem both theoretically and empirically. Through experimental implementations, they showed that their method is effective and more efficient and concise than previous work.

The paper is here for more details.

The moral of the story is that TensorFlow no longer supports only static computation graphs!

Let us apply these thoughts; let me know your experience.


If you enjoyed this article, please don’t forget to Clap.

For more stories.

Let's connect on Stack Overflow, LinkedIn, Facebook & Twitter.