TensorFlow in Android
How smart is my Android app?
In recent years, the answer to this question has changed drastically, and Google has had a big hand in that change. One of its most significant and noteworthy contributions is TensorFlow. It is why you can talk to the Google app through the noise of a city sidewalk, read a sign in Russian using Google Translate, or instantly find pictures of your Labradoodle in Google Photos. Google uses TensorFlow for everything from speech recognition in the Google app, to Smart Reply in Inbox, to search in Google Photos. It lets them build and train neural nets up to five times faster than their first-generation systems, so they can improve their products much more quickly.
TensorFlow is an open-source software library first released by Google in November 2015. In Android it is used to implement machine learning, and the library is aimed broadly at machine intelligence. The core of TensorFlow is built in C++, but programmers can write TensorFlow software in either C++ or Python. The library was initially designed for internal use, but Google later put it to work in its live products. It is one of the best alternatives to the Google Cloud Vision API, and it does not require an internet connection; we can even work offline with TensorFlow. It offers an easy and fast way to classify and detect objects in an image directly from a mobile device's camera.
Essential Parts for Building TensorFlow in Android
- To build TensorFlow into an Android app, we have to use JNI (Java Native Interface) to call C++ functions such as loadModel, getPredictions, etc.
- We also need a jar file (the Java API) and a .so file (the compiled C++ library).
- To classify images, we must have a pre-trained model file and a label file.
An app powered by TensorFlow gains the ability to recognise objects in an image. The app accomplishes this using a bundled machine-learning model running in TensorFlow on the device (no network calls to a backend service). Because the model is trained on millions of images, it can look at the frames the camera feeds it and classify each object into its best guess (from the 1,000 object classifications it knows). Along with its best guess, it shows a confidence score to indicate how sure it is.
Preparing the TF Model
First, we create a simple model and save its computation graph as a serialized GraphDef file. After training the model, we save the values of its variables into a checkpoint file. We then combine these two files into a single optimized, standalone file, which is all we need to use inside the Android app.
The two files to be included in our Android app are:
tensorflow_inception_graph.pb - This is our trained machine-learning model, and where the magic comes from. It is a pre-built TensorFlow graph describing the exact operations needed to compute a classification from input image data. The graph is serialized and encoded into binary with Google’s Protocol Buffers so that it can be deserialized across different platforms.
imagenet_comp_graph_label_strings.txt - This contains the 1,000 classifications that the model’s output corresponds to (e.g. “vending machine”, “water bottle”, “coffee mug”). These classifications are defined by the ImageNet Large Scale Visual Recognition Challenge, which the model was built to compete in.
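To illustrate how the two files fit together: the model emits one score per line of the label file, and the app simply reports the highest-scoring entry together with its confidence. This is a minimal plain-Java sketch of that mapping; the class name, helper method, and hard-coded scores are illustrative, not from the article.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

// Sketch: map the model's output scores onto the labels from
// imagenet_comp_graph_label_strings.txt and pick the best guess.
public class BestGuess {
    // Returns "label (confidence)" for the highest-scoring class.
    static String bestGuess(List<String> labels, float[] scores) {
        int best = 0;
        for (int i = 1; i < scores.length; i++) {
            if (scores[i] > scores[best]) best = i;
        }
        return String.format(Locale.US, "%s (%.3f)", labels.get(best), scores[best]);
    }

    public static void main(String[] args) {
        // The real label file has 1,000 entries; three stand in here.
        List<String> labels = Arrays.asList("vending machine", "water bottle", "coffee mug");
        float[] scores = {0.021f, 0.115f, 0.864f};  // hypothetical model output
        System.out.println(bestGuess(labels, scores));  // prints "coffee mug (0.864)"
    }
}
```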
Implementing TensorFlow in Android
Add one line to build.gradle, and Gradle takes care of the rest: under the hood, a library archive holding the TensorFlow shared object is downloaded from JCenter and linked against the application automatically.
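That single line looks like the following (the artifact coordinates are those published for TensorFlow’s Android support at the time; check the current name and pin a version for real projects):

```groovy
// app/build.gradle — pulls the TensorFlow Android archive (Java API + native .so)
dependencies {
    compile 'org.tensorflow:tensorflow-android:+'
}
```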
Add your model to the project
We need the pre-trained model and the label file that together perform object detection on a given image. You can download the model and unzip it to get retrained_labels.txt (the labels for objects) and rounded_graph.pb (the pre-trained model). Training is not handled on the app side; the model can be trained on a server or on a standalone system.
Put retrained_labels.txt and rounded_graph.pb into the android/assets directory. First, create a TensorFlow inference interface, opening the model file from the assets in the APK. Then set up the input feed using the feed API; on mobile, the input feed tends to be retrieved from various sensors such as the camera or accelerometer. Next, run the inference, and finally fetch the results using the fetch method. Note that these calls are all blocking and can take a long time, so you should run them on a worker thread rather than the main thread.

TensorFlow Classify opens your camera and classifies whatever objects you show it. The really mind-blowing thing is that this works totally offline; you do not need an internet connection. It prints out the object classification along with a confidence level (1.000 for perfect confidence, 0.000 for zero confidence). When your object fills most of the image, it often does quite well.
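The worker-thread advice above can be sketched in plain Java as follows. Here classify() is a hypothetical stand-in for the blocking feed, run, and fetch sequence; in a real Android app you would run the TensorFlow graph inside it and deliver the result back to the UI thread (e.g. via runOnUiThread) instead of blocking on the Future.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ClassifyOffMainThread {
    // Hypothetical stand-in for the blocking feed -> run -> fetch sequence.
    static String classify(float[] pixels) {
        return "coffee mug (0.864)";  // a real app would run the TensorFlow graph here
    }

    public static void main(String[] args) throws Exception {
        ExecutorService worker = Executors.newSingleThreadExecutor();
        float[] cameraFrame = new float[224 * 224 * 3];  // e.g. one 224x224 RGB frame

        // Submit the blocking call to the worker so the main/UI thread stays responsive.
        Future<String> result = worker.submit(() -> classify(cameraFrame));

        System.out.println(result.get());  // on Android, post this back to the UI thread
        worker.shutdown();
    }
}
```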