# TensorFlow Lite Android Image Classifier App Example

This tutorial provides a simple Android mobile application to classify images
using the Android device camera. In this tutorial, you will download the demo
application from the TensorFlow repository, build it on your computer, and
install it on your Android device. You will also learn how to customize the
application to suit your requirements.

### Prerequisites

*   Android Studio 3.2 (installed on a Linux, Mac or Windows machine)

*   Android device

*   USB cable (to connect the Android device to your computer)

### Step 1. Clone the TensorFlow source code

Clone the GitHub repository to your computer to get the demo application.

```
git clone https://github.com/tensorflow/tensorflow
```

Open the TensorFlow source code in Android Studio. To do this, open Android
Studio and select `Open an existing project`, setting the folder to
`tensorflow/lite/examples/android`.

<img src="images/classifydemo_img1.png" />

This folder contains the demo application for image classification, object
detection, and speech hotword detection.

### Step 2. Build the Android Studio project

Select `Build -> Make Project` and check that the project builds successfully.
You will need the Android SDK configured in the settings, with at least SDK
version 23. The Gradle file will prompt you to download any missing libraries.

<img src="images/classifydemo_img4.png" style="width: 40%" />

<img src="images/classifydemo_img2.png" style="width: 60%" />

#### TensorFlow Lite AAR from JCenter

Note that the `build.gradle` is configured to use TensorFlow Lite's nightly
build.

If you see a build error related to compatibility with TensorFlow Lite's Java
API (for example, method X is undefined for type Interpreter), there has likely
been a backwards-incompatible change to the API. You will need to pull new app
code that is compatible with the nightly build by running `git pull`.

### Step 3. Install and run the app

Connect the Android device to the computer and be sure to approve any ADB
permission prompts that appear on your phone. Select `Run -> Run app`. In the
deployment target dialog, select the connected device on which the app will be
installed. This will install the app on the device.

<img src="images/classifydemo_img5.png" style="width: 60%" />

<img src="images/classifydemo_img6.png" style="width: 70%" />

<img src="images/classifydemo_img7.png" style="width: 40%" />

<img src="images/classifydemo_img8.png" style="width: 80%" />

To test the app, open the app called `TFL Classify` on your device. When you run
the app the first time, the app will request permission to access the camera.
Re-installing the app may require you to uninstall the previous installation.
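
On Android 6.0 and later, the camera permission is requested at runtime rather
than only declared in the manifest. The snippet below is a minimal sketch of
such a request, assuming an `Activity` context and the AndroidX compatibility
helpers; it is illustrative and not the demo's exact code.

```
// Minimal sketch of a runtime camera-permission request (illustrative, not the
// demo's exact code). Assumes the AndroidX core libraries are on the classpath.
import android.Manifest;
import android.app.Activity;
import android.content.pm.PackageManager;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

class CameraPermissionHelper {
  // Arbitrary request code used to match the permission result callback.
  private static final int CAMERA_REQUEST_CODE = 1;

  /** Requests the CAMERA permission if it has not already been granted. */
  static void requestCameraPermissionIfNeeded(Activity activity) {
    if (ContextCompat.checkSelfPermission(activity, Manifest.permission.CAMERA)
        != PackageManager.PERMISSION_GRANTED) {
      ActivityCompat.requestPermissions(
          activity, new String[] {Manifest.permission.CAMERA}, CAMERA_REQUEST_CODE);
    }
  }
}
```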

## Understanding Android App Code

### Get camera input

This mobile application gets the camera input using the functions defined in
`CameraActivity.java`, located at
`tensorflow/tensorflow/lite/examples/android/app/src/main/java/org/tensorflow/demo/CameraActivity.java`.
This file depends on `AndroidManifest.xml` in the folder
`tensorflow/tensorflow/lite/examples/android/app/src/main` to set the camera
orientation.
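
The amount by which a camera frame must be rotated depends on both the device's
current display rotation and the camera sensor's mounting orientation. The
following is a minimal sketch of that calculation for a back-facing camera,
assuming the sensor orientation has already been read from the camera
characteristics; it is illustrative rather than the demo's exact code.

```
// Minimal sketch (illustrative, not the demo's exact code): compute how many
// degrees a back-camera frame must be rotated so that it appears upright.
import android.view.Surface;

class OrientationUtil {
  /** Converts a Surface.ROTATION_* constant into degrees. */
  static int displayRotationToDegrees(int displayRotation) {
    switch (displayRotation) {
      case Surface.ROTATION_90:
        return 90;
      case Surface.ROTATION_180:
        return 180;
      case Surface.ROTATION_270:
        return 270;
      default:
        return 0;
    }
  }

  /** Rotation to apply to a frame, given the sensor orientation reported by the camera. */
  static int getRotationCompensation(int sensorOrientationDegrees, int displayRotation) {
    return (sensorOrientationDegrees - displayRotationToDegrees(displayRotation) + 360) % 360;
  }
}
```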

### Pre-process bitmap image

The mobile application code that pre-processes the images and runs inference is
in
`tensorflow/tensorflow/lite/examples/android/app/src/main/java/org/tensorflow/demo/TFLiteImageClassifier.java`.
Here, we take the input camera bitmap image and convert it to a `ByteBuffer`
for efficient processing. We pre-allocate the memory for the `ByteBuffer`
object based on the image dimensions because `ByteBuffer` objects can't infer
the object shape.

```
c.imgData =
    ByteBuffer.allocateDirect(
        DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE);
c.imgData.order(ByteOrder.nativeOrder());
```

While running the application, we pre-process the incoming bitmap images from
the camera into a `ByteBuffer`. Since this model is quantized 8-bit, we will put
a single byte for each channel. `imgData` will contain an encoded `Color` for
each pixel in ARGB format, so we mask the least significant 8 bits to get blue,
the next 8 bits to get green, and the next 8 bits to get red. Since the image is
opaque, alpha can be ignored.

```
 imgData.rewind();
 bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
 // Pack the RGB channel bytes of each pixel into the ByteBuffer.
 int pixel = 0;
 for (int i = 0; i < DIM_IMG_SIZE_X; ++i) {
   for (int j = 0; j < DIM_IMG_SIZE_Y; ++j) {
     final int val = intValues[pixel++];
     imgData.put((byte) ((val >> 16) & 0xFF));
     imgData.put((byte) ((val >> 8) & 0xFF));
     imgData.put((byte) (val & 0xFF));
   }
 }
```
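
For comparison, if the model were a floating-point (non-quantized) model, each
channel would typically be written with `putFloat` after normalization instead
of as a raw byte. The sketch below is a hypothetical variant of the loop above;
the mean and standard-deviation constants are illustrative, not values taken
from the demo.

```
// Hypothetical float-model variant of the loop above (not the demo's code).
// IMAGE_MEAN and IMAGE_STD are illustrative normalization constants.
final float IMAGE_MEAN = 128.0f;
final float IMAGE_STD = 128.0f;
int pixel = 0;
for (int i = 0; i < DIM_IMG_SIZE_X; ++i) {
  for (int j = 0; j < DIM_IMG_SIZE_Y; ++j) {
    final int val = intValues[pixel++];
    imgData.putFloat((((val >> 16) & 0xFF) - IMAGE_MEAN) / IMAGE_STD); // red
    imgData.putFloat((((val >> 8) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);  // green
    imgData.putFloat(((val & 0xFF) - IMAGE_MEAN) / IMAGE_STD);         // blue
  }
}
```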

### Create interpreter

To create the interpreter, we need to load the model file. On Android devices,
we recommend pre-loading and memory-mapping the model file as shown below to
offer faster load times and reduce the dirty pages in memory. If your model file
is compressed, then you will have to load the model as a `File`, as it cannot be
directly mapped and used from memory.

```
// Memory-map the model file
AssetFileDescriptor fileDescriptor = assets.openFd(modelFilename);
FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
FileChannel fileChannel = inputStream.getChannel();
long startOffset = fileDescriptor.getStartOffset();
long declaredLength = fileDescriptor.getDeclaredLength();
return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
```

Then, create the interpreter object using `new Interpreter()`, which takes the
model file as an argument, as shown below.

```
// Create Interpreter
c.tfLite = new Interpreter(loadModelFile(assetManager, modelFilename));
```
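
If the model cannot be memory-mapped (for example, because it is stored
compressed), the `Interpreter` class also offers a constructor that takes a
`java.io.File`. The snippet below is a minimal sketch of that alternative; the
file name is hypothetical, `context` is assumed to be an Android `Context`, and
the model is assumed to have already been copied to device storage.

```
// Minimal sketch: create an Interpreter from a model file on disk instead of a
// memory-mapped buffer. The file name below is hypothetical.
File modelFile = new File(context.getFilesDir(), "model.tflite");
Interpreter tflite = new Interpreter(modelFile);
```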

### Run inference

The output of the inference is stored in a byte array `labelProb`. We
pre-allocate the memory for the output buffer. Then, we run inference on the
interpreter object using the `run()` function, which takes the input and output
buffers as arguments.

```
// Pre-allocate output buffers.
c.labelProb = new byte[1][c.labels.size()];
// Run Inference
tfLite.run(imgData, labelProb);
```
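
Because the model is quantized, each entry of `labelProb` is an unsigned 8-bit
value stored in a signed Java `byte`. To treat an entry as a confidence in the
range [0, 1], a common pattern is to mask off the sign extension and scale, as
in this minimal sketch (the exact scaling the demo applies may differ):

```
// Minimal sketch: interpret the quantized output bytes as confidences in [0, 1].
// Java bytes are signed, so mask with 0xFF before scaling.
float[] confidences = new float[c.labels.size()];
for (int i = 0; i < confidences.length; ++i) {
  confidences[i] = (c.labelProb[0][i] & 0xFF) / 255.0f;
}
```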

### Post-process values

Finally, we find the best set of classifications by storing them in a priority
queue based on their confidence scores.

```
// Find the best classifications
PriorityQueue<Recognition> pq = ...
for (int i = 0; i < labels.size(); ++i) {
  pq.add(
      new Recognition(
          "" + i,
          labels.size() > i ? labels.get(i) : "unknown",
          (float) labelProb[0][i],
          null));
}
```
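
The construction of the priority queue is elided above. One way to build it is
with a comparator that orders recognitions by descending confidence, as in the
following sketch; the initial capacity and the `getConfidence()` accessor are
assumptions for illustration.

```
// Minimal sketch: a priority queue whose head is the highest-confidence Recognition.
// The initial capacity (3) and the getConfidence() accessor are illustrative assumptions.
PriorityQueue<Recognition> pq =
    new PriorityQueue<Recognition>(
        3,
        new Comparator<Recognition>() {
          @Override
          public int compare(Recognition lhs, Recognition rhs) {
            // Higher confidence first.
            return Float.compare(rhs.getConfidence(), lhs.getConfidence());
          }
        });
```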

We then display up to `MAX_RESULTS` classifications in the application, where
`Recognition` is a generic class defined in `Classifier.java` that contains the
following information about the classified object: its id, title, label, and its
location when the model is an object detection model.

```
// Display the best classifications
final ArrayList<Recognition> recognitions =
    new ArrayList<Recognition>();
int recognitionsSize = Math.min(pq.size(), MAX_RESULTS);
for (int i = 0; i < recognitionsSize; ++i) {
  recognitions.add(pq.poll());
}
```

### Load onto display

We render the results on the Android device screen using the following lines in
the `processImage()` function in `ClassifierActivity.java`, which uses the UI
defined in `RecognitionScoreView.java`.

```
resultsView.setResults(results);
requestRender();
```
207