# Process input and output data with the TensorFlow Lite Support Library

Note: TensorFlow Lite Support Library currently only supports Android.

Mobile application developers typically interact with typed objects such as
bitmaps or primitives such as integers. However, the TensorFlow Lite
interpreter that runs the on-device machine learning model uses tensors in the
form of `ByteBuffer`, which can be difficult to debug and manipulate. The
[TensorFlow Lite Android Support Library](https://github.com/tensorflow/tflite-support/tree/master/tensorflow_lite_support/java)
is designed to help process the input and output of TensorFlow Lite models, and
make the TensorFlow Lite interpreter easier to use.

## Getting Started

### Import Gradle dependency and other settings

Copy the `.tflite` model file to the assets directory of the Android module
where the model will be run. Specify that the file should not be compressed,
and add the TensorFlow Lite library to the module’s `build.gradle` file:
```groovy
android {
    // Other settings

    // Specify that the tflite file should not be compressed for the app apk
    aaptOptions {
        noCompress "tflite"
    }
}

dependencies {
    // Other dependencies

    // Import tflite dependencies
    implementation 'org.tensorflow:tensorflow-lite:0.0.0-nightly-SNAPSHOT'
    // The GPU delegate library is optional. Depend on it as needed.
    implementation 'org.tensorflow:tensorflow-lite-gpu:0.0.0-nightly-SNAPSHOT'
    implementation 'org.tensorflow:tensorflow-lite-support:0.0.0-nightly-SNAPSHOT'
}
```

Explore the
[TensorFlow Lite Support Library AAR hosted at JCenter](https://bintray.com/google/tensorflow/tensorflow-lite-support)
for different versions of the Support Library.

### Basic image manipulation and conversion

The TensorFlow Lite Support Library has a suite of basic image manipulation
methods such as crop and resize. To use it, create an `ImageProcessor` and add
the required operations. To convert the image into the tensor format required
by the TensorFlow Lite interpreter, create a `TensorImage` to be used as
input:

```java
import org.tensorflow.lite.DataType;
import org.tensorflow.lite.support.image.ImageProcessor;
import org.tensorflow.lite.support.image.TensorImage;
import org.tensorflow.lite.support.image.ops.ResizeOp;

// Initialization code
// Create an ImageProcessor with all ops required. For more ops, please
// refer to the ImageProcessor Architecture section in this README.
ImageProcessor imageProcessor =
    new ImageProcessor.Builder()
        .add(new ResizeOp(224, 224, ResizeOp.ResizeMethod.BILINEAR))
        .build();

// Create a TensorImage object. This creates the tensor of the corresponding
// tensor type (uint8 in this case) that the TensorFlow Lite interpreter needs.
TensorImage tImage = new TensorImage(DataType.UINT8);

// Analysis code for every frame
// Preprocess the image
tImage.load(bitmap);
tImage = imageProcessor.process(tImage);
```

The `DataType` of a tensor, as well as other model information, can be read
through the
[metadata extractor library](../convert/metadata.md#read-the-metadata-from-models).

### Create output objects and run the model

Before running the model, we need to create the container objects that will
store the result:

```java
import org.tensorflow.lite.DataType;
import org.tensorflow.lite.support.tensorbuffer.TensorBuffer;

// Create a container for the result and specify that this is a quantized model.
// Hence, the 'DataType' is defined as UINT8 (8-bit unsigned integer)
TensorBuffer probabilityBuffer =
    TensorBuffer.createFixedSize(new int[]{1, 1001}, DataType.UINT8);
```

Loading the model and running inference:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;

import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.support.common.FileUtil;

// Initialise the interpreter
Interpreter tflite = null;
try {
    MappedByteBuffer tfliteModel
        = FileUtil.loadMappedFile(activity,
            "mobilenet_v1_1.0_224_quant.tflite");
    tflite = new Interpreter(tfliteModel);
} catch (IOException e) {
    Log.e("tfliteSupport", "Error reading model", e);
}

// Running inference
if (null != tflite) {
    tflite.run(tImage.getBuffer(), probabilityBuffer.getBuffer());
}
```

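If you added the optional GPU delegate dependency above, inference can run on
the GPU by attaching a delegate to the interpreter options. This is a minimal
sketch using the `tensorflow-lite-gpu` artifact, not required for the rest of
this guide:

```java
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.gpu.GpuDelegate;

// Create the GPU delegate and attach it to the interpreter options.
GpuDelegate gpuDelegate = new GpuDelegate();
Interpreter.Options options = new Interpreter.Options().addDelegate(gpuDelegate);
Interpreter tflite = new Interpreter(tfliteModel, options);

// ... run inference as shown above ...

// Release delegate resources when no longer needed.
gpuDelegate.close();
```
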
### Accessing the result

Developers can access the output directly through
`probabilityBuffer.getFloatArray()`. If the model produces a quantized output,
remember to convert the result. For the MobileNet quantized model, the
developer needs to divide each output value by 255 to obtain the probability,
ranging from 0 (least likely) to 1 (most likely), for each category.

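As an illustration, here is a minimal sketch of that manual conversion,
reusing `probabilityBuffer` from the sections above:

```java
// Raw quantized values are exposed as floats in the range [0, 255].
float[] rawValues = probabilityBuffer.getFloatArray();

// Divide by 255 to obtain a probability in [0, 1] for each category.
float[] probabilities = new float[rawValues.length];
for (int i = 0; i < rawValues.length; i++) {
    probabilities[i] = rawValues[i] / 255.0f;
}
```
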
### Optional: Mapping results to labels

Developers can optionally map the results to labels. First, copy the text file
containing labels into the module’s assets directory. Next, load the label
file using the following code:

```java
import java.io.IOException;
import java.util.List;

import org.tensorflow.lite.support.common.FileUtil;

final String ASSOCIATED_AXIS_LABELS = "labels.txt";
List<String> associatedAxisLabels = null;

try {
    associatedAxisLabels = FileUtil.loadLabels(this, ASSOCIATED_AXIS_LABELS);
} catch (IOException e) {
    Log.e("tfliteSupport", "Error reading label file", e);
}
```

The following snippet demonstrates how to associate the probabilities with
category labels:

```java
import java.util.Map;

import org.tensorflow.lite.support.common.TensorProcessor;
import org.tensorflow.lite.support.common.ops.NormalizeOp;
import org.tensorflow.lite.support.label.TensorLabel;

// Post-processor which dequantizes the result
TensorProcessor probabilityProcessor =
    new TensorProcessor.Builder().add(new NormalizeOp(0, 255)).build();

if (null != associatedAxisLabels) {
    // Map of labels and their corresponding probability
    TensorLabel labels = new TensorLabel(associatedAxisLabels,
        probabilityProcessor.process(probabilityBuffer));

    // Create a map to access the result based on label
    Map<String, Float> floatMap = labels.getMapWithFloatValue();
}
```

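From there, a common next step is to pick the highest-scoring label. A short
sketch using only `java.util`, assuming `floatMap` from the snippet above:

```java
// Find the label with the highest probability.
String topLabel = null;
float topScore = -1f;
for (Map.Entry<String, Float> entry : floatMap.entrySet()) {
    if (entry.getValue() > topScore) {
        topScore = entry.getValue();
        topLabel = entry.getKey();
    }
}
Log.d("tfliteSupport", "Top label: " + topLabel + " (" + topScore + ")");
```
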
## Current use-case coverage

The current version of the TensorFlow Lite Support Library covers:

*   common data types (float, uint8, images and arrays of these objects) as
    inputs and outputs of tflite models.
*   basic image operations (crop image, resize and rotate).
*   normalization and quantization.
*   file utils.

Future versions will improve support for text-related applications.

## ImageProcessor Architecture

The design of the `ImageProcessor` allows the image manipulation operations to
be defined up front and optimised during the build process. The
`ImageProcessor` currently supports three basic preprocessing operations, as
described in the three comments in the code snippet below:

```java
int width = bitmap.getWidth();
int height = bitmap.getHeight();

int size = height > width ? width : height;

ImageProcessor imageProcessor =
    new ImageProcessor.Builder()
        // Center crop the image to the largest square possible
        .add(new ResizeWithCropOrPadOp(size, size))
        // Resize using Bilinear or Nearest neighbour
        .add(new ResizeOp(224, 224, ResizeOp.ResizeMethod.BILINEAR))
        // Rotation counter-clockwise in 90 degree increments
        .add(new Rot90Op(rotateDegrees / 90))
        .add(new NormalizeOp(127.5f, 127.5f))
        .add(new QuantizeOp(128.0f, 1/128.0f))
        .build();
```

See more details
[here](../convert/metadata.md#normalization-and-quantization-parameters) about
normalization and quantization. For example, the `NormalizeOp(127.5f, 127.5f)`
step above normalizes each pixel value x to (x - 127.5) / 127.5, mapping the
range [0, 255] to [-1, 1].

The eventual goal of the support library is to support all
[`tf.image`](https://www.tensorflow.org/api_docs/python/tf/image)
transformations. This means the transformations will be the same as in
TensorFlow, and the implementation will be independent of the operating
system.

Developers are also welcome to create custom processors. It is important in
these cases to be aligned with the training process; that is, the same
preprocessing should apply to both training and inference to increase
reproducibility.

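As an illustration, here is a minimal sketch of a custom post-processing op.
The `ClampOp` name and behaviour are hypothetical, but the `TensorOperator`
interface it implements is the library's extension point, so the op composes
with built-in ops in a `TensorProcessor`:

```java
import org.tensorflow.lite.DataType;
import org.tensorflow.lite.support.common.TensorOperator;
import org.tensorflow.lite.support.common.TensorProcessor;
import org.tensorflow.lite.support.tensorbuffer.TensorBuffer;

// Hypothetical custom op: clamp every element into [0, 1].
class ClampOp implements TensorOperator {
    @Override
    public TensorBuffer apply(TensorBuffer input) {
        float[] values = input.getFloatArray();
        for (int i = 0; i < values.length; i++) {
            values[i] = Math.max(0f, Math.min(1f, values[i]));
        }
        TensorBuffer output =
            TensorBuffer.createFixedSize(input.getShape(), DataType.FLOAT32);
        output.loadArray(values, input.getShape());
        return output;
    }
}

// The custom op chains with built-in ops like any other.
TensorProcessor processor =
    new TensorProcessor.Builder().add(new ClampOp()).build();
```
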
## Quantization

When initialising input or output objects such as `TensorImage` or
`TensorBuffer`, you need to specify their types to be `DataType.UINT8` or
`DataType.FLOAT32`:

```java
TensorImage tImage = new TensorImage(DataType.UINT8);
TensorBuffer probabilityBuffer =
    TensorBuffer.createFixedSize(new int[]{1, 1001}, DataType.UINT8);
```

The `TensorProcessor` can be used to quantize input tensors or dequantize
output tensors. For example, when processing a quantized output
`TensorBuffer`, the developer can use `DequantizeOp` to dequantize the result
to a floating point probability between 0 and 1:

```java
import org.tensorflow.lite.support.common.TensorProcessor;
import org.tensorflow.lite.support.common.ops.DequantizeOp;

// Post-processor which dequantizes the result
TensorProcessor probabilityProcessor =
    new TensorProcessor.Builder().add(new DequantizeOp(0, 1/255.0f)).build();
TensorBuffer dequantizedBuffer = probabilityProcessor.process(probabilityBuffer);
```

The quantization parameters of a tensor can be read through the
[metadata extractor library](../convert/metadata.md#read-the-metadata-from-models).
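
For instance, a rough sketch of reading those parameters for the first output
tensor with the metadata extractor; this assumes the model file actually
contains metadata and that the metadata extractor classes are on the
classpath:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;

import org.tensorflow.lite.support.common.FileUtil;
import org.tensorflow.lite.support.metadata.MetadataExtractor;

try {
    // Load the (metadata-populated) model buffer as before.
    MappedByteBuffer tfliteModel =
        FileUtil.loadMappedFile(activity, "mobilenet_v1_1.0_224_quant.tflite");
    MetadataExtractor metadataExtractor = new MetadataExtractor(tfliteModel);

    // Quantization parameters (scale and zero point) of the first output tensor.
    MetadataExtractor.QuantizationParams outputQuantParams =
        metadataExtractor.getOutputTensorQuantizationParams(0);
    float scale = outputQuantParams.getScale();
    int zeroPoint = outputQuantParams.getZeroPoint();
} catch (IOException e) {
    Log.e("tfliteSupport", "Error reading model metadata", e);
}
```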