# Get started with microcontrollers

This document explains how to train a model and run inference using a
microcontroller.

## The Hello World example

The
[Hello World](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/micro/examples/hello_world)
example is designed to demonstrate the absolute basics of using TensorFlow Lite
for Microcontrollers. We train and run a model that replicates a sine function,
i.e., it takes a single number as its input, and outputs the number's
[sine](https://en.wikipedia.org/wiki/Sine) value. When deployed to the
microcontroller, its predictions are used to either blink LEDs or control an
animation.

The end-to-end workflow involves the following steps:

1.  [Train a model](#train-a-model) (in Python): A Jupyter notebook to train,
    convert, and optimize a model for on-device use.
2.  [Run inference](#run-inference) (in C++ 11): An end-to-end unit test that
    runs inference on the model using the [C++ library](library.md).

## Get a supported device

The example application we'll be using has been tested on the following devices:

*   [Arduino Nano 33 BLE Sense](https://store.arduino.cc/usa/nano-33-ble-sense-with-headers)
    (using Arduino IDE)
*   [SparkFun Edge](https://www.sparkfun.com/products/15170) (building directly
    from source)
*   [STM32F746 Discovery kit](https://www.st.com/en/evaluation-tools/32f746gdiscovery.html)
    (using Mbed)
*   [Adafruit EdgeBadge](https://www.adafruit.com/product/4400) (using Arduino
    IDE)
*   [Adafruit TensorFlow Lite for Microcontrollers Kit](https://www.adafruit.com/product/4317)
    (using Arduino IDE)
*   [Adafruit Circuit Playground Bluefruit](https://learn.adafruit.com/tensorflow-lite-for-circuit-playground-bluefruit-quickstart?view=all)
    (using Arduino IDE)
*   [Espressif ESP32-DevKitC](https://www.espressif.com/en/products/hardware/esp32-devkitc/overview)
    (using ESP IDF)
*   [Espressif ESP-EYE](https://www.espressif.com/en/products/hardware/esp-eye/overview)
    (using ESP IDF)

Learn more about supported platforms in
[TensorFlow Lite for Microcontrollers](index.md).

## Train a model

Note: You can skip this section and use the trained model included in the
example code.

Use Google Colaboratory to
[train your own model](https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/hello_world/train/train_hello_world_model.ipynb).
For more details, refer to the `README.md`:

<a class="button button-primary" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/micro/examples/hello_world/train/README.md">Hello
World Training README.md</a>

## Run inference

To run the model on your device, we will walk through the instructions in the
`README.md`:

<a class="button button-primary" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/micro/examples/hello_world/README.md">Hello
World README.md</a>

The following sections walk through the example's
[`hello_world_test.cc`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/hello_world/hello_world_test.cc)
unit test, which demonstrates how to run inference using TensorFlow Lite for
Microcontrollers. It loads the model and runs inference several times.

### 1. Include the library headers

To use the TensorFlow Lite for Microcontrollers library, we must include the
following header files:

```C++
#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/version.h"
```

-   [`all_ops_resolver.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/all_ops_resolver.h)
    provides the operations used by the interpreter to run the model.
-   [`micro_error_reporter.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/micro_error_reporter.h)
    outputs debug information.
-   [`micro_interpreter.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/micro_interpreter.h)
    contains code to load and run models.
-   [`schema_generated.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/schema/schema_generated.h)
    contains the schema for the TensorFlow Lite
    [`FlatBuffer`](https://google.github.io/flatbuffers/) model file format.
-   [`version.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/version.h)
    provides versioning information for the TensorFlow Lite schema.

### 2. Include the model header

The TensorFlow Lite for Microcontrollers interpreter expects the model to be
provided as a C++ array. The model is defined in `model.h` and `model.cc` files.
The header is included with the following line:

```C++
#include "tensorflow/lite/micro/examples/hello_world/model.h"
```
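
The header only declares the array and its length; the data itself lives in
`model.cc`. As a rough sketch (the exact declarations in the example may differ
slightly), it looks like this:

```C++
// model.h: declarations for the model data generated from the trained
// TensorFlow Lite flatbuffer. The array contents are defined in model.cc.
extern const unsigned char g_model[];  // the serialized model
extern const int g_model_len;          // its size in bytes
```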

### 3. Include the unit test framework header

In order to create a unit test, we include the TensorFlow Lite for
Microcontrollers unit test framework with the following line:

```C++
#include "tensorflow/lite/micro/testing/micro_test.h"
```

The test is defined using the following macros:

```C++
TF_LITE_MICRO_TESTS_BEGIN

TF_LITE_MICRO_TEST(LoadModelAndPerformInference) {
  . // add code here
  .
}

TF_LITE_MICRO_TESTS_END
```

We now discuss the code included in the macro above.

### 4. Set up logging

To set up logging, a `tflite::ErrorReporter` pointer is created using a pointer
to a `tflite::MicroErrorReporter` instance:

```C++
tflite::MicroErrorReporter micro_error_reporter;
tflite::ErrorReporter* error_reporter = &micro_error_reporter;
```

This variable will be passed into the interpreter, which allows it to write
logs. Since microcontrollers often have a variety of mechanisms for logging, the
implementation of `tflite::MicroErrorReporter` is designed to be customized for
your particular device.
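
The error reporter can also be used directly to print debug messages, using the
same `TF_LITE_REPORT_ERROR` macro that appears later in this test. For example:

```C++
// Log a formatted message through the error reporter; on most devices this
// ends up on the debug serial output.
TF_LITE_REPORT_ERROR(error_reporter, "Starting inference, input = %d", 0);
```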

### 5. Load a model

In the following code, the model is instantiated using data from a `char` array,
`g_model`, which is declared in `model.h`. We then check the model to ensure its
schema version is compatible with the version we are using:

```C++
const tflite::Model* model = ::tflite::GetModel(g_model);
if (model->version() != TFLITE_SCHEMA_VERSION) {
  TF_LITE_REPORT_ERROR(error_reporter,
      "Model provided is schema version %d not equal "
      "to supported version %d.\n",
      model->version(), TFLITE_SCHEMA_VERSION);
}
```

### 6. Instantiate operations resolver

An
[`AllOpsResolver`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/all_ops_resolver.h)
instance is declared. This will be used by the interpreter to access the
operations that are used by the model:

```C++
tflite::AllOpsResolver resolver;
```

The `AllOpsResolver` loads all of the operations available in TensorFlow Lite
for Microcontrollers, which uses a lot of memory. Since a given model will only
use a subset of these operations, it's recommended that real world applications
load only the operations that are needed.

This is done using a different class, `MicroMutableOpResolver`. You can see how
to use it in the *Micro speech* example's
[`micro_speech_test.cc`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/micro_speech/micro_speech_test.cc).
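
As a rough illustration (the exact operations to register depend on your model,
and the API details may vary between library versions), loading only the
operations a model needs might look like this:

```C++
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"

// Register only the operations the model actually uses. The template
// parameter is the maximum number of operations that can be added; the
// Hello World model is built from fully connected layers only.
tflite::MicroMutableOpResolver<1> resolver;
resolver.AddFullyConnected();
```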

### 7. Allocate memory

We need to preallocate a certain amount of memory for input, output, and
intermediate arrays. This is provided as a `uint8_t` array of size
`tensor_arena_size`:

```C++
const int tensor_arena_size = 2 * 1024;
uint8_t tensor_arena[tensor_arena_size];
```

The size required will depend on the model you are using, and may need to be
determined by experimentation.

### 8. Instantiate interpreter

We create a `tflite::MicroInterpreter` instance, passing in the variables
created earlier:

```C++
tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                     tensor_arena_size, error_reporter);
```

### 9. Allocate tensors

We tell the interpreter to allocate memory from the `tensor_arena` for the
model's tensors:

```C++
interpreter.AllocateTensors();
```
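
`AllocateTensors()` returns a `TfLiteStatus`, so you can check whether the arena
was large enough for the model. A minimal sketch (the error message here is
illustrative):

```C++
// If allocation fails, the tensor arena is probably too small for this model;
// increase tensor_arena_size and try again.
TfLiteStatus allocate_status = interpreter.AllocateTensors();
if (allocate_status != kTfLiteOk) {
  TF_LITE_REPORT_ERROR(error_reporter, "AllocateTensors() failed\n");
}
```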

### 10. Validate input shape

The `MicroInterpreter` instance can provide us with a pointer to the model's
input tensor by calling `.input(0)`, where `0` represents the first (and only)
input tensor:

```C++
// Obtain a pointer to the model's input tensor
TfLiteTensor* input = interpreter.input(0);
```

We then inspect this tensor to confirm that its shape and type are what we are
expecting:

```C++
// Make sure the input has the properties we expect
TF_LITE_MICRO_EXPECT_NE(nullptr, input);
// The property "dims" tells us the tensor's shape. It has one element for
// each dimension. Our input is a 2D tensor containing 1 element, so "dims"
// should have size 2.
TF_LITE_MICRO_EXPECT_EQ(2, input->dims->size);
// The value of each element gives the length of the corresponding tensor.
// We should expect two single element tensors (one is contained within the
// other).
TF_LITE_MICRO_EXPECT_EQ(1, input->dims->data[0]);
TF_LITE_MICRO_EXPECT_EQ(1, input->dims->data[1]);
// The input is a 32 bit floating point value
TF_LITE_MICRO_EXPECT_EQ(kTfLiteFloat32, input->type);
```

The enum value `kTfLiteFloat32` is a reference to one of the TensorFlow Lite
data types, and is defined in
[`common.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/c/common.h).

### 11. Provide an input value

To provide an input to the model, we set the contents of the input tensor, as
follows:

```C++
input->data.f[0] = 0.;
```

In this case, we input a floating point value representing `0`.

### 12. Run the model

To run the model, we can call `Invoke()` on our `tflite::MicroInterpreter`
instance:

```C++
TfLiteStatus invoke_status = interpreter.Invoke();
if (invoke_status != kTfLiteOk) {
  TF_LITE_REPORT_ERROR(error_reporter, "Invoke failed\n");
}
```

We can check the return value, a `TfLiteStatus`, to determine if the run was
successful. The possible values of `TfLiteStatus`, defined in
[`common.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/c/common.h),
are `kTfLiteOk` and `kTfLiteError`.

The following code asserts that the value is `kTfLiteOk`, meaning inference was
successfully run:

```C++
TF_LITE_MICRO_EXPECT_EQ(kTfLiteOk, invoke_status);
```

### 13. Obtain the output

The model's output tensor can be obtained by calling `output(0)` on the
`tflite::MicroInterpreter`, where `0` represents the first (and only) output
tensor.

In the example, the model's output is a single floating point value contained
within a 2D tensor:

```C++
TfLiteTensor* output = interpreter.output(0);
TF_LITE_MICRO_EXPECT_EQ(2, output->dims->size);
TF_LITE_MICRO_EXPECT_EQ(1, output->dims->data[0]);
TF_LITE_MICRO_EXPECT_EQ(1, output->dims->data[1]);
TF_LITE_MICRO_EXPECT_EQ(kTfLiteFloat32, output->type);
```

We can read the value directly from the output tensor and assert that it is what
we expect:

```C++
// Obtain the output value from the tensor
float value = output->data.f[0];
// Check that the output value is within 0.05 of the expected value
TF_LITE_MICRO_EXPECT_NEAR(0., value, 0.05);
```

### 14. Run inference again

The remainder of the code runs inference several more times. In each instance,
we assign a value to the input tensor, invoke the interpreter, and read the
result from the output tensor:

```C++
input->data.f[0] = 1.;
interpreter.Invoke();
value = output->data.f[0];
TF_LITE_MICRO_EXPECT_NEAR(0.841, value, 0.05);

input->data.f[0] = 3.;
interpreter.Invoke();
value = output->data.f[0];
TF_LITE_MICRO_EXPECT_NEAR(0.141, value, 0.05);

input->data.f[0] = 5.;
interpreter.Invoke();
value = output->data.f[0];
TF_LITE_MICRO_EXPECT_NEAR(-0.959, value, 0.05);
```

### 15. Read the application code

Once you have walked through this unit test, you should be able to understand
the example's application code, located in
[`main_functions.cc`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/hello_world/main_functions.cc).
It follows a similar process, but generates an input value based on how many
inferences have been run, and calls a device-specific function that displays the
model's output to the user.
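
As a rough sketch of that loop (names such as `kInferencesPerCycle`, `kXrange`,
and `HandleOutput` are illustrative here; see `main_functions.cc` and the
example's constants and output handler for the real definitions):

```C++
// Assumes the interpreter, input/output tensor pointers, error_reporter, and
// an inference_count counter were set up as in the steps above.
void loop() {
  // Map the current inference count to an x position in [0, kXrange).
  float position = static_cast<float>(inference_count) /
                   static_cast<float>(kInferencesPerCycle);
  float x_val = position * kXrange;

  // Run inference on x_val and read back the predicted sine value.
  input->data.f[0] = x_val;
  interpreter->Invoke();
  float y_val = output->data.f[0];

  // Pass the result to a device-specific handler (e.g. blink an LED or draw
  // an animation frame).
  HandleOutput(error_reporter, x_val, y_val);

  // Advance the counter, wrapping around after one full cycle.
  inference_count += 1;
  if (inference_count >= kInferencesPerCycle) inference_count = 0;
}
```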