# TFLite Delegate Utilities for Tooling

## TFLite Delegate Registrar

[A TFLite delegate registrar](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/tools/delegates/delegate_provider.h)
is provided here. The registrar keeps a list of TFLite delegate providers, each
of which defines a list of parameters that can be initialized from command-line
arguments and creates a TFLite delegate instance based on those parameters.
This delegate registrar has been used in the TFLite evaluation tools and the
benchmark model tool.

A particular TFLite delegate provider can be used by linking the corresponding
library, e.g. adding it to the `deps` of a BUILD rule. Note that each delegate
provider library has been configured with `alwayslink=1` in its BUILD rule so
that it will be linked into any binary that directly or indirectly depends on
it.
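
For example, the rough sketch below assumes the benchmark model tool (which
links these delegate providers) is built for Android; the model path is
hypothetical. Once the providers are linked in, the parameters documented in
the following sections can be passed to the binary as ordinary command-line
flags.

```sh
# Build the benchmark tool, which depends on the delegate provider libraries,
# push it to a device, and run it with a flag defined by one of the providers.
bazel build -c opt --config=android_arm64 \
  //tensorflow/lite/tools/benchmark:benchmark_model
adb push bazel-bin/tensorflow/lite/tools/benchmark/benchmark_model /data/local/tmp
adb shell /data/local/tmp/benchmark_model \
  --graph=/data/local/tmp/model.tflite \
  --use_xnnpack=true
```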

The following lists all implemented TFLite delegate providers and the
parameters that each one supports for creating a particular TFLite delegate.

### Common parameters
*   `num_threads`: `int` (default=1) \
    The number of threads to use for running the inference on CPU.
*   `max_delegated_partitions`: `int` (default=0, i.e. no limit) \
    The maximum number of partitions that will be delegated. \
    Currently supported by the GPU, Hexagon, CoreML and NNAPI delegates.
*   `min_nodes_per_partition`: `int` (default=delegate's own choice) \
    The minimum number of TFLite graph nodes that a partition must contain in
    order to be delegated. A value of 0 or a negative value means to use the
    default choice of each delegate. \
    This option is currently supported by the Hexagon and CoreML delegates.
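
As an illustration of these common parameters, a minimal sketch assuming the
benchmark model tool and a hypothetical model path:

```sh
# Delegate to NNAPI, limit delegation to at most 4 partitions, and use
# 2 CPU threads for any ops that remain on the CPU.
adb shell /data/local/tmp/benchmark_model \
  --graph=/data/local/tmp/model.tflite \
  --use_nnapi=true \
  --num_threads=2 \
  --max_delegated_partitions=4
```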

### GPU delegate provider

The GPU delegate is supported only on Android and iOS devices.

#### Common options
*   `use_gpu`: `bool` (default=false) \
    Whether to use the
    [GPU accelerator delegate](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/delegates/gpu).
*   `gpu_precision_loss_allowed`: `bool` (default=true) \
    Whether to allow the GPU delegate to carry out computation with some
    precision loss (i.e. processing in FP16), which generally improves
    performance.
*   `gpu_experimental_enable_quant`: `bool` (default=true) \
    Whether to allow the GPU delegate to run an 8-bit quantized model.

#### Android options
*   `gpu_backend`: `string` (default="") \
    Force the GPU delegate to use a particular backend for execution, and fail
    if unsuccessful. Should be one of: cl, gl. By default, the GPU delegate will
    try OpenCL first and then OpenGL if the former fails.

#### iOS options
*   `gpu_wait_type`: `string` (default="") \
    Which GPU wait_type option to use. Should be one of the following: passive,
    active, do_not_wait, aggressive. When left blank, passive mode is used by
    default.
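
A possible GPU-delegate invocation on Android, again assuming the benchmark
model tool and a hypothetical model path:

```sh
# Force the OpenCL backend and allow reduced-precision (FP16) computation.
adb shell /data/local/tmp/benchmark_model \
  --graph=/data/local/tmp/model.tflite \
  --use_gpu=true \
  --gpu_backend=cl \
  --gpu_precision_loss_allowed=true
```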

### NNAPI delegate provider
*   `use_nnapi`: `bool` (default=false) \
    Whether to use
    [Android NNAPI](https://developer.android.com/ndk/guides/neuralnetworks/).
    This API is available on recent Android devices. On Android Q+, this will
    also print the names of the NNAPI accelerators that are accessible through
    the `nnapi_accelerator_name` flag.
*   `nnapi_accelerator_name`: `string` (default="") \
    The name of the NNAPI accelerator to use (requires Android Q+). If left
    blank, NNAPI will automatically select which of the available accelerators
    to use.
*   `nnapi_execution_preference`: `string` (default="") \
    Which
    [NNAPI execution preference](https://developer.android.com/ndk/reference/group/neural-networks.html#group___neural_networks_1gga034380829226e2d980b2a7e63c992f18af727c25f1e2d8dcc693c477aef4ea5f5)
    to use when executing with NNAPI. Should be one of the following:
    fast_single_answer, sustained_speed, low_power, undefined.
*   `nnapi_execution_priority`: `string` (default="") \
    The relative priority for executions of the model in NNAPI. Should be one
    of the following: default, low, medium, high. This option requires
    Android 11+.
*   `disable_nnapi_cpu`: `bool` (default=true) \
    Excludes the
    [NNAPI CPU reference implementation](https://developer.android.com/ndk/guides/neuralnetworks#device-assignment)
    from the possible devices to be used by NNAPI to execute the model. This
    option is ignored if `nnapi_accelerator_name` is specified.
*   `nnapi_allow_fp16`: `bool` (default=false) \
    Whether to allow FP32 computation to be run in FP16.
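
A hedged NNAPI sketch, again assuming the benchmark model tool; the accelerator
name is device-specific (available names are printed when running with
`use_nnapi` on Android Q+) and the model path is hypothetical:

```sh
# Pin NNAPI execution to a named accelerator with a sustained-speed preference.
adb shell /data/local/tmp/benchmark_model \
  --graph=/data/local/tmp/model.tflite \
  --use_nnapi=true \
  --nnapi_accelerator_name=example-accelerator \
  --nnapi_execution_preference=sustained_speed
```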

### Hexagon delegate provider
*   `use_hexagon`: `bool` (default=false) \
    Whether to use the Hexagon delegate. Not all devices support the Hexagon
    delegate; refer to the
    [TensorFlow Lite documentation](https://www.tensorflow.org/lite/performance/hexagon_delegate)
    for more information about which devices/chipsets are supported and how to
    obtain the required libraries. To use the Hexagon delegate, also build the
    `hexagon_nn:libhexagon_interface.so` target and copy the library to the
    device. All libraries should be copied to `/data/local/tmp` on the device.
*   `hexagon_profiling`: `bool` (default=false) \
    Whether to profile ops running on Hexagon.
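
A rough sketch of the device setup and invocation described above, assuming the
benchmark model tool; the library and model paths are illustrative:

```sh
# Copy the Hexagon interface library (plus any other Hexagon libraries obtained
# as described in the linked documentation) to the device, then run with the
# Hexagon delegate.
adb push libhexagon_interface.so /data/local/tmp
adb shell /data/local/tmp/benchmark_model \
  --graph=/data/local/tmp/model.tflite \
  --use_hexagon=true
```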

### XNNPACK delegate provider
*   `use_xnnpack`: `bool` (default=false) \
    Whether to use the XNNPACK delegate.

### CoreML delegate provider
*   `use_coreml`: `bool` (default=false) \
    Whether to use the [Core ML delegate](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/delegates/coreml).
    This option is only available on iOS.
*   `coreml_version`: `int` (default=0) \
    Target Core ML version for model conversion. The default value of 0 means
    to use the newest version available on the device.

### External delegate provider
*   `external_delegate_path`: `string` (default="") \
    Path to the external delegate library to use.
*   `external_delegate_options`: `string` (default="") \
    A list of options to be passed to the external delegate library. Options
    should be in the format `option1:value1;option2:value2;optionN:valueN`.
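
As a final hedged sketch, loading an external delegate from a shared library
and passing it options in the format above; the library path and option names
are purely illustrative:

```sh
# Load a delegate implemented in a separate shared library and forward
# key:value options to it.
adb shell /data/local/tmp/benchmark_model \
  --graph=/data/local/tmp/model.tflite \
  --external_delegate_path=/data/local/tmp/libsample_external_delegate.so \
  --external_delegate_options="option1:value1;option2:value2"
```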