Searched refs:QUANTIZED_UINT8 (Results 1 – 20 of 20) sorted by relevance
/external/tensorflow/tensorflow/lite/g3doc/convert/

  cmdline_reference.md
      71:  `QUANTIZED_UINT8`.
      82:  (`--inference_type=QUANTIZED_UINT8`), no dequantization is performed by
      97:  `--inference_input_type`). Must be `{FLOAT, QUANTIZED_UINT8}`.
     106:  * If `QUANTIZED_UINT8`, then real-numbers arrays will be quantized as
     114:  added immediately after the input array. Must be `{FLOAT, QUANTIZED_UINT8}`.

  quantization.md
      32:  converter.inference_type = tf.lite.constants.QUANTIZED_UINT8

  python_api.md
     152:  `QUANTIZED_UINT8`. Run `help(tf.lite.TFLiteConverter)` in the Python
     173:  converter.inference_type = tf.lite.constants.QUANTIZED_UINT8

  cmdline_examples.md
     111:  --inference_type=QUANTIZED_UINT8 \
     136:  --inference_type=QUANTIZED_UINT8 \

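The quantization.md and python_api.md hits above both set the converter's inference type from Python. A minimal sketch of that flow with the TF 1.x converter API follows; the frozen-graph path, tensor names, and (mean, std) statistics are illustrative assumptions, not values taken from those docs.

    # Sketch only: converting a frozen TF 1.x graph to a fully quantized
    # TFLite model, mirroring quantization.md:32 / python_api.md:173 above.
    # "frozen_graph.pb", the tensor names, and the input stats are placeholders.
    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_frozen_graph(
        "frozen_graph.pb", input_arrays=["input"], output_arrays=["output"])
    converter.inference_type = tf.lite.constants.QUANTIZED_UINT8
    # Quantized inference requires (mean, std_dev) stats for each input array.
    converter.quantized_input_stats = {"input": (128.0, 127.0)}

    tflite_model = converter.convert()
    with open("model_quant.tflite", "wb") as f:
        f.write(tflite_model)
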
/external/tensorflow/tensorflow/lite/python/

  convert_test.py
      78:  inference_type=lite_constants.QUANTIZED_UINT8,
      92:  inference_type=lite_constants.QUANTIZED_UINT8)
     141:  inference_type=lite_constants.QUANTIZED_UINT8,
     186:  inference_type=lite_constants.QUANTIZED_UINT8)
     398:  _types_pb2.QUANTIZED_UINT8)

  tflite_convert.py
      60:  return lite_constants.QUANTIZED_UINT8
     140:  if converter.inference_type == lite_constants.QUANTIZED_UINT8:
     181:  if converter.inference_type == lite_constants.QUANTIZED_UINT8:

  convert.py
      65:  dtypes.uint8: _types_pb2.QUANTIZED_UINT8,
     354:  if toco.inference_input_type == _types_pb2.QUANTIZED_UINT8:
     406:  if toco_flags.inference_input_type == _types_pb2.QUANTIZED_UINT8:

  lite_constants.py
      30:  QUANTIZED_UINT8 = dtypes.uint8  (variable)

  lite_test.py
     158:  converter.inference_type = lite_constants.QUANTIZED_UINT8
     203:  converter.inference_type = lite_constants.QUANTIZED_UINT8
     430:  converter.inference_input_type = lite_constants.QUANTIZED_UINT8
     461:  converter.inference_type = lite_constants.QUANTIZED_UINT8

  lite.py
     747:  if self.inference_type == constants.QUANTIZED_UINT8:

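lite_constants.py:30 above shows that QUANTIZED_UINT8 is simply an alias for the uint8 dtype, so user code can pass either name to the converter. A quick check, assuming a TF 1.x install where tf.lite.constants is exposed (see the golden API file below):

    # Assumption: TF 1.x with the public tf.lite.constants module available.
    import tensorflow as tf

    # lite_constants.py:30 defines QUANTIZED_UINT8 = dtypes.uint8, so the two
    # names compare equal and are interchangeable as converter settings.
    assert tf.lite.constants.QUANTIZED_UINT8 == tf.uint8
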
/external/tensorflow/tensorflow/tools/api/golden/v1/

  tensorflow.lite.constants.pbtxt
      20:  name: "QUANTIZED_UINT8"

/external/tensorflow/tensorflow/lite/python/testdata/

  BUILD
      27:  "--inference_type=QUANTIZED_UINT8",

/external/tensorflow/tensorflow/lite/toco/

  types.proto
      27:  QUANTIZED_UINT8 = 2;  (enumerator)

  model_flags.proto
      64:  // When this data_type is quantized (e.g. QUANTIZED_UINT8), the
      76:  // between FLOAT and quantized types (e.g. QUANTIZED_UINT8).

  toco_cmdline_flags.cc
     321:  if (toco_flags->inference_type() == IODataType::QUANTIZED_UINT8) {  [in ReadTocoFlagsFromCommandLineFlags()]

  toco_tooling.cc
     246:  (inference_type == QUANTIZED_UINT8 || inference_type == QUANTIZED_INT16);  [in TransformWithStatus()]

  toco_flags.proto
      83:  // - If QUANTIZED_UINT8, then real-numbers arrays will be quantized

  tooling_util.cc
    2258:  case QUANTIZED_UINT8:  [in ConvertIODataTypeToArrayDataType()]

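convert.py:65 (above) and types.proto:27 show how the Python-side dtype is mapped onto TOCO's IODataType enum before conversion. A hedged sketch of such a mapping, using the generated proto module that convert.py itself imports; the dictionary below is illustrative, not the real table from convert.py:

    # Illustrative only; the real mapping lives in
    # tensorflow/lite/python/convert.py (line 65 above).
    import tensorflow as tf
    from tensorflow.lite.toco import types_pb2

    DTYPE_TO_IO_DATA_TYPE = {
        tf.float32: types_pb2.FLOAT,
        tf.uint8: types_pb2.QUANTIZED_UINT8,  # enum value 2 per types.proto:27
    }

    print(DTYPE_TO_IO_DATA_TYPE[tf.uint8])  # prints 2
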
/external/tensorflow/tensorflow/contrib/quantize/

  README.md
     101:  --inference_type=QUANTIZED_UINT8 \

/external/tensorflow/tensorflow/lite/experimental/micro/examples/micro_speech/

  README.md
      99:  --inference_type=QUANTIZED_UINT8 --mean_values=0 --std_values=2 \

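The micro_speech README converts with --mean_values=0 --std_values=2, which means callers must quantize float inputs themselves before feeding the resulting uint8 model; the converter's convention is real_value = (quantized_value - mean) / std. A rough sketch with the TFLite Python interpreter, where the model path and random input are placeholders:

    # Placeholder model path and data; only the mean/std handling mirrors the
    # --mean_values=0 --std_values=2 flags shown above.
    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="model_quant.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]

    real = np.random.rand(*inp["shape"]).astype(np.float32)  # stand-in input
    mean, std = 0.0, 2.0
    quantized = np.clip(real * std + mean, 0, 255).astype(np.uint8)

    interpreter.set_tensor(inp["index"], quantized)
    interpreter.invoke()
    out = interpreter.get_output_details()[0]
    result = interpreter.get_tensor(out["index"])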