Searched refs:post_training_quantize (Results 1 – 14 of 14) sorted by relevance

/external/tensorflow/tensorflow/lite/toco/
toco_cmdline_flags.cc:164  Flag("post_training_quantize", parsed_flags.post_training_quantize.bind(),  in ParseTocoFlagsFromCommandLineFlags()
toco_cmdline_flags.cc:165  parsed_flags.post_training_quantize.default_value(),  in ParseTocoFlagsFromCommandLineFlags()
toco_cmdline_flags.cc:269  READ_TOCO_FLAG(post_training_quantize, FlagRequirement::kNone);  in ReadTocoFlagsFromCommandLineFlags()
args.h:174  Arg<bool> post_training_quantize = Arg<bool>(false);  member
toco_flags.proto:176  // DEPRECATED: Please use post_training_quantize instead.
toco_flags.proto:191  optional bool post_training_quantize = 26 [default = false];  field
toco_tooling.cc:452  params.quantize_weights = toco_flags.post_training_quantize();  in Export()
/external/tensorflow/tensorflow/lite/python/
tflite_convert.py:179  if flags.post_training_quantize:
tflite_convert.py:180  converter.post_training_quantize = flags.post_training_quantize
convert.py:245  post_training_quantize=False,  argument
convert.py:333  toco.post_training_quantize = post_training_quantize
lite_test.py:498  self.assertFalse(quantized_converter.post_training_quantize)
lite_test.py:500  quantized_converter.post_training_quantize = True
lite_test.py:501  self.assertTrue(quantized_converter.post_training_quantize)
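The Python matches above show the TF 1.x converter plumbing: tflite_convert.py copies the --post_training_quantize command-line flag onto the converter object, and lite_test.py toggles the same boolean attribute directly. A minimal sketch of that attribute-based usage, assuming a TensorFlow 1.x install and a hypothetical SavedModel path not taken from these sources:

    import tensorflow as tf

    # Hypothetical path to an existing SavedModel; replace with your own.
    saved_model_dir = "/tmp/my_saved_model"

    # TF 1.x converter; post_training_quantize is the boolean attribute
    # exercised in lite_test.py above (default False).
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.post_training_quantize = True  # weights-only post-training quantization

    tflite_model = converter.convert()
    with open("/tmp/model_weight_quantized.tflite", "wb") as f:
        f.write(tflite_model)

Setting the attribute before calling convert() mirrors what tflite_convert.py:180 does when the CLI flag is passed, and it ultimately maps to quantize_weights in toco_tooling.cc:452.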
/external/tensorflow/tensorflow/lite/tools/optimize/g3doc/
quantize_weights.md:9  `--post_training_quantize` flag to your original tflite_convert invocation. For
quantize_weights.md:16  --post_training_quantize
/external/tensorflow/tensorflow/lite/testing/model_coverage/
model_coverage_lib.py:87  converter.post_training_quantize = kwargs["post_training_quantize"]
model_coverage_lib.py:292  converter, post_training_quantize=True, **kwargs)
model_coverage_lib_test.py:161  model_coverage.test_keras_model(keras_file, post_training_quantize=True)
/external/tensorflow/tensorflow/lite/experimental/examples/lstm/g3doc/
README.md:63  converter.post_training_quantize = True  # If post training quantize is desired.
README.md:97  converter.post_training_quantize = use_post_training_quantize
README.md:299  converter.post_training_quantize = use_post_training_quantize
/external/tensorflow/tensorflow/lite/g3doc/guide/
get_started.md:245  simply enable the `post_training_quantize` flag in the TensorFlow Lite
get_started.md:251  converter.post_training_quantize=True
/external/tensorflow/tensorflow/lite/g3doc/convert/
cmdline_reference.md:143  * `--post_training_quantize`. Type: boolean. Default: False. Boolean
/external/tensorflow/tensorflow/lite/g3doc/r2/convert/
python_api.md:178  * `post_training_quantize` - Deprecated in the 1.X API
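The last match records that post_training_quantize is deprecated in the 1.X API; in the newer converter API the same behaviour is requested through the optimizations attribute instead. A minimal sketch of that replacement, assuming TensorFlow 1.14+ or 2.x and a hypothetical SavedModel path:

    import tensorflow as tf

    # Hypothetical SavedModel location; replace with your own.
    saved_model_dir = "/tmp/my_saved_model"

    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    # Replacement for the deprecated post_training_quantize boolean:
    converter.optimizations = [tf.lite.Optimize.DEFAULT]

    tflite_model = converter.convert()

With no representative dataset supplied, Optimize.DEFAULT performs the same post-training weight quantization that the deprecated flag enabled.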