/external/desugar/test/java/com/google/devtools/build/android/desugar/
  ByteCodeTypePrinter.java:
     88: BytecodeTypeInference inference = new BytecodeTypeInference(access, internalName, name, desc);   [local in visitMethod()]
     89: mv = new MethodIrTypeDumper(mv, inference, printWriter);   [in visitMethod()]
     90: inference.setDelegateMethodVisitor(mv);   [in visitMethod()]
     92: return inference;   [in visitMethod()]
    109: private final BytecodeTypeInference inference;   [field in ByteCodeTypePrinter.MethodIrTypeDumper]
    114: MethodVisitor visitor, BytecodeTypeInference inference, PrintWriter printWriter) {   [argument of MethodIrTypeDumper()]
    116: this.inference = inference;   [in MethodIrTypeDumper()]
    121: printer.print(" |__STACK: " + inference.getOperandStackAsString() + "\n");   [in printTypeOfOperandStack()]
    122: printer.print(" |__LOCAL: " + inference.getLocalsAsString() + "\n");   [in printTypeOfOperandStack()]
/external/tensorflow/tensorflow/lite/g3doc/guide/
  inference.md:
      1: # TensorFlow Lite inference
      7: TensorFlow Lite inference is the process of executing a TensorFlow Lite
     18: TensorFlow Lite inference on device typically follows the following steps.
     38: The user retrieves results from model inference and interprets the tensors in
     46: TensorFlow inference APIs are provided for most common mobile/embedded platforms
     50: On Android, TensorFlow Lite inference can be performed using either Java or C++
     57: TensorFlow Lite provides Swift/Objective C++ APIs for inference on iOS. An
     62: and Python APIs can be used to run inference.
     70: use. TensorFlow Lite is designed for fast inference on small devices so it
    109: TensorFlow Lite's Java API supports on-device inference and is provided as an
    (additional matches not shown)
  get_started.md:
     52: both floating point and quantized inference.
    152: the arguments for specifying the output nodes for inference in the
    161: convert to a mobile inference graph.
    183: ## 3. Use the TensorFlow Lite model for inference in a mobile app
    190: a JNI library is provided as an interface. This is only meant for inference—it
    267: Another benefit with GPU inference is its power efficiency. GPUs carry out the
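The inference.md matches above describe the usual on-device flow (load a model, transform input data, run inference, then interpret the output tensors) and note that a Java API is available on Android. Below is a minimal sketch of that flow with the TensorFlow Lite Java Interpreter class; the model path, the 224x224x3 float input, and the 1001-class output are placeholder assumptions, not values taken from the matched files.

    import java.io.File;
    import org.tensorflow.lite.Interpreter;

    public class MinimalTfLiteInference {
      public static void main(String[] args) {
        // Load the model; the file name is a placeholder.
        Interpreter interpreter = new Interpreter(new File("model.tflite"));

        // Transform input data into the shape the model expects
        // (assumed here: one 224x224 RGB image as float32).
        float[][][][] input = new float[1][224][224][3];

        // Pre-allocate the output buffer (assumed: 1001 class scores).
        float[][] output = new float[1][1001];

        // Run inference.
        interpreter.run(input, output);

        // Interpret the output tensor, e.g. pick the highest-scoring class.
        int best = 0;
        for (int i = 1; i < output[0].length; i++) {
          if (output[0][i] > output[0][best]) {
            best = i;
          }
        }
        System.out.println("Top class index: " + best);

        interpreter.close();
      }
    }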
/external/tensorflow/tensorflow/lite/g3doc/performance/
  gpu_advanced.md:
     16: parallelism typically results in lower latency. In the best scenario, inference
     29: Another benefit that comes with GPU inference is its power efficiency. A GPU
     73: // Run inference.
    106: // Run inference.
    218: // Run inference; the null input argument indicates use of the bound buffer for input.
    251: // Run inference; the null output argument indicates use of the bound buffer for output.
    263: `Interpreter::ModifyGraphWithDelegate()`. Additionally, the inference output is,
    276: // Run inference.
    280: Note: Once the default behavior is turned off, copying the inference output from
    303: optimization for on-device inference.
  gpu.md:
     11: resulting in lower latency. In the best scenario, inference on the GPU may now
     17: Another benefit with GPU inference is its power efficiency. GPUs carry out the
    148: // Run inference
    174: // Run inference
    195: …t detection](https://ai.googleblog.com/2018/07/accelerated-training-and-inference-with.html) [[dow…
    236: on-device inference.
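Both gpu_advanced.md and gpu.md above contain "// Run inference" lines from their delegate examples. As a rough sketch of the Java-side pattern those guides describe, the snippet below wires a GPU delegate into the interpreter before running inference; it assumes the org.tensorflow.lite.gpu.GpuDelegate binding from the separate TensorFlow Lite GPU dependency, and the model file and tensor shapes are placeholders. The C++ path via Interpreter::ModifyGraphWithDelegate() mentioned at gpu_advanced.md line 263 is not shown here.

    import java.io.File;
    import org.tensorflow.lite.Interpreter;
    import org.tensorflow.lite.gpu.GpuDelegate;

    public class GpuDelegateInference {
      public static void main(String[] args) {
        // Create the GPU delegate and register it with the interpreter options.
        GpuDelegate delegate = new GpuDelegate();
        Interpreter.Options options = new Interpreter.Options().addDelegate(delegate);
        Interpreter interpreter = new Interpreter(new File("model.tflite"), options);

        // Placeholder input/output buffers; real shapes depend on the model.
        float[][][][] input = new float[1][224][224][3];
        float[][] output = new float[1][1001];

        // Run inference; supported ops execute on the GPU, the rest fall back to the CPU.
        interpreter.run(input, output);

        // Release resources: close the interpreter before the delegate it uses.
        interpreter.close();
        delegate.close();
      }
    }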
/external/tensorflow/tensorflow/lite/g3doc/convert/
  cmdline_reference.md:
     77: * When performing float inference (`--inference_type=FLOAT`) on a
     79: the inference code according to the above formula, before proceeding
     80: with float inference.
     81: * When performing quantized inference
     83: the inference code. However, the quantization parameters of all arrays,
     86: quantized inference code. `mean_value` must be an integer when
     87: performing quantized inference.
    117: requiring floating-point inference. For such image models, the uint8 input
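The cmdline_reference.md matches refer to "the above formula" without showing it, because the surrounding lines are not part of the match. For orientation only, the input normalization that section of the converter documentation describes has the form below; treat it as a restatement from the converter docs rather than text recovered from this file.

    real_input_value = (quantized_input_value - mean_value) / std_dev_value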
/external/tensorflow/tensorflow/core/api_def/base_api/
  api_def_FusedBatchNorm.pbtxt:
     24: A 1D Tensor for population mean. Used for inference only;
     31: A 1D Tensor for population variance. Used for inference only;
     91: or inference.
  api_def_FusedBatchNormV2.pbtxt:
     24: A 1D Tensor for population mean. Used for inference only;
     31: A 1D Tensor for population variance. Used for inference only;
     97: or inference.
  api_def_TPUOrdinalSelector.pbtxt:
     13: (for regular inference) to execute the TPU program on. The output is
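The FusedBatchNorm entries above note that the population mean and variance inputs are used only for inference. For context, the standard batch-normalization computation those statistics feed into at inference time is, in generic form (not quoted from the pbtxt files):

    y = scale * (x - population_mean) / sqrt(population_variance + epsilon) + offset

During training the per-batch mean and variance are used in place of the population statistics, which are instead updated as running averages.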
/external/tensorflow/tensorflow/compiler/xla/service/
  hlo_get_dimension_size_rewriter.cc:
     62: TF_ASSIGN_OR_RETURN(DynamicDimensionInference inference,   [in Run()]
     68: ReplaceGetSize(instruction, &inference));   [in Run()]
/external/desugar/java/com/google/devtools/build/android/desugar/
  TryWithResourcesRewriter.java:
    202: BytecodeTypeInference inference = null;   [local in visitMethod()]
    209: inference = new BytecodeTypeInference(access, internalName, name, desc);   [in visitMethod()]
    210: inference.setDelegateMethodVisitor(visitor);   [in visitMethod()]
    211: visitor = inference;   [in visitMethod()]
    215: new TryWithResourceVisitor(internalName, name + desc, visitor, classLoader, inference);   [in visitMethod()]
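The TryWithResourcesRewriter.java matches (lines 202-215), like the ByteCodeTypePrinter.java matches in the first entry, show the same wiring: a BytecodeTypeInference is created per method, given the downstream MethodVisitor as its delegate, and then used as the visitor itself so it sees every instruction first. The sketch below isolates that pattern; the wrapper class, its field, and the ASM API level are illustrative assumptions, while BytecodeTypeInference is the desugar class from the matched files (assumed accessible from the same package).

    import org.objectweb.asm.ClassVisitor;
    import org.objectweb.asm.MethodVisitor;
    import org.objectweb.asm.Opcodes;

    // Hypothetical wrapper illustrating the chaining pattern; not the actual rewriter class.
    class TypeInferenceChainingClassVisitor extends ClassVisitor {
      private final String internalName; // internal name of the class being visited

      TypeInferenceChainingClassVisitor(ClassVisitor downstream, String internalName) {
        super(Opcodes.ASM7, downstream); // ASM API level chosen for illustration
        this.internalName = internalName;
      }

      @Override
      public MethodVisitor visitMethod(
          int access, String name, String desc, String signature, String[] exceptions) {
        MethodVisitor visitor = super.visitMethod(access, name, desc, signature, exceptions);
        // Mirrors lines 209-211 above: build the inference pass, point it at the
        // downstream visitor, and return it so it observes each instruction before
        // forwarding it on.
        BytecodeTypeInference inference =
            new BytecodeTypeInference(access, internalName, name, desc);
        inference.setDelegateMethodVisitor(visitor);
        return inference;
      }
    }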
/external/parameter-framework/upstream/doc/requirements/
  APIs.md:
    136: Exports the "Domains" (aka "Settings") which is the inference engine's data.
    141: Imports previously-exported data into the inference engine. See [req-deserializable].
    145: Exports a given part of the inference engine data. See [Serialization of individual data].
    149: Imports a partial inference engine data as previously exported. See section
/external/tensorflow/tensorflow/lite/schema/
  BUILD:
     63: # Generic schema for inference on device.
     69: # Generic schema for inference on device (but with reflections makes bigger).
/external/tensorflow/tensorflow/contrib/session_bundle/
  README.md:
     12: [TensorFlow](https://www.tensorflow.org/) models for inference.
     70: Graphs used for inference tasks typically have set of inputs and outputs used at
     71: inference time. We call this a 'Signature'.
     75: Graphs used for standard inference tasks have standard sets of inputs and
    192: parameter, `target_node_names` is typically null at inference time. The last
    207: needed for both training and inference.
    263: 3. [Optional] Build inference graph I.
/external/tensorflow/tensorflow/contrib/quantize/
  README.md:
      4: for both training and inference. There are two aspects to this:
      6: * Operator fusion at inference time are accurately modeled at training time.
      7: * Quantization effects at inference are modeled at training time.
      9: For efficient inference, TensorFlow combines batch normalization with the preceding
     24: converted to a fixed point inference model with little effort, eliminating the
/external/tensorflow/tensorflow/lite/toco/
  toco_flags.proto:
     26: // Tensorflow's mobile inference model.
     60: // inference. For such image models, the uint8 input is quantized, i.e.
     96: // to estimate the performance of quantized inference, without caring about
    121: // transformations, in order to ensure that quantized inference has the
    127: // transformations that are necessary in order to generate inference
    131: // at the cost of no longer faithfully matching inference and training
/external/tensorflow/tensorflow/core/protobuf/tpu/
  tpu_embedding_configuration.proto:
     25: // Mode. Should the embedding layer program be run for inference (just forward
     39: // Number of TPU hosts used for inference/training.
     42: // Number of TensorCore used for inference/training.
/external/tensorflow/tensorflow/lite/tools/benchmark/android/
  README.md:
     62: adb logcat | grep "Average inference"
     64: ... tflite : Average inference timings in us: Warmup: 91471, Init: 4108, Inference: 80660.1
/external/apache-commons-math/src/main/java/org/apache/commons/math/stat/inference/
  UnknownDistributionChiSquareTest.java:
     17: package org.apache.commons.math.stat.inference;
  OneWayAnova.java:
     17: package org.apache.commons.math.stat.inference;
  ChiSquareTest.java:
     17: package org.apache.commons.math.stat.inference;
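These three files all sit in org.apache.commons.math.stat.inference, Commons Math's hypothesis-testing package. A small usage sketch of the chi-square goodness-of-fit test from that package follows; the expected/observed counts are made-up illustration data, and the signatures follow the Commons Math 2.x API, so treat them as assumptions for other versions.

    import org.apache.commons.math.stat.inference.ChiSquareTest;
    import org.apache.commons.math.stat.inference.ChiSquareTestImpl;

    public class ChiSquareExample {
      public static void main(String[] args) throws Exception {
        // Made-up data: expected counts vs. observed counts for a six-sided die.
        double[] expected = {10, 10, 10, 10, 10, 10};
        long[] observed = {8, 12, 9, 11, 14, 6};

        ChiSquareTest test = new ChiSquareTestImpl();
        double statistic = test.chiSquare(expected, observed);   // chi-square test statistic
        double pValue = test.chiSquareTest(expected, observed);  // observed significance level

        System.out.println("chi-square = " + statistic + ", p-value = " + pValue);
      }
    }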
/external/tensorflow/tensorflow/lite/tools/optimize/g3doc/
  quantize_weights.md:
     37: latency for "hybrid" kernels. In this mode the inference type is still FLOAT
     42: float32 during inference to allow original float32 kernels to run. Since we
/external/tensorflow/tensorflow/lite/g3doc/models/image_classification/
  android.md:
     89: The mobile application code that pre-processes the images and runs inference is
    152: ### Run inference
    154: The output of the inference is stored in a byte array `labelprob.` We
    155: pre-allocate the memory for the output buffer. Then, we run inference on the
/external/tensorflow/tensorflow/examples/multibox_detector/
  BUILD:
      2: # TensorFlow C++ inference example for labeling images.
/external/tensorflow/tensorflow/lite/
  README.md:
      4: devices. It enables low-latency inference of on-device machine learning models