
Searched for refs:inference (results 1 – 25 of 184), sorted by relevance


/external/desugar/test/java/com/google/devtools/build/android/desugar/
ByteCodeTypePrinter.java:88 BytecodeTypeInference inference = new BytecodeTypeInference(access, internalName, name, desc); in visitMethod() local
89 mv = new MethodIrTypeDumper(mv, inference, printWriter); in visitMethod()
90 inference.setDelegateMethodVisitor(mv); in visitMethod()
92 return inference; in visitMethod()
109 private final BytecodeTypeInference inference; field in ByteCodeTypePrinter.MethodIrTypeDumper
114 MethodVisitor visitor, BytecodeTypeInference inference, PrintWriter printWriter) { in MethodIrTypeDumper() argument
116 this.inference = inference; in MethodIrTypeDumper()
121 printer.print(" |__STACK: " + inference.getOperandStackAsString() + "\n"); in printTypeOfOperandStack()
122 printer.print(" |__LOCAL: " + inference.getLocalsAsString() + "\n"); in printTypeOfOperandStack()
/external/tensorflow/tensorflow/lite/g3doc/guide/
inference.md:1 # TensorFlow Lite inference
7 TensorFlow Lite inference is the process of executing a TensorFlow Lite
18 TensorFlow Lite inference on device typically follows these steps.
38 The user retrieves results from model inference and interprets the tensors in
46 TensorFlow inference APIs are provided for most common mobile/embedded platforms
50 On Android, TensorFlow Lite inference can be performed using either Java or C++
57 TensorFlow Lite provides Swift/Objective C++ APIs for inference on iOS. An
62 and Python APIs can be used to run inference.
70 use. TensorFlow Lite is designed for fast inference on small devices so it
109 TensorFlow Lite's Java API supports on-device inference and is provided as an
[all …]
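The inference.md hits above outline the usual on-device flow (load a model, set inputs, run inference, read the output tensors) and note that the Java API is the common entry point on Android. A minimal sketch of that flow against the public `org.tensorflow.lite.Interpreter` API, with a hypothetical model path and made-up tensor shapes, might look like this:

```java
import java.io.File;
import org.tensorflow.lite.Interpreter;

public class TfLiteInferenceSketch {
  public static void main(String[] args) {
    // Hypothetical model path; an Android app would usually map the model from assets instead.
    File modelFile = new File("model.tflite");

    // Illustrative shapes only: one 224x224 RGB image in, 1000 class scores out.
    float[][][][] input = new float[1][224][224][3];
    float[][] output = new float[1][1000];

    // Load the model and build the interpreter.
    Interpreter interpreter = new Interpreter(modelFile);

    // Fill `input`, run inference, then read results back from `output`.
    interpreter.run(input, output);
    interpreter.close();
  }
}
```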
get_started.md:52 both floating point and quantized inference.
152 the arguments for specifying the output nodes for inference in the
161 convert to a mobile inference graph.
183 ## 3. Use the TensorFlow Lite model for inference in a mobile app
190 a JNI library is provided as an interface. This is only meant for inference—it
267 Another benefit with GPU inference is its power efficiency. GPUs carry out the
/external/tensorflow/tensorflow/lite/g3doc/performance/
gpu_advanced.md:16 parallelism typically results in lower latency. In the best scenario, inference
29 Another benefit that comes with GPU inference is its power efficiency. A GPU
73 // Run inference.
106 // Run inference.
218 // Run inference; the null input argument indicates use of the bound buffer for input.
251 // Run inference; the null output argument indicates use of the bound buffer for output.
263 `Interpreter::ModifyGraphWithDelegate()`. Additionally, the inference output is,
276 // Run inference.
280 Note: Once the default behavior is turned off, copying the inference output from
303 optimization for on-device inference.
gpu.md:11 resulting in lower latency. In the best scenario, inference on the GPU may now
17 Another benefit with GPU inference is its power efficiency. GPUs carry out the
148 // Run inference
174 // Run inference
195 …t detection](https://ai.googleblog.com/2018/07/accelerated-training-and-inference-with.html) [[dow…
236 on-device inference.
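Both GPU entries above describe the same pattern: attach the TensorFlow Lite GPU delegate to the interpreter, then "run inference" exactly as on the CPU. A small Java sketch of that pattern, with a hypothetical model path and illustrative tensor shapes (the `tensorflow-lite-gpu` dependency is assumed to be available), could be:

```java
import java.io.File;
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.gpu.GpuDelegate;

public class GpuDelegateSketch {
  public static void main(String[] args) {
    // Hypothetical model path and shapes, for illustration only.
    File modelFile = new File("model.tflite");
    float[][][][] input = new float[1][224][224][3];
    float[][] output = new float[1][1000];

    // Create the GPU delegate and register it so supported ops run on the GPU.
    GpuDelegate delegate = new GpuDelegate();
    Interpreter.Options options = new Interpreter.Options().addDelegate(delegate);
    Interpreter interpreter = new Interpreter(modelFile, options);

    // Run inference.
    interpreter.run(input, output);

    // Release the interpreter before the delegate it was using.
    interpreter.close();
    delegate.close();
  }
}
```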
/external/tensorflow/tensorflow/lite/g3doc/convert/
cmdline_reference.md:77 * When performing float inference (`--inference_type=FLOAT`) on a
79 the inference code according to the above formula, before proceeding
80 with float inference.
81 * When performing quantized inference
83 the inference code. However, the quantization parameters of all arrays,
86 quantized inference code. `mean_value` must be an integer when
87 performing quantized inference.
117 requiring floating-point inference. For such image models, the uint8 input
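The cmdline_reference hits mention folding `mean_value`/`std_dev` into the float inference code "according to the above formula", but the excerpt cuts the formula itself off. Assuming the usual converter convention, real_value = (quantized_value - mean_value) / std_dev_value, the dequantization being described is just:

```java
public class DequantizeSketch {
  // Assumed converter formula (not shown in the excerpt above):
  // real_value = (quantized_value - mean_value) / std_dev_value
  static float dequantize(int quantizedValue, float meanValue, float stdDevValue) {
    return (quantizedValue - meanValue) / stdDevValue;
  }

  public static void main(String[] args) {
    // Illustrative values for a uint8 image input scaled to roughly [-1, 1].
    System.out.println(dequantize(0, 127.5f, 127.5f));    // -1.0
    System.out.println(dequantize(255, 127.5f, 127.5f));  //  1.0
  }
}
```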
/external/tensorflow/tensorflow/core/api_def/base_api/
api_def_FusedBatchNorm.pbtxt:24 A 1D Tensor for population mean. Used for inference only;
31 A 1D Tensor for population variance. Used for inference only;
91 or inference.
api_def_FusedBatchNormV2.pbtxt:24 A 1D Tensor for population mean. Used for inference only;
31 A 1D Tensor for population variance. Used for inference only;
97 or inference.
api_def_TPUOrdinalSelector.pbtxt:13 (for regular inference) to execute the TPU program on. The output is
/external/tensorflow/tensorflow/compiler/xla/service/
hlo_get_dimension_size_rewriter.cc:62 TF_ASSIGN_OR_RETURN(DynamicDimensionInference inference, in Run()
68 ReplaceGetSize(instruction, &inference)); in Run()
/external/desugar/java/com/google/devtools/build/android/desugar/
TryWithResourcesRewriter.java:202 BytecodeTypeInference inference = null; in visitMethod() local
209 inference = new BytecodeTypeInference(access, internalName, name, desc); in visitMethod()
210 inference.setDelegateMethodVisitor(visitor); in visitMethod()
211 visitor = inference; in visitMethod()
215 new TryWithResourceVisitor(internalName, name + desc, visitor, classLoader, inference); in visitMethod()
/external/parameter-framework/upstream/doc/requirements/
APIs.md:136 Exports the "Domains" (aka "Settings") which is the inference engine's data.
141 Imports previously-exported data into the inference engine. See [req-deserializable].
145 Exports a given part of the inference engine data. See [Serialization of individual data].
149 Imports a partial inference engine data as previously exported. See section
/external/tensorflow/tensorflow/lite/schema/
BUILD:63 # Generic schema for inference on device.
69 # Generic schema for inference on device (but with reflections makes bigger).
/external/tensorflow/tensorflow/contrib/session_bundle/
README.md:12 [TensorFlow](https://www.tensorflow.org/) models for inference.
70 Graphs used for inference tasks typically have a set of inputs and outputs used at
71 inference time. We call this a 'Signature'.
75 Graphs used for standard inference tasks have standard sets of inputs and
192 parameter, `target_node_names` is typically null at inference time. The last
207 needed for both training and inference.
263 3. [Optional] Build inference graph I.
/external/tensorflow/tensorflow/contrib/quantize/
README.md:4 for both training and inference. There are two aspects to this:
6 * Operator fusion at inference time is accurately modeled at training time.
7 * Quantization effects at inference are modeled at training time.
9 For efficient inference, TensorFlow combines batch normalization with the preceding
24 converted to a fixed point inference model with little effort, eliminating the
/external/tensorflow/tensorflow/lite/toco/
toco_flags.proto:26 // Tensorflow's mobile inference model.
60 // inference. For such image models, the uint8 input is quantized, i.e.
96 // to estimate the performance of quantized inference, without caring about
121 // transformations, in order to ensure that quantized inference has the
127 // transformations that are necessary in order to generate inference
131 // at the cost of no longer faithfully matching inference and training
/external/tensorflow/tensorflow/core/protobuf/tpu/
tpu_embedding_configuration.proto:25 // Mode. Should the embedding layer program be run for inference (just forward
39 // Number of TPU hosts used for inference/training.
42 // Number of TensorCore used for inference/training.
/external/tensorflow/tensorflow/lite/tools/benchmark/android/
README.md:62 adb logcat | grep "Average inference"
64 ... tflite : Average inference timings in us: Warmup: 91471, Init: 4108, Inference: 80660.1
/external/apache-commons-math/src/main/java/org/apache/commons/math/stat/inference/
UnknownDistributionChiSquareTest.java:17 package org.apache.commons.math.stat.inference;
OneWayAnova.java:17 package org.apache.commons.math.stat.inference;
ChiSquareTest.java:17 package org.apache.commons.math.stat.inference;
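The three hits above are from the Apache Commons Math `org.apache.commons.math.stat.inference` package, which supplies hypothesis-testing utilities such as the chi-square test and one-way ANOVA. A short usage sketch, assuming the commons-math 2.x `TestUtils` facade from the same package and made-up count data:

```java
import org.apache.commons.math.MathException;
import org.apache.commons.math.stat.inference.TestUtils;

public class ChiSquareSketch {
  public static void main(String[] args) throws MathException {
    // Hypothetical die-roll data: expected uniform counts vs. observed counts.
    double[] expected = {10, 10, 10, 10, 10, 10};
    long[] observed = {8, 12, 9, 11, 14, 6};

    // Chi-square statistic and p-value from the stat.inference test utilities.
    double statistic = TestUtils.chiSquare(expected, observed);
    double pValue = TestUtils.chiSquareTest(expected, observed);

    System.out.println("chi-square statistic = " + statistic);
    System.out.println("p-value = " + pValue);
  }
}
```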
/external/tensorflow/tensorflow/lite/tools/optimize/g3doc/
quantize_weights.md:37 latency for "hybrid" kernels. In this mode the inference type is still FLOAT
42 float32 during inference to allow original float32 kernels to run. Since we
/external/tensorflow/tensorflow/lite/g3doc/models/image_classification/
android.md:89 The mobile application code that pre-processes the images and runs inference is
152 ### Run inference
154 The output of the inference is stored in a byte array `labelprob`. We
155 pre-allocate the memory for the output buffer. Then, we run inference on the
/external/tensorflow/tensorflow/examples/multibox_detector/
BUILD:2 # TensorFlow C++ inference example for labeling images.
/external/tensorflow/tensorflow/lite/
README.md:4 devices. It enables low-latency inference of on-device machine learning models
