/frameworks/ml/nn/tools/api/
D | types.spec
     91: * Supported tensor rank: 4, with "NHWC" (i.e., Num_samples, Height, Width,
    121: * Supported tensor rank: 4, with "NHWC" or "NCHW" data layout.
    128: * Since %{APILevel29}, generic zero-sized input tensor is supported. Zero
    134: * Since %{APILevel29}, zero batches is supported for this tensor.
    158: * A tensor of OEM specific values.
    181: * Types prefaced with %{ANN}TENSOR_* must be used for tensor data (i.e., tensors
    200: /** A tensor of 32 bit floating point values. */
    202: /** A tensor of 32 bit integer values. */
    205: * A tensor of 8 bit unsigned integers that represent real numbers.
    207: * Attached to this tensor are two numbers that can be used to convert the
    [all …]
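The `types.spec` fragment above (lines 205–207) describes the asymmetric 8-bit quantized tensor type: the two numbers attached to the tensor are a scale and a zeroPoint, and a quantized byte `q` represents the real value `(q - zeroPoint) * scale`. A minimal sketch of that conversion, with illustrative function names not taken from the spec:

```python
import numpy as np

def dequantize(q, scale, zero_point):
    """Map quantized uint8 values to the real numbers they represent."""
    return (np.asarray(q, dtype=np.int32) - zero_point) * scale

def quantize(real, scale, zero_point):
    """Map real numbers back to uint8: divide by scale, round, offset, clamp."""
    q = np.round(np.asarray(real, dtype=np.float64) / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)
```

For example, with `scale = 0.1` and `zero_point = 10`, the real value 0.5 quantizes to the byte 15 and dequantizes back to 0.5 (up to rounding error).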
D | NeuralNetworks.t
    349: * should typically create one shared memory object that contains every constant tensor
    363: * of the element type byte size, e.g., a tensor with
    589: * A tensor operand type with all dimensions specified is "fully
    591: * known at model construction time), a tensor operand type should be
    595: * If a tensor operand's type is not fully specified, the dimensions
    601: * <p>In the following situations, a tensor operand type must be fully
    609: * model within a compilation. A fully specified tensor operand type
    617: * not have a fully specified tensor operand type.</li>
    622: * A fully specified tensor operand type must either be provided
    628: * A tensor operand type of specified rank but some number of
    [all …]
/frameworks/ml/nn/runtime/test/specs/V1_3/
D | bidirectional_sequence_rnn_1_3.mod.py
     20: def convert_to_time_major(tensor, tensor_shape):   argument
     21:     return np.array(tensor).reshape(tensor_shape).transpose(
     30: def reverse_batch_major(tensor, tensor_shape):   argument
     31:     return np.array(tensor).reshape(tensor_shape)[:, ::-1, :].flatten().tolist()
     33: def split_tensor_in_two(tensor, tensor_shape):   argument
     34:     tensor = np.array(tensor).reshape(tensor_shape)
     35:     left, right = np.split(tensor, 2, axis=len(tensor_shape) - 1)
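The truncated test-spec helpers above can be assembled into complete functions. This sketch fills in only what the fragments already imply — the `transpose([1, 0, 2])` permutation comes from the `state_output` variant of the same file, and the `flatten().tolist()` tail and the return of `split_tensor_in_two` are assumptions matching the other helpers:

```python
import numpy as np

def convert_to_time_major(tensor, tensor_shape):
    # [batch, time, feature] -> [time, batch, feature], flattened back to a list.
    return np.array(tensor).reshape(tensor_shape).transpose(
        [1, 0, 2]).flatten().tolist()

def reverse_batch_major(tensor, tensor_shape):
    # Reverse the time axis while keeping batch-major layout.
    return np.array(tensor).reshape(tensor_shape)[:, ::-1, :].flatten().tolist()

def split_tensor_in_two(tensor, tensor_shape):
    # Split along the last axis, e.g. to separate forward and backward halves
    # of a bidirectional RNN's weights or outputs.
    tensor = np.array(tensor).reshape(tensor_shape)
    left, right = np.split(tensor, 2, axis=len(tensor_shape) - 1)
    return left.flatten().tolist(), right.flatten().tolist()
```

For a `[2, 2, 2]` batch-major input `[1..8]`, `convert_to_time_major` swaps the first two axes, yielding `[1, 2, 5, 6, 3, 4, 7, 8]`.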
D | bidirectional_sequence_rnn_state_output.mod.py
     20: def convert_to_time_major(tensor, tensor_shape):   argument
     21:     return np.array(tensor).reshape(tensor_shape).transpose([1, 0, 2
     31: def reverse_batch_major(tensor, tensor_shape):   argument
     32:     return np.array(tensor).reshape(tensor_shape)[:, ::-1, :].flatten().tolist()
     35: def split_tensor_in_two(tensor, tensor_shape):   argument
     36:     tensor = np.array(tensor).reshape(tensor_shape)
     37:     left, right = np.split(tensor, 2, axis=len(tensor_shape) - 1)
D | unidirectional_sequence_rnn.mod.py
     42: def convert_to_time_major(tensor, num_batches, max_time, input_size):   argument
     43:     return np.array(tensor).reshape([num_batches, max_time, input_size
/frameworks/ml/nn/runtime/test/specs/V1_2/
D | bidirectional_sequence_rnn.mod.py
     20: def convert_to_time_major(tensor, tensor_shape):   argument
     21:     return np.array(tensor).reshape(tensor_shape).transpose(
     30: def reverse_batch_major(tensor, tensor_shape):   argument
     31:     return np.array(tensor).reshape(tensor_shape)[:, ::-1, :].flatten().tolist()
     33: def split_tensor_in_two(tensor, tensor_shape):   argument
     34:     tensor = np.array(tensor).reshape(tensor_shape)
     35:     left, right = np.split(tensor, 2, axis=len(tensor_shape) - 1)
D | unidirectional_sequence_rnn.mod.py
     39: def convert_to_time_major(tensor, num_batches, max_time, input_size):   argument
     40:     return np.array(tensor).reshape([num_batches, max_time,
/frameworks/ml/nn/tools/test_generator/
D | spec_visualizer.py
    148: for tensor in op.ins:
    150:     "source": str(tensor),
    153: for tensor in op.outs:
    155:     "target": str(tensor),
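The `spec_visualizer.py` fragments above suggest how the visualizer turns a model spec into a graph: each operation's input tensors become edges into the op and its output tensors become edges out of it. A simplified sketch of that idea — `collect_edges` and the edge-dict shape are illustrative assumptions; only the `op.ins`/`op.outs` field names come from the fragments:

```python
def collect_edges(op, op_name):
    """Build a list of {source, target} edge dicts for one operation."""
    edges = []
    for tensor in op.ins:
        # Input tensors feed into the operation node.
        edges.append({"source": str(tensor), "target": op_name})
    for tensor in op.outs:
        # Output tensors flow out of the operation node.
        edges.append({"source": op_name, "target": str(tensor)})
    return edges
```

A visualizer would call this once per operation and hand the accumulated edge list to a graph-rendering front end.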
D | README.md
    199: … as an internal operand. Will skip if the model does not have any output tensor that is compatible…
    231: …model to model inputs. Will skip if the model does not have any constant tensor, or if the model h…
    233: …t as an internal operand. Will skip if the model does not have any input tensor that is compatible…
/frameworks/ml/nn/common/operations/
D | QuantizedLSTMTest.cpp
    225: Result setInputTensor(Execution* execution, int tensor, const std::vector<T>& data) {   in setInputTensor() argument
    226:     return execution->setInput(tensor, data.data(), sizeof(T) * data.size());   in setInputTensor()
    229: Result setOutputTensor(Execution* execution, int tensor, std::vector<T>* data) {   in setOutputTensor() argument
    230:     return execution->setOutput(tensor, data->data(), sizeof(T) * data->size());   in setOutputTensor()
D | QLSTM.cpp
     98: inline bool hasTensor(IOperationExecutionContext* context, const uint32_t tensor) {   in hasTensor() argument
     99:     return context->getInputBuffer(tensor) != nullptr;   in hasTensor()
    173: for (const int tensor : requiredTensorInputs) {   in prepare() local
    174:     NN_RET_CHECK(!context->isOmittedInput(tensor))   in prepare()
    175:         << "required input " << tensor << " is omitted";   in prepare()
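The `QLSTM.cpp` fragments above show the validation pattern: a tensor is "present" when its input buffer is non-null, and `prepare()` rejects an execution that omits any required input. A hedged Python analogue of that check — here a plain dict stands in for `IOperationExecutionContext`, and the function names are illustrative:

```python
def has_tensor(context, tensor):
    """True when the tensor index has a non-null buffer in the context."""
    return context.get(tensor) is not None

def check_required_inputs(context, required_tensor_inputs):
    """Reject the execution if any required input tensor is omitted,
    mirroring the NN_RET_CHECK loop in prepare()."""
    for tensor in required_tensor_inputs:
        if not has_tensor(context, tensor):
            raise ValueError(f"required input {tensor} is omitted")
```

The same two-step shape (a cheap presence predicate, then a loop over a required-index list) appears in `UnidirectionalSequenceLSTM.cpp` below, which is why the helper is factored out.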
D | UnidirectionalSequenceLSTM.cpp
     93: inline bool hasTensor(IOperationExecutionContext* context, const uint32_t tensor) {   in hasTensor() argument
     94:     return context->getInputBuffer(tensor) != nullptr;   in hasTensor()
/frameworks/ml/nn/extensions/
D | README.md
     61: * A custom tensor type.
     63: * Attached to this tensor is {@link ExampleTensorParams}.
     76: * * 0: A tensor of {@link EXAMPLE_TENSOR}.