
Searched refs:LSTM (Results 1 – 25 of 95) sorted by relevance


/external/tensorflow/tensorflow/lite/g3doc/convert/
rnn.md
6 fused LSTM operations. Fused operations exist to maximize the performance of
13 1. Provide **native support for standard TensorFlow RNN APIs** like Keras LSTM.
39 # Keras LSTM layers.
62 Keras LSTM to TensorFlow Lite
70 ### Keras LSTM conversion (recommended)
72 We support out-of-the-box conversion of Keras LSTM to TensorFlow Lite. For
74 [Keras LSTM interface](https://github.com/tensorflow/tensorflow/blob/35a3ab91b42503776f428bda574b74…
78 Also important is to highlight the TensorFlow Lite’s LSTM contract with respect
90 #### Keras LSTM Variants
94 Users may choose time-major or no time-major. Keras LSTM adds a time-major
[all …]
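
The rnn.md hits above describe the out-of-the-box conversion of a Keras LSTM into TensorFlow Lite's fused LSTM operation. A minimal sketch of that flow using the stock `TFLiteConverter` API; the model shape and layer sizes here are arbitrary, not taken from the doc:

```python
import tensorflow as tf

# Any standard Keras LSTM model; shapes are illustrative.
inputs = tf.keras.Input(shape=(10, 8))            # (timesteps, features)
outputs = tf.keras.layers.LSTM(16)(inputs)
model = tf.keras.Model(inputs, outputs)

# Out-of-the-box conversion; the Keras LSTM is emitted as a single fused
# TFLite LSTM op instead of an unrolled graph of primitive ops.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("lstm.tflite", "wb") as f:
    f.write(tflite_model)
```
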
operation_fusion.md
47 LSTM, convolution (conv2d, bias add, relu), fully connected (matmul, bias add,
48 relu) and more. In TensorFlow Lite, LSTM quantization is currently only
49 implemented in the fused LSTM operations.
62 TensorFlow Lite operation. For example, there are many LSTM implementations
65 same fused LSTM operation in TensorFlow Lite.
98 [Keras LSTM](https://github.com/tensorflow/tensorflow/blob/1099faa8d6a941ef44d09ed8c372ff0ffda94112…
/external/tensorflow/tensorflow/python/keras/layers/
lstm_v2_test.py
75 layer = rnn.LSTM(
86 layer = rnn.LSTM(1, activation=nn.tanh)
89 layer = rnn.LSTM(1, recurrent_activation=nn.sigmoid)
102 layer = rnn.LSTM(units, return_sequences=True)
112 layer = rnn.LSTM(units, input_shape=(None, embedding_dim))
125 model.add(rnn.LSTM(10, return_sequences=True, unroll=False))
126 model.add(rnn.LSTM(5, return_sequences=True, unroll=False))
133 layer_class = rnn.LSTM
149 layer = rnn.LSTM(units)
183 layer = rnn.LSTM(units)
[all …]
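
For context on what these test hits exercise, here is a minimal sketch of the same `tf.keras.layers.LSTM` options seen above (units, `return_sequences`, `activation`, `recurrent_activation`); shapes and values are illustrative:

```python
import numpy as np
import tensorflow as tf

units, timesteps, features = 4, 7, 3
x = np.random.rand(2, timesteps, features).astype("float32")

# return_sequences=True -> one output vector per timestep: (2, 7, 4)
seq_layer = tf.keras.layers.LSTM(units, return_sequences=True)
print(seq_layer(x).shape)

# Default return_sequences=False -> only the final output: (2, 4)
last_layer = tf.keras.layers.LSTM(units,
                                  activation="tanh",
                                  recurrent_activation="sigmoid")
print(last_layer(x).shape)
```
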
lstm_test.py
44 keras.layers.LSTM,
58 keras.layers.LSTM,
75 layer = keras.layers.LSTM(units, return_sequences=True)
85 layer = keras.layers.LSTM(units, input_shape=(None, embedding_dim))
103 keras.layers.LSTM,
110 layer = keras.layers.LSTM(2, recurrent_dropout=0.1, implementation=2)
121 keras.layers.LSTM,
128 layer_class = keras.layers.LSTM
150 layer_class = keras.layers.LSTM
180 layer_class = keras.layers.LSTM
[all …]
recurrent_v2_test.py
41 @parameterized.parameters([rnn_v2.LSTM, rnn_v2.GRU])
68 @parameterized.parameters([rnn_v2.LSTM, rnn_v2.GRU])
95 @parameterized.parameters([rnn_v2.LSTM, rnn_v2.GRU])
117 @parameterized.parameters([rnn_v2.LSTM, rnn_v2.GRU])
serialization_test.py
140 @parameterized.parameters([rnn_v1.LSTM, rnn_v2.LSTM])
149 self.assertIsInstance(new_layer, rnn_v2.LSTM)
151 self.assertIsInstance(new_layer, rnn_v1.LSTM)
152 self.assertNotIsInstance(new_layer, rnn_v2.LSTM)
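
These serialization hits check that an LSTM layer round-trips through Keras layer (de)serialization and comes back as the expected v1/v2 class. A hedged sketch of that round-trip using the public API; the layer size is illustrative:

```python
import tensorflow as tf

layer = tf.keras.layers.LSTM(8, return_sequences=True)

# Serialize to a plain config, then rebuild an equivalent layer from it.
config = tf.keras.layers.serialize(layer)
new_layer = tf.keras.layers.deserialize(config)

assert isinstance(new_layer, tf.keras.layers.LSTM)
assert new_layer.units == 8
```
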
__init__.py
226 from tensorflow.python.keras.layers.recurrent_v2 import LSTM
230 from tensorflow.python.keras.layers.recurrent import LSTM as LSTMV1
234 LSTMV2 = LSTM
239 from tensorflow.python.keras.layers.recurrent import LSTM
243 from tensorflow.python.keras.layers.recurrent_v2 import LSTM as LSTMV2
247 LSTMV1 = LSTM
wrappers_test.py
382 rnn_layer = keras.layers.LSTM(4, return_sequences=True, stateful=True)
410 layer=[keras.layers.LSTM,
669 rnn = keras.layers.LSTM
765 rnn = keras.layers.LSTM
791 rnn = keras.layers.LSTM
820 rnn = keras.layers.LSTM
975 @parameterized.parameters([keras.layers.LSTM, keras.layers.GRU])
978 num_state = 4 if rnn == keras.layers.LSTM else 2
1042 rnn = keras.layers.LSTM
1067 @parameterized.parameters([keras.layers.LSTM, keras.layers.GRU])
[all …]
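
Several of the wrapper-test hits wrap an LSTM in `Bidirectional`; a minimal sketch of that pattern (sizes illustrative):

```python
import numpy as np
import tensorflow as tf

x = np.random.rand(2, 5, 3).astype("float32")   # (batch, timesteps, features)

# Bidirectional concatenates forward and backward outputs by default,
# so 4 units per direction -> last dimension 8.
bilstm = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(4, return_sequences=True))
print(bilstm(x).shape)   # (2, 5, 8)
```
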
/external/tensorflow/tensorflow/lite/examples/experimental_new_converter/
Keras_LSTM_fusion_Codelab.ipynb
6 "name": "Keras LSTM fusion Codelab.ipynb",
25 …"This CodeLab demonstrates how to build a fused TFLite LSTM model for MNIST recognition using Kera…
27 LSTM [CodeLab](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/examples/exper…
64 "# Step 1: Build the MNIST LSTM model."
92 " tf.keras.layers.LSTM(20, time_major=False, return_sequences=True),\n",
237 "Let's check the model, you can see the LSTM will be in it's fused format.\n",
239 …"![Fused LSTM](https://raw.githubusercontent.com/tensorflow/tensorflow/master/tensorflow/lite/exam…
keras_lstm.ipynb
12 …"This CodeLab demonstrates how to build a LSTM model for MNIST recognition using Keras, and how to…
53 "## Step 1: Build the MNIST LSTM model.\n",
84 " tf.keras.layers.LSTM(20),\n",
/external/tensorflow/tensorflow/security/advisory/
tfsa-2020-033.md
1 ## TFSA-2020-033: CHECK-fail in LSTM with zero-length input
7 Running an LSTM/GRU model where the LSTM/GRU layer receives an input with
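
For reference, "zero-length input" in the advisory means a sequence whose time dimension is 0. A hedged sketch of the kind of call it describes; depending on the build this either returns an empty/zero result or raises a regular error, and the advisory covers older builds where it CHECK-failed and aborted the process:

```python
import tensorflow as tf

layer = tf.keras.layers.LSTM(4)

# "Zero-length input": a batch with zero timesteps.
empty_sequence = tf.zeros([1, 0, 8])   # (batch, timesteps=0, features)

try:
    out = layer(empty_sequence)
    print(out.shape)
except (tf.errors.InvalidArgumentError, ValueError) as e:
    # Patched builds surface a normal error here instead of crashing.
    print("rejected:", e)
```
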
/external/tensorflow/tensorflow/python/keras/tests/
model_architectures.py
60 x = keras.layers.LSTM(4, return_sequences=True)(inputs)
61 x = keras.layers.LSTM(3, return_sequences=True)(x)
62 x = keras.layers.LSTM(2, return_sequences=False)(x)
74 body_features = keras.layers.LSTM(5)(x)
105 encoder = keras.layers.LSTM(latent_dim, return_state=True)
109 decoder_lstm = keras.layers.LSTM(
126 lstm_out = keras.layers.LSTM(3)(x)
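
The model_architectures.py hits include both a stacked-LSTM model and an encoder/decoder pair built with `return_state`. A sketch of the encoder/decoder wiring those lines imply; dimensions and variable names are illustrative, not taken from the file:

```python
import tensorflow as tf

latent_dim, num_tokens = 16, 32

# Encoder: keep only the final hidden and cell states.
encoder_inputs = tf.keras.Input(shape=(None, num_tokens))
_, state_h, state_c = tf.keras.layers.LSTM(
    latent_dim, return_state=True)(encoder_inputs)

# Decoder: start from the encoder's states.
decoder_inputs = tf.keras.Input(shape=(None, num_tokens))
decoder_lstm = tf.keras.layers.LSTM(
    latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(
    decoder_inputs, initial_state=[state_h, state_c])

outputs = tf.keras.layers.Dense(num_tokens, activation="softmax")(decoder_outputs)
model = tf.keras.Model([encoder_inputs, decoder_inputs], outputs)
```
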
/external/tensorflow/tensorflow/python/keras/benchmarks/
optimizer_benchmarks_test.py
34 tf.keras.layers.LSTM(64, return_sequences=True))(
36 x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64))(x)
/external/tensorflow/tensorflow/python/keras/benchmarks/layer_benchmarks/
layer_benchmarks_test.py
41 return {"layer": tf.keras.layers.LSTM(1)}
117 ("LSTM_small_shape", tf.keras.layers.LSTM,
119 ("LSTM_normal_shape", tf.keras.layers.LSTM,
283 tf.keras.layers.LSTM, tf.keras.layers.Bidirectional,
/external/tensorflow/tensorflow/python/keras/utils/
vis_utils_test.py
48 lstm = keras.layers.LSTM(6, return_sequences=True, name='lstm')
52 keras.layers.LSTM(16, return_sequences=True, name='bilstm'))
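
The vis_utils tests above build LSTM and Bidirectional-LSTM layers purely to exercise model plotting. A sketch of that utility; it requires pydot and graphviz to be installed, and the output file name is illustrative:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, 8)),
    tf.keras.layers.LSTM(6, return_sequences=True, name="lstm"),
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(16, return_sequences=True, name="bilstm")),
])

# Renders the layer graph to an image; needs pydot + graphviz installed.
tf.keras.utils.plot_model(model, to_file="lstm_model.png", show_shapes=True)
```
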
/external/tensorflow/tensorflow/python/keras/distribute/
keras_rnn_model_correctness_test.py
103 return rnn_v2.LSTM
105 return rnn_v1.LSTM
/external/tensorflow/tensorflow/lite/g3doc/guide/
roadmap.md
22 * **LSTM / RNN support**
23 * Full LSTM and RNN conversion support, including support in Keras
65 * Better op coverage, including LSTM
/external/tensorflow/tensorflow/python/keras/benchmarks/keras_examples_benchmarks/
bidirectional_lstm_benchmark_test.py
42 tf.keras.layers.LSTM(64, return_sequences=True))(
44 x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64))(x)
mnist_hierarchical_rnn_benchmark_test.py
45 tf.keras.layers.LSTM(self.row_hidden))(
47 encoded_cols = tf.keras.layers.LSTM(self.col_hidden)(encoded_rows)
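
The hierarchical-RNN benchmark hits run one LSTM over each image row and a second LSTM over the row encodings. A sketch of that structure, assuming MNIST-shaped input; `row_hidden`/`col_hidden` follow the benchmark's naming, and the sizes and output head are illustrative:

```python
import tensorflow as tf

row_hidden, col_hidden = 32, 32

inputs = tf.keras.Input(shape=(28, 28, 1))        # (rows, cols, channels)

# Encode every row independently with one LSTM ...
encoded_rows = tf.keras.layers.TimeDistributed(
    tf.keras.layers.LSTM(row_hidden))(inputs)

# ... then run a second LSTM over the sequence of row encodings.
encoded_cols = tf.keras.layers.LSTM(col_hidden)(encoded_rows)

outputs = tf.keras.layers.Dense(10, activation="softmax")(encoded_cols)
model = tf.keras.Model(inputs, outputs)
```
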
/external/tensorflow/tensorflow/core/api_def/base_api/
api_def_CudnnRNN.pbtxt
21 input_c: For LSTM, a 3-D tensor with the shape of
30 output_c: The same shape as input_c for LSTM. An empty tensor for other models.
api_def_LSTMBlockCell.pbtxt
7 The input to the LSTM cell, shape (batch_size, num_inputs).
112 summary: "Computes the LSTM cell forward propagation for 1 time step."
api_def_LSTMBlockCellGrad.pbtxt
7 The input to the LSTM cell, shape (batch_size, num_inputs).
136 summary: "Computes the LSTM cell backward propagation for 1 timestep."
api_def_CudnnRNNV2.pbtxt
22 input_c: For LSTM, a 3-D tensor with the shape of
31 output_c: The same shape as input_c for LSTM. An empty tensor for other models.
api_def_CudnnRNNBackprop.pbtxt
20 input_c: For LSTM, a 3-D tensor with the shape of
29 output_c: The same shape as input_c for LSTM. An empty tensor for other models.
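
These api_def hits document the tensor shapes of the low-level LSTM kernels. As a rough illustration of the single-step LSTMBlockCell op they describe, here is a hedged sketch via `tf.raw_ops`; the argument names, shapes, and output order follow the op definition as I understand it and should be treated as assumptions rather than a reference:

```python
import tensorflow as tf

batch_size, num_inputs, cell_size = 2, 3, 4

x = tf.zeros([batch_size, num_inputs])                   # cell input
cs_prev = tf.zeros([batch_size, cell_size])              # previous cell state
h_prev = tf.zeros([batch_size, cell_size])               # previous output
w = tf.zeros([num_inputs + cell_size, 4 * cell_size])    # gate weights
b = tf.zeros([4 * cell_size])                            # gate biases
wci = wcf = wco = tf.zeros([cell_size])                  # peephole weights (unused)

# Single forward step of the block LSTM cell (assumed signature).
i, cs, f, o, ci, co, h = tf.raw_ops.LSTMBlockCell(
    x=x, cs_prev=cs_prev, h_prev=h_prev,
    w=w, wci=wci, wcf=wcf, wco=wco, b=b,
    forget_bias=1.0, cell_clip=3.0, use_peephole=False)

print(h.shape)   # (batch_size, cell_size)
```
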
/external/tensorflow/tensorflow/lite/delegates/gpu/cl/kernels/
lstm_test.cc
31 TEST_F(OpenCLOperationTest, LSTM) { in TEST_F() argument
