Searched refs:use_gradient_accumulation (Results 1 – 13 of 13) sorted by relevance
/external/tensorflow/tensorflow/python/tpu/
tpu_embedding_v2_utils.py
     56  use_gradient_accumulation: bool,
     64  self.use_gradient_accumulation = use_gradient_accumulation
     67  if not use_gradient_accumulation and clipvalue is not None:
    108  if self.use_gradient_accumulation:
    244  use_gradient_accumulation = clipvalue is not None
    247  learning_rate, use_gradient_accumulation, clip_weight_min,
    320  use_gradient_accumulation: bool = True,
    354  learning_rate, use_gradient_accumulation, clip_weight_min,
    439  use_gradient_accumulation: bool = True,
    486  learning_rate, use_gradient_accumulation, clip_weight_min,
    [all …]
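Note: the hits above are the TF2 embedding optimizer classes. A minimal usage sketch, assuming the public tf.tpu.experimental.embedding API; the hyperparameter values are illustrative, not taken from the source:

    import tensorflow as tf

    # use_gradient_accumulation defaults to True; per the check at
    # tpu_embedding_v2_utils.py:67 above, clipvalue is only accepted
    # while accumulation stays enabled.
    optimizer = tf.tpu.experimental.embedding.Adagrad(
        learning_rate=0.1,
        use_gradient_accumulation=True,
        clipvalue=(-1.0, 1.0),  # (min, max) clip range; illustrative values
    )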
tpu_embedding.py
    373  use_gradient_accumulation: bool,
    382  self.use_gradient_accumulation = use_gradient_accumulation
    391  if not use_gradient_accumulation and (clip_gradient_min is not None or
    421  use_gradient_accumulation: bool = True,
    450  use_gradient_accumulation=use_gradient_accumulation,
    479  use_gradient_accumulation: bool = True,
    512  use_gradient_accumulation=use_gradient_accumulation,
    564  use_gradient_accumulation: bool = True,
    601  use_gradient_accumulation=use_gradient_accumulation,
    616  if not use_gradient_accumulation and not lazy_adam:
    [all …]
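Note: these hits are the TF1 counterpart, where the flag lives on the per-optimizer *Parameters classes. A hedged sketch, assuming the tf.compat.v1 namespace; the argument names match the v1 golden pbtxt entries further below:

    import tensorflow.compat.v1 as tf

    # Same flag, TF1 spelling; constructing the object needs no TPU.
    params = tf.tpu.experimental.AdagradParameters(
        learning_rate=0.1,  # illustrative value
        use_gradient_accumulation=True,
    )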
tpu_embedding_v2_utils_test.py
     35  optimizer(use_gradient_accumulation=False, clipvalue=0.)
     37  optimizer(use_gradient_accumulation=False, clipvalue=(None, 1.))
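Note: those two test lines exercise the clipvalue check at tpu_embedding_v2_utils.py:67. A sketch reproducing it directly; the exception type is an assumption (the hits do not show what the test expects):

    import tensorflow as tf

    # Gradient clipping relies on accumulated gradients, so combining
    # clipvalue with use_gradient_accumulation=False is rejected.
    try:
        tf.tpu.experimental.embedding.Adagrad(
            use_gradient_accumulation=False, clipvalue=0.0)
    except ValueError as e:
        print(e)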
/external/tensorflow/tensorflow/tools/api/golden/v2/

tensorflow.tpu.experimental.embedding.-adagrad.pbtxt
      8  …=['self', 'learning_rate', 'initial_accumulator_value', 'use_gradient_accumulation', 'cli…

tensorflow.tpu.experimental.embedding.-adam.pbtxt
      8  … 'beta_2', 'epsilon', 'lazy_adam', 'sum_inside_sqrt', 'use_gradient_accumulation', 'cli…
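Note: the v2 goldens pin the public Adam signature, though the argument lists are truncated here. A sketch spelling out the flags visible above; the default values are assumptions from current TF documentation, not from these hits:

    import tensorflow as tf

    # Per the check at tpu_embedding.py:616 above, disabling gradient
    # accumulation appears to be allowed only together with lazy Adam.
    optimizer = tf.tpu.experimental.embedding.Adam(
        learning_rate=0.001,
        beta_1=0.9,
        beta_2=0.999,
        epsilon=1e-7,
        lazy_adam=True,
        sum_inside_sqrt=True,
        use_gradient_accumulation=True,
    )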
/external/tensorflow/tensorflow/tools/api/golden/v1/

tensorflow.tpu.experimental.embedding.-adagrad.pbtxt
      8  …=['self', 'learning_rate', 'initial_accumulator_value', 'use_gradient_accumulation', 'cli…

tensorflow.tpu.experimental.-adagrad-parameters.pbtxt
      8  … "args=['self', 'learning_rate', 'initial_accumulator', 'use_gradient_accumulation', 'cli…

tensorflow.tpu.experimental.-adam-parameters.pbtxt
      8  …, 'beta2', 'epsilon', 'lazy_adam', 'sum_inside_sqrt', 'use_gradient_accumulation', 'cli…

tensorflow.tpu.experimental.embedding.-adam.pbtxt
      8  … 'beta_2', 'epsilon', 'lazy_adam', 'sum_inside_sqrt', 'use_gradient_accumulation', 'cli…

tensorflow.tpu.experimental.-ftrl-parameters.pbtxt
      8  …l1_regularization_strength', 'l2_regularization_strength', 'use_gradient_accumulation', 'cli…
/external/tensorflow/tensorflow/core/tpu/

tpu_embedding_optimization_parameters_utils.h
     63  const OptimizationParameters& params, bool use_gradient_accumulation,

tpu_embedding_optimization_parameters_utils.cc
    206  const OptimizationParameters& params, bool use_gradient_accumulation,  [in GetOptimizationAlgorithmStateVariables(), argument]
    319  if (use_gradient_accumulation) {  [in GetOptimizationAlgorithmStateVariables()]
/external/tensorflow/tensorflow/core/protobuf/tpu/

optimization_parameters.proto
    430  reserved 15;  // Old use_gradient_accumulation.