/external/tensorflow/tensorflow/python/tpu/
  async_checkpoint_test.py
    139: checkpoints = file_io.get_matching_files(
    141: checkpoint_count = len(checkpoints)
    142: logging.info('Found %d checkpoints: %s', checkpoint_count, checkpoints)
    186: checkpoints = file_io.get_matching_files(
    188: checkpoint_count = len(checkpoints)
    189: logging.info('Found %d checkpoints: %s', checkpoint_count, checkpoints)
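
The test above counts saved checkpoints by globbing for their file prefixes. A
minimal sketch of the same pattern using tf.io.gfile.glob, the public
counterpart of file_io.get_matching_files; the pattern below is a made-up
placeholder:

    import tensorflow as tf
    from absl import logging

    # Checkpoint files look like "<prefix>-<step>.index" / ".data-*"; globbing
    # for the .index files yields one hit per saved checkpoint.
    checkpoints = tf.io.gfile.glob('/tmp/model.ckpt-*.index')
    checkpoint_count = len(checkpoints)
    logging.info('Found %d checkpoints: %s', checkpoint_count, checkpoints)
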

/external/tensorflow/tensorflow/core/protobuf/
  saver.proto
    23: // Maximum number of checkpoints to keep. If 0, no checkpoints are deleted.
    30: // "max_to_keep" checkpoints are kept; if specified, in addition to keeping
    31: // the last "max_to_keep" checkpoints, an additional checkpoint will be kept
  trackable_object_graph.proto
    28: // name-based loading of checkpoints which were saved using an
    34: // Whether checkpoints should be considered as matching even without this
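
The saver.proto comments above document the checkpoint-retention fields behind
max_to_keep and keep_checkpoint_every_n_hours. A minimal sketch of how they
surface in the TF1-style Saver constructor; the directory, variable, and values
are made-up placeholders:

    import tensorflow.compat.v1 as tf
    tf.disable_eager_execution()

    tf.io.gfile.makedirs('/tmp/saver_demo')
    step = tf.Variable(0, name='global_step')
    saver = tf.train.Saver(
        max_to_keep=5,                      # delete all but the 5 newest checkpoints
        keep_checkpoint_every_n_hours=2.0)  # additionally keep one copy every 2 hours

    with tf.Session() as sess:
      sess.run(tf.global_variables_initializer())
      saver.save(sess, '/tmp/saver_demo/model.ckpt', global_step=step)
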

/external/tensorflow/tensorflow/core/api_def/base_api/
  api_def_MergeV2Checkpoints.pbtxt
    6: prefixes of V2 checkpoints to merge.
    22: summary: "V2 format specific: merges the metadata files of sharded checkpoints. The"
    27: Intended for "grouping" multiple checkpoints in a sharded checkpoint setup.
  api_def_GenerateVocabRemapping.pbtxt
    73: checkpoints. Note that the partitioning logic relies on contiguous vocabularies
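
The api_def_MergeV2Checkpoints.pbtxt summary above describes collapsing the
metadata of sharded V2 checkpoints under a single prefix. A minimal sketch that
exercises the op through the public tf.raw_ops bindings in eager mode; every
path and tensor name below is a made-up placeholder:

    import tensorflow as tf

    # Write two tiny single-tensor V2 shards under temporary prefixes.
    prefixes = ['/tmp/merge_demo/shard0/part', '/tmp/merge_demo/shard1/part']
    for i, prefix in enumerate(prefixes):
      tf.io.gfile.makedirs(prefix.rsplit('/', 1)[0])
      tf.raw_ops.SaveV2(prefix=prefix,
                        tensor_names=['var_%d' % i],
                        shape_and_slices=[''],
                        tensors=[tf.constant([float(i)])])

    # Merge the shards' metadata under one prefix so they restore as a single
    # logical checkpoint; delete_old_dirs=True would also clean up the
    # temporary shard directories afterwards.
    tf.raw_ops.MergeV2Checkpoints(checkpoint_prefixes=tf.constant(prefixes),
                                  destination_prefix='/tmp/merge_demo/merged',
                                  delete_old_dirs=False)
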

/external/exoplayer/tree/library/core/src/main/java/com/google/android/exoplayer2/trackselection/
  AdaptiveTrackSelection.java
    666: long[][][] checkpoints = new long[logBitrates.length][checkpointCount][2]; in getAllocationCheckpoints() local
    668: setCheckpointValues(checkpoints, /* checkpointIndex= */ 1, trackBitrates, currentSelection); in getAllocationCheckpoints()
    683: setCheckpointValues(checkpoints, checkpointIndex, trackBitrates, currentSelection); in getAllocationCheckpoints()
    685: for (long[][] points : checkpoints) { in getAllocationCheckpoints()
    689: return checkpoints; in getAllocationCheckpoints()
    746: long[][][] checkpoints, int checkpointIndex, long[][] trackBitrates, int[] selectedTracks) { in setCheckpointValues() argument
    748: for (int i = 0; i < checkpoints.length; i++) { in setCheckpointValues()
    749: checkpoints[i][checkpointIndex][1] = trackBitrates[i][selectedTracks[i]]; in setCheckpointValues()
    750: totalBitrate += checkpoints[i][checkpointIndex][1]; in setCheckpointValues()
    752: for (long[][] points : checkpoints) { in setCheckpointValues()

/external/tensorflow/tensorflow/python/training/
  checkpoint_management_test.py
    359: manager.checkpoints)
    365: manager.checkpoints)
    370: manager.checkpoints)
    377: manager.checkpoints)
    404: self.assertEqual([first_name, second_name], first_manager.checkpoints)
    411: self.assertEqual([first_name, second_name], second_manager.checkpoints)
    420: second_manager.checkpoints)
    430: second_manager.checkpoints)
    436: second_manager.checkpoints)
    456: third_manager.checkpoints)
    [all …]
  checkpoint_state.proto
    11: // Paths to all not-yet-deleted model checkpoints, sorted from oldest to
  checkpoint_management.py
    694: def checkpoints(self): member in CheckpointManager
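
The checkpoints member listed above is the CheckpointManager property these
tests exercise. A minimal sketch with the public tf.train.CheckpointManager
API; the directory, max_to_keep value, and variable are made-up placeholders:

    import tensorflow as tf

    # A trivial trackable object to checkpoint.
    ckpt = tf.train.Checkpoint(step=tf.Variable(0))
    manager = tf.train.CheckpointManager(ckpt, directory='/tmp/ckpt_demo',
                                         max_to_keep=3)

    for _ in range(5):
      ckpt.step.assign_add(1)
      manager.save()

    # Only the 3 newest checkpoint prefixes remain, sorted oldest to newest.
    print(manager.checkpoints)
    print(manager.latest_checkpoint)
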

/external/tensorflow/tensorflow/core/util/tensor_bundle/testdata/old_string_tensors/
  README
    3: compatibility between the new code and old checkpoints.

/external/tensorflow/tensorflow/lite/g3doc/r1/convert/
  python_api.md
    16: GraphDef from a file. If you have checkpoints, then first convert it to a
    17: Frozen GraphDef file and then use this API as shown [here](#checkpoints).
    151: #### Convert checkpoints <a name="checkpoints"></a>
    153: 1. Convert checkpoints to a Frozen GraphDef as follows
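
The python_api.md flow above freezes checkpoints into a GraphDef before
conversion. A minimal sketch of the conversion step that follows, using the TF1
converter entry point; the file path and tensor names are made-up placeholders:

    import tensorflow as tf

    # Once the checkpoint has been frozen into a GraphDef, point the TF1
    # converter at the .pb file plus its input/output tensor names.
    converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
        graph_def_file='/tmp/frozen_graph.pb',  # placeholder path
        input_arrays=['input'],                 # placeholder tensor names
        output_arrays=['output'])
    tflite_model = converter.convert()

    with open('/tmp/model.tflite', 'wb') as f:
      f.write(tflite_model)
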

/external/tensorflow/tensorflow/tools/api/golden/v2/
  tensorflow.train.-checkpoint-manager.pbtxt
    14: name: "checkpoints"

/external/tensorflow/tensorflow/tools/api/golden/v1/
  tensorflow.train.-checkpoint-manager.pbtxt
    14: name: "checkpoints"

/external/tensorflow/tensorflow/core/util/
  saved_tensor_slice.proto
    2: // ops checkpoints and the V3 checkpoints in dist_belief.

/external/tensorflow/tensorflow/python/training/saving/
  BUILD
    2: # Low-level utilities for reading and writing checkpoints.

/external/tensorflow/tensorflow/security/advisory/
  tfsa-2018-004.md
    26: If users are running TensorFlow on untrusted meta checkpoints, such as those
  tfsa-2018-005.md
    29: If users are loading untrusted checkpoints in TensorFlow, we encourage users to
  tfsa-2020-001.md
    19: Similar effects can be obtained by manipulating saved models and checkpoints

/external/tensorflow/tensorflow/python/keras/mixed_precision/testdata/
  BUILD
    2: # Contains checkpoints and SavedModels for testing purposes.

/external/tensorflow/tensorflow/python/distribute/
  README.md
    13: and checkpoints.

/external/tensorflow/
  SECURITY.md
    15: The model's parameters are often stored separately in **checkpoints**.
    52: checkpoints can trigger unsafe behavior. For example, consider a graph that
    59: to provide checkpoints to a model you run on their behalf (e.g., in order to
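
SECURITY.md's point above is that values read from a checkpoint can steer what
a graph does, so untrusted checkpoints deserve the same caution as untrusted
graphs. A minimal sketch of the idea, assuming a restored boolean used as a
branch condition; the directory and variable name are made-up placeholders:

    import tensorflow as tf

    # A flag stored in the checkpoint decides which branch runs, so whoever
    # writes the checkpoint influences runtime behavior.
    flag = tf.Variable(False)
    ckpt = tf.train.Checkpoint(flag=flag)
    path = tf.train.latest_checkpoint('/tmp/untrusted_ckpt')  # placeholder dir
    if path:
      ckpt.restore(path)

    result = tf.cond(flag,
                     true_fn=lambda: tf.constant('attacker-chosen path'),
                     false_fn=lambda: tf.constant('normal path'))
    print(result.numpy())
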

/external/tensorflow/tensorflow/python/training/tracking/
  BUILD
    2: # Utilities for reading and writing object-based checkpoints.

/external/tensorflow/tensorflow/lite/g3doc/convert/
  index.md
    59: Frozen GraphDef from a file. If you have checkpoints, then first convert
    61: ….com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/r1/convert/python_api.md#checkpoints).

/external/tensorflow/tensorflow/python/saved_model/
  README.md
    18: * [Training checkpoints](https://www.tensorflow.org/guide/checkpoint)

/external/tensorflow/tensorflow/lite/micro/examples/micro_speech/train/
  train_micro_speech_model.ipynb
    138: "TRAIN_DIR = 'train/' # for training checkpoints and other files.\n",
    302: …ur or two training the model from scratch, you can download pretrained checkpoints by uncommenting…