Searched refs:replicated (Results 1 – 25 of 61) sorted by relevance
/external/tensorflow/tensorflow/docs_src/performance/
  performance_models.md
    182: * `replicated` places an identical copy of each training variable on each
    256: This mode can be used in the script by passing `--variable_update=replicated`.
    260: The replicated method for variables can be extended to distributed training. One
    261: way to do this like the replicated mode: aggregate the gradients fully across
    267: stored on the parameter servers. As with the replicated mode, training can start
    352: ,`replicated`, `distributed_replicated`, `independent`
    368: --batch_size=64 --model=vgg16 --variable_update=replicated --use_nccl=True
    378: --batch_size=64 --model=resnet50 --variable_update=replicated --use_nccl=False
  benchmarks.md
    107: AlexNet | replicated (with NCCL) | n/a
    108: VGG16 | replicated (with NCCL) | n/a
    270: ResNet-50 | replicated (without NCCL) | gpu
    271: ResNet-152 | replicated (without NCCL) | gpu

/external/libavc/common/arm/
  ih264_padding_neon.s
    51: @* The top row of a 2d array is replicated for pad_size times at the top
    120: @* The left column of a 2d array is replicated for pad_size times at the left
    256: @* The left column of a 2d array is replicated for pad_size times at the left
    384: @* The right column of a 2d array is replicated for pad_size times at the right
    530: @* The right column of a 2d array is replicated for pad_size times at the right
  ih264_inter_pred_chroma_a9q.s
    41: @* All the functions here are replicated from ih264_inter_pred_filters.c
  ih264_inter_pred_filters_luma_horz_a9q.s
    41: @* All the functions here are replicated from ih264_inter_pred_filters.c
  ih264_inter_pred_luma_horz_qpel_a9q.s
    41: @* All the functions here are replicated from ih264_inter_pred_filters.c
  ih264_intra_pred_luma_16x16_a9q.s
    44: @* All the functions here are replicated from ih264_intra_pred_filters.c
  ih264_inter_pred_filters_luma_vert_a9q.s
    41: @* All the functions here are replicated from ih264_inter_pred_filters.c
  ih264_inter_pred_luma_bilinear_a9q.s
    41: @* All the functions here are replicated from ih264_inter_pred_filters.c
  ih264_inter_pred_luma_horz_qpel_vert_qpel_a9q.s
    41: @* All the functions here are replicated from ih264_inter_pred_filters.c
  ih264_intra_pred_chroma_a9q.s
    44: @* All the functions here are replicated from ih264_chroma_intra_pred_filters.c
  ih264_inter_pred_luma_vert_qpel_a9q.s
    41: @* All the functions here are replicated from ih264_inter_pred_filters.c

/external/tensorflow/tensorflow/core/api_def/base_api/
  api_def_Tile.pbtxt
    19: and the values of `input` are replicated `multiples[i]` times along the 'i'th

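The `api_def_Tile.pbtxt` hit above describes the Tile op's semantics: each value of `input` is replicated `multiples[i]` times along dimension `i`. As an illustrative sketch (using NumPy rather than TensorFlow itself; `np.tile` has the same behavior when `multiples` matches the input rank):

```python
import numpy as np

# Replicate 2x along dim 0 and 3x along dim 1, mirroring
# tf.tile(input, multiples) as described in api_def_Tile.pbtxt.
inp = np.array([[1, 2],
                [3, 4]])
multiples = (2, 3)
out = np.tile(inp, multiples)
print(out.shape)  # (4, 6)
print(out[0])     # [1 2 1 2 1 2]
```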
/external/llvm/test/CodeGen/SystemZ/
  vec-const-13.ll
    139: ; ...and again with the lower bits of the replicated constant.
    161: ; ...and again with the lower bits of the replicated constant.

/external/javaparser/javaparser-testing/src/test/resources/com/github/javaparser/bdd/
  visitor_scenarios.story
    1: Scenario: A class that is replicated using a CloneVisitor should be equal to the source

/external/tensorflow/tensorflow/contrib/android/cmake/
  CMakeLists.txt
    36: # Change to compile flags should be replicated into bazel build file

/external/tensorflow/tensorflow/compiler/xla/tools/parser/
  hlo_parser_test.cc
    192: %greater-than = pred[4]{0} greater-than(f32[4]{0} %v1, f32[4]{0} %v2), sharding={replicated}
    231: …[] %v1, f32[3]{0} %v2, f32[2,3]{1,0} %v3), sharding={{replicated}, {maximal device=0}, {replicated…
  hlo_lexer.cc
    218: KEYWORD(replicated); in LexIdentifier()
  hlo_parser.cc
    1128: bool replicated = false; in ParseSingleSharding() local
    1139: replicated = true; in ParseSingleSharding()
    1192: if (replicated) { in ParseSingleSharding()

/external/tensorflow/tensorflow/contrib/tpu/
  tpu_estimator.md
    138: creates one single graph that is replicated across all the cores in the Cloud
    166: - The `model_fn` models the computation which will be replicated and distributed

/external/tensorflow/tensorflow/compiler/xla/
  xla_data.proto
    290: // Handle given to a user that represents a replicated virtual device. Each
    291: // replicated device represents N physical devices for execution where N is the
    309: // represents the device ids assigned to a set of replicated computations.
    933: // This sharding is replicated across all devices (implies maximal,

/external/skia/site/dev/contrib/
  simd.md
    115: …__m128i` and `Alpha` as an `__m128i` with each pixel's alpha component replicated four times. `Sk…
    119: …here we store `Alpha` somewhat inefficiently with each alpha component replicated 4 times, but SSE…

/external/skqp/site/dev/contrib/
  simd.md
    115: …__m128i` and `Alpha` as an `__m128i` with each pixel's alpha component replicated four times. `Sk…
    119: …here we store `Alpha` somewhat inefficiently with each alpha component replicated 4 times, but SSE…

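The simd.md hits above describe an `Alpha` layout where each pixel's alpha byte is copied into all four bytes of its 32-bit lane. A minimal scalar sketch of that per-lane replication (plain Python standing in for one `__m128i` lane; the actual Skia code uses SSE intrinsics):

```python
def replicate_alpha(argb: int) -> int:
    """Copy the alpha byte (top byte of a 32-bit ARGB pixel) into all
    four bytes of the lane, as in the simd.md Alpha layout."""
    a = (argb >> 24) & 0xFF
    return a * 0x01010101  # 0xAA -> 0xAAAAAAAA

print(hex(replicate_alpha(0x80FF00FF)))  # 0x80808080
```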
/external/swiftshader/third_party/LLVM/include/llvm/
  IntrinsicsARM.td
    274: // shifts, where the constant is replicated. For consistency with VSHL (and

/external/libyuv/files/docs/
  filtering.md
    50: …ng half a pixel of source for each pixel of destination. Each pixel is replicated by the scale fac…

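The filtering.md hit above refers to point-sampled upscaling, where each source pixel is simply replicated by the scale factor. A hedged one-dimensional sketch of that behavior (illustrative only; libyuv's actual scalers operate on rows of planar image data):

```python
def upscale_point(row, scale):
    """Point-sampling upscale of one pixel row: every source pixel is
    replicated `scale` times in the destination."""
    return [px for px in row for _ in range(scale)]

print(upscale_point([10, 20, 30], 2))  # [10, 10, 20, 20, 30, 30]
```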