
Searched refs:replicated (Results 1 – 25 of 61) sorted by relevance

/external/tensorflow/tensorflow/docs_src/performance/
performance_models.md:182  * `replicated` places an identical copy of each training variable on each
256 This mode can be used in the script by passing `--variable_update=replicated`.
260 The replicated method for variables can be extended to distributed training. One
261 way to do this like the replicated mode: aggregate the gradients fully across
267 stored on the parameter servers. As with the replicated mode, training can start
352 ,`replicated`, `distributed_replicated`, `independent`
368 --batch_size=64 --model=vgg16 --variable_update=replicated --use_nccl=True
378 --batch_size=64 --model=resnet50 --variable_update=replicated --use_nccl=False
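The performance_models.md hits above describe the `replicated` variable-update mode: each GPU holds a full copy of every training variable, per-GPU gradients are aggregated (optionally with NCCL, per `--use_nccl`), and the same update is applied to every copy. A minimal numpy sketch of that idea, not the benchmark script's actual implementation:

    import numpy as np

    num_gpus = 4
    var_copies = [np.zeros(10) for _ in range(num_gpus)]  # one variable copy per GPU

    def train_step(per_gpu_grads, lr=0.1):
        # Stand-in for an NCCL all-reduce: average gradients across replicas,
        # then apply the identical update to every copy so replicas stay in sync.
        avg_grad = np.mean(per_gpu_grads, axis=0)
        for copy in var_copies:
            copy -= lr * avg_grad

    train_step([np.random.randn(10) for _ in range(num_gpus)])
    assert all(np.array_equal(var_copies[0], c) for c in var_copies)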
benchmarks.md:107  AlexNet | replicated (with NCCL) | n/a
108 VGG16 | replicated (with NCCL) | n/a
270 ResNet-50 | replicated (without NCCL) | gpu
271 ResNet-152 | replicated (without NCCL) | gpu
/external/libavc/common/arm/
ih264_padding_neon.s:51  @* The top row of a 2d array is replicated for pad_size times at the top
120 @* The left column of a 2d array is replicated for pad_size times at the left
256 @* The left column of a 2d array is replicated for pad_size times at the left
384 @* The right column of a 2d array is replicated for pad_size times at the right
530 @* The right column of a 2d array is replicated for pad_size times at the right
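The ih264_padding_neon.s routines implement edge replication: border rows and columns of a 2-D array are copied outward pad_size times. The same scheme in a few lines of numpy (an illustration of the behavior, not the NEON code):

    import numpy as np

    pad_size = 2
    frame = np.arange(9).reshape(3, 3)
    # mode="edge" replicates the outermost rows/columns, exactly the scheme
    # the comments above describe for top, left and right padding.
    padded = np.pad(frame, pad_size, mode="edge")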
ih264_inter_pred_chroma_a9q.s:41  @* All the functions here are replicated from ih264_inter_pred_filters.c
ih264_inter_pred_filters_luma_horz_a9q.s:41  @* All the functions here are replicated from ih264_inter_pred_filters.c
ih264_inter_pred_luma_horz_qpel_a9q.s:41  @* All the functions here are replicated from ih264_inter_pred_filters.c
ih264_intra_pred_luma_16x16_a9q.s:44  @* All the functions here are replicated from ih264_intra_pred_filters.c
ih264_inter_pred_filters_luma_vert_a9q.s:41  @* All the functions here are replicated from ih264_inter_pred_filters.c
ih264_inter_pred_luma_bilinear_a9q.s:41  @* All the functions here are replicated from ih264_inter_pred_filters.c
ih264_inter_pred_luma_horz_qpel_vert_qpel_a9q.s:41  @* All the functions here are replicated from ih264_inter_pred_filters.c
ih264_intra_pred_chroma_a9q.s:44  @* All the functions here are replicated from ih264_chroma_intra_pred_filters.c
ih264_inter_pred_luma_vert_qpel_a9q.s:41  @* All the functions here are replicated from ih264_inter_pred_filters.c
/external/tensorflow/tensorflow/core/api_def/base_api/
api_def_Tile.pbtxt:19  and the values of `input` are replicated `multiples[i]` times along the 'i'th
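The api_def_Tile.pbtxt line documents the semantics of TensorFlow's `Tile` op, exposed in Python as `tf.tile`: dimension i of the result repeats the input `multiples[i]` times. For example:

    import tensorflow as tf

    x = tf.constant([[1, 2],
                     [3, 4]])            # shape [2, 2]
    y = tf.tile(x, multiples=[2, 3])     # shape [4, 6]
    # Rows are replicated 2x and columns 3x:
    # [[1 2 1 2 1 2]
    #  [3 4 3 4 3 4]
    #  [1 2 1 2 1 2]
    #  [3 4 3 4 3 4]]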
/external/llvm/test/CodeGen/SystemZ/
vec-const-13.ll:139  ; ...and again with the lower bits of the replicated constant.
161 ; ...and again with the lower bits of the replicated constant.
/external/javaparser/javaparser-testing/src/test/resources/com/github/javaparser/bdd/
visitor_scenarios.story:1  Scenario: A class that is replicated using a CloneVisitor should be equal to the source
/external/tensorflow/tensorflow/contrib/android/cmake/
CMakeLists.txt:36  # Change to compile flags should be replicated into bazel build file
/external/tensorflow/tensorflow/compiler/xla/tools/parser/
hlo_parser_test.cc:192  %greater-than = pred[4]{0} greater-than(f32[4]{0} %v1, f32[4]{0} %v2), sharding={replicated}
231 …[] %v1, f32[3]{0} %v2, f32[2,3]{1,0} %v3), sharding={{replicated}, {maximal device=0}, {replicated
hlo_lexer.cc:218  KEYWORD(replicated); in LexIdentifier()
hlo_parser.cc:1128  bool replicated = false; in ParseSingleSharding() local
1139 replicated = true; in ParseSingleSharding()
1192 if (replicated) { in ParseSingleSharding()
/external/tensorflow/tensorflow/contrib/tpu/
tpu_estimator.md:138  creates one single graph that is replicated across all the cores in the Cloud
166 - The `model_fn` models the computation which will be replicated and distributed
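tpu_estimator.md is describing the TF 1.x `TPUEstimator` contract: the user writes one `model_fn`, and the estimator builds a single graph whose computation is replicated across all TPU cores. A rough sketch of that flow using the contrib-era API (the model details here are illustrative, not from the doc):

    import tensorflow as tf

    def model_fn(features, labels, mode, params):
        logits = tf.layers.dense(features, 10)
        loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
        # CrossShardOptimizer averages gradients across the replicated cores.
        optimizer = tf.contrib.tpu.CrossShardOptimizer(
            tf.train.GradientDescentOptimizer(learning_rate=0.01))
        train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
        return tf.contrib.tpu.TPUEstimatorSpec(mode=mode, loss=loss, train_op=train_op)

    estimator = tf.contrib.tpu.TPUEstimator(
        model_fn=model_fn,                  # replicated across all cores
        config=tf.contrib.tpu.RunConfig(),  # cluster/TPU settings omitted
        train_batch_size=128)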
/external/tensorflow/tensorflow/compiler/xla/
xla_data.proto:290  // Handle given to a user that represents a replicated virtual device. Each
291 // replicated device represents N physical devices for execution where N is the
309 // represents the device ids assigned to a set of replicated computations.
933 // This sharding is replicated across all devices (implies maximal,
/external/skia/site/dev/contrib/
simd.md:115  …__m128i` and `Alpha` as an `__m128i` with each pixel's alpha component replicated four times. `Sk…
119 …here we store `Alpha` somewhat inefficiently with each alpha component replicated 4 times, but SSE…
/external/skqp/site/dev/contrib/
simd.md:115  …__m128i` and `Alpha` as an `__m128i` with each pixel's alpha component replicated four times. `Sk…
119 …here we store `Alpha` somewhat inefficiently with each alpha component replicated 4 times, but SSE…
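Both copies of simd.md (Skia and its skqp fork) refer to the same trick: each pixel's alpha byte is broadcast across all four channel lanes of the vector, so a single multiply scales R, G, B and A at once. A numpy illustration of that layout, not the actual SSE code:

    import numpy as np

    pixels = np.array([[10, 20, 30, 128],          # RGBA pixel 0
                       [40, 50, 60, 255]],         # RGBA pixel 1
                      dtype=np.uint16)
    # Replicate each pixel's alpha across all four lanes, as simd.md describes.
    alpha = np.repeat(pixels[:, 3:4], 4, axis=1)   # [[128 128 128 128], [255 ...]]
    scaled = pixels * alpha // 255                 # one multiply scales every channel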
/external/swiftshader/third_party/LLVM/include/llvm/
IntrinsicsARM.td:274  // shifts, where the constant is replicated. For consistency with VSHL (and
/external/libyuv/files/docs/
filtering.md:50  …ng half a pixel of source for each pixel of destination. Each pixel is replicated by the scale fac…
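The filtering.md excerpt describes point sampling, where each source pixel is replicated by the scale factor in each dimension. In numpy terms (an illustration, not libyuv's implementation):

    import numpy as np

    scale = 2
    src = np.array([[1, 2],
                    [3, 4]])
    # Nearest-neighbor upscale: every pixel is repeated `scale` times per axis.
    dst = np.repeat(np.repeat(src, scale, axis=0), scale, axis=1)
    # [[1 1 2 2]
    #  [1 1 2 2]
    #  [3 3 4 4]
    #  [3 3 4 4]]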
