
Searched refs:gradient (Results 1 – 25 of 401) sorted by relevance

/external/ImageMagick/MagickCore/
paint.c 415 *gradient; in GradientImage() local
430 gradient=(&draw_info->gradient); in GradientImage()
431 gradient->type=type; in GradientImage()
432 gradient->bounding_box.width=image->columns; in GradientImage()
433 gradient->bounding_box.height=image->rows; in GradientImage()
436 (void) ParseAbsoluteGeometry(artifact,&gradient->bounding_box); in GradientImage()
437 gradient->gradient_vector.x2=(double) image->columns-1; in GradientImage()
438 gradient->gradient_vector.y2=(double) image->rows-1; in GradientImage()
451 gradient->gradient_vector.x1=(double) image->columns-1; in GradientImage()
452 gradient->gradient_vector.y1=(double) image->rows-1; in GradientImage()
[all …]
/external/tensorflow/tensorflow/core/api_def/base_api/
api_def_TensorArrayGradV3.pbtxt 21 The gradient source string, used to decide which gradient TensorArray
27 If the given TensorArray gradient already exists, returns a reference to it.
33 The handle flow_in forces the execution of the gradient lookup to occur
36 may resize the object. The gradient TensorArray is statically sized based
39 As a result, the flow is used to ensure that the call to generate the gradient
42 In the case of dynamically sized TensorArrays, gradient computation should
49 TensorArray gradient calls use an accumulator TensorArray object. If
51 gradient nodes may accidentally flow through the same accumulator TensorArray.
52 This double counts and generally breaks the TensorArray gradient flow.
54 The solution is to identify which gradient call this particular
[all …]
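The accumulator mechanics described above stay invisible at the Python level: writing through a tf.TensorArray under tf.GradientTape is enough to make TensorFlow create the gradient TensorArray (keyed by its source string) behind the scenes. A minimal eager-mode sketch, assuming TensorFlow 2:

    import tensorflow as tf

    x = tf.constant([1.0, 2.0, 3.0])
    with tf.GradientTape() as tape:
        tape.watch(x)
        ta = tf.TensorArray(tf.float32, size=3)
        for i in range(3):
            ta = ta.write(i, x[i] * x[i])  # each write feeds the gradient accumulator
        y = tf.reduce_sum(ta.stack())
    print(tape.gradient(y, x))             # [2., 4., 6.], i.e. 2*x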
api_def_FusedBatchNormGrad.pbtxt 6 A 4D Tensor for the gradient with respect to y.
25 mean to be reused in gradient computation. When is_training is
27 1st and 2nd order gradient computation.
35 gradient computation. When is_training is False, a 1D Tensor
37 order gradient computation.
43 A 4D Tensor for the gradient with respect to x.
49 A 1D Tensor for the gradient with respect to scale.
55 A 1D Tensor for the gradient with respect to offset.
api_def_FusedBatchNormGradV2.pbtxt 6 A 4D Tensor for the gradient with respect to y.
25 mean to be reused in gradient computation. When is_training is
27 1st and 2nd order gradient computation.
35 gradient computation. When is_training is False, a 1D Tensor
37 order gradient computation.
43 A 4D Tensor for the gradient with respect to x.
49 A 1D Tensor for the gradient with respect to scale.
55 A 1D Tensor for the gradient with respect to offset.
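Both api_defs describe the same contract: the op consumes the upstream gradient with respect to y and emits gradients with respect to x (4D), scale (1D), and offset (1D). The same three gradients can be observed from Python with a tape over an unfused batch normalization; a hedged sketch, with tf.nn.moments and tf.nn.batch_normalization standing in for the fused kernel:

    import tensorflow as tf

    x = tf.random.normal([2, 4, 4, 3])         # NHWC layout
    scale = tf.Variable(tf.ones([3]))
    offset = tf.Variable(tf.zeros([3]))
    with tf.GradientTape() as tape:
        tape.watch(x)
        mean, var = tf.nn.moments(x, axes=[0, 1, 2])
        y = tf.nn.batch_normalization(x, mean, var, offset, scale, 1e-3)
        loss = tf.reduce_sum(y * y)            # any scalar downstream of y
    dx, dscale, doffset = tape.gradient(loss, [x, scale, offset])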
api_def_SparseAccumulatorApplyGradient.pbtxt 12 The local_step value at which the sparse gradient was computed.
18 Indices of the sparse gradient to be accumulated. Must be a
25 Values are the non-zero slices of the gradient, and must have
33 Shape of the sparse gradient to be accumulated.
50 summary: "Applies a sparse gradient to a given accumulator."
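The indices/values/shape triplet this op consumes is how TensorFlow surfaces sparse gradients in Python: tf.IndexedSlices. One easy way to produce one (an eager-mode sketch) is to differentiate through an embedding lookup:

    import tensorflow as tf

    emb = tf.Variable(tf.ones([10, 4]))
    with tf.GradientTape() as tape:
        rows = tf.nn.embedding_lookup(emb, [1, 3, 3])
        loss = tf.reduce_sum(rows)
    g = tape.gradient(loss, emb)   # a tf.IndexedSlices, not a dense Tensor
    print(g.indices, g.values.shape, g.dense_shape)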
api_def_StridedSliceGrad.pbtxt 3 summary: "Returns the gradient of `StridedSlice`."
6 `shape`, its gradient will have the same shape (which is passed here
7 as `shape`). The gradient will be zero in any element that the slice
11 `dy` is the input gradient to be propagated and `shape` is the
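The zero-outside-the-slice behaviour is easy to confirm from Python, since basic indexing lowers to StridedSlice; a small sketch:

    import tensorflow as tf

    x = tf.range(6.0)
    with tf.GradientTape() as tape:
        tape.watch(x)
        y = x[1:4]                      # lowered to StridedSlice
        loss = tf.reduce_sum(y)
    print(tape.gradient(loss, x))       # [0., 1., 1., 1., 0., 0.]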
api_def_SparseAddGrad.pbtxt 6 1-D with shape `[nnz(sum)]`. The gradient with respect to
32 1-D with shape `[nnz(A)]`. The gradient with respect to the
39 1-D with shape `[nnz(B)]`. The gradient with respect to the
43 summary: "The gradient operator for the SparseAdd op."
46 as `SparseTensor` objects. This op takes in the upstream gradient w.r.t.
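In other words, the op routes the upstream gradient on the sum's non-empty values back to the non-empty values of A and B. A hedged sketch of the Python-level behaviour through tf.sparse.add (assuming the gradient path survives SparseTensor construction, which it should, since the op graph only sees the values tensor):

    import tensorflow as tf

    vals_a = tf.constant([1.0, 2.0])
    with tf.GradientTape() as tape:
        tape.watch(vals_a)
        a = tf.sparse.SparseTensor([[0, 0], [1, 1]], vals_a, [2, 2])
        b = tf.sparse.SparseTensor([[0, 1]], [3.0], [2, 2])
        total = tf.sparse.add(a, b)
        loss = tf.reduce_sum(total.values)
    print(tape.gradient(loss, vals_a))  # [1., 1.]: each nnz of A appears once in the sum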
api_def_AccumulatorApplyGradient.pbtxt 12 The local_step value at which the gradient was computed.
16 name: "gradient"
18 A tensor of the gradient to be accumulated.
28 summary: "Applies a gradient to a given accumulator."
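Conditional accumulators are a TF1-era construct; the closest Python surface is tf.compat.v1.ConditionalAccumulator, whose apply_grad/take_grad map onto the ops documented here. A hedged graph-mode sketch (the exact class and method names are assumptions from the v1 API):

    import tensorflow.compat.v1 as tf1
    tf1.disable_eager_execution()

    acc = tf1.ConditionalAccumulator(dtype=tf1.float32, shape=[2])
    apply_op = acc.apply_grad([1.0, 2.0], local_step=0)  # AccumulatorApplyGradient
    avg = acc.take_grad(num_required=1)                  # average of accepted gradients
    with tf1.Session() as sess:
        sess.run(apply_op)
        print(sess.run(avg))                             # [1. 2.]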
api_def_PreventGradient.pbtxt 22 summary: "An identity op that triggers an error if a gradient is requested."
26 When building ops to compute gradients, the TensorFlow gradient system
27 will return an error when trying to lookup the gradient of this op,
28 because no gradient must ever be registered for this function. This
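Tripping that error from eager code looks like this (a sketch; tf.raw_ops.PreventGradient is assumed to be the exported binding for the op):

    import tensorflow as tf

    x = tf.constant(3.0)
    with tf.GradientTape() as tape:
        tape.watch(x)
        y = tf.raw_ops.PreventGradient(input=x * 2.0,
                                       message="this path must stay gradient-free")
    tape.gradient(y, x)  # raises LookupError, carrying the message above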
api_def_SparseSliceGrad.pbtxt 6 1-D. The gradient with respect to
31 1-D. The gradient with respect to the non-empty values of input `SparseTensor`.
34 summary: "The gradient operator for the SparseSlice op."
36 This op takes in the upstream gradient w.r.t. non-empty values of
api_def_UnbatchGrad.pbtxt 9 original_input: The input to the Unbatch operation this is the gradient of.
10 batch_index: The batch_index given to the Unbatch operation this is the gradient
12 grad: The downstream gradient.
14 batched_grad: The return value, either an empty tensor or the batched gradient.
api_def_TensorArrayGradWithShape.pbtxt 21 An int32 vector representing a shape. Elements in the gradient accumulator will
29 The gradient source string, used to decide which gradient TensorArray
36 expanded shape compared to the input TensorArray whose gradient is being
/external/libxcam/cl_kernel/
kernel_3d_denoise_slm.cl 69 float4 gradient = (float4)(0.0f, 0.0f, 0.0f, 0.0f);
96 gradient = (float4)(ref_cache[mad24(i, 4 * REF_BLOCK_WIDTH, REF_BLOCK_WIDTH + local_id_x + j)].s2,
100 gradient = (gradient - ref_cache[mad24(i, 4 * REF_BLOCK_WIDTH, local_id_x + j)]) +
101 … (gradient - ref_cache[mad24(i, 4 * REF_BLOCK_WIDTH, REF_BLOCK_WIDTH + local_id_x + j)]) +
102 … (gradient - ref_cache[mad24(i, 4 * REF_BLOCK_WIDTH, 2 * REF_BLOCK_WIDTH + local_id_x + j)]) +
103 … (gradient - ref_cache[mad24(i, 4 * REF_BLOCK_WIDTH, 3 * REF_BLOCK_WIDTH + local_id_x + j)]);
104 gradient.s0 = (gradient.s0 + gradient.s1 + gradient.s2 + gradient.s3) / 15.0f;
105 gain = (gradient.s0 < threshold) ? gain : 2.0f * gain;
142 gradient = (float4)(ref_cache[mad24(i, 4 * REF_BLOCK_WIDTH, REF_BLOCK_WIDTH + local_id_x + j)].s2,
146 gradient = (gradient - ref_cache[mad24(i, 4 * REF_BLOCK_WIDTH, local_id_x + j)]) +
[all …]
/external/tensorflow/tensorflow/compiler/tf2xla/
xla_resource.cc 80 for (const string& gradient : tensor_array_gradients) { in XlaResource() local
81 tensor_array_gradients_[gradient].reset(new XlaResource( in XlaResource()
167 std::unique_ptr<XlaResource>& gradient = tensor_array_gradients_[source]; in GetOrCreateTensorArrayGradient() local
168 if (!gradient) { in GetOrCreateTensorArrayGradient()
174 gradient.reset( in GetOrCreateTensorArrayGradient()
181 *gradient_out = gradient.get(); in GetOrCreateTensorArrayGradient()
192 for (const auto& gradient : tensor_array_gradients_) { in Pack() local
193 elems.push_back(gradient.second->value_); in Pack()
218 XlaResource* gradient; in SetFromPack() local
220 GetOrCreateTensorArrayGradient(source, builder, &gradient)); in SetFromPack()
[all …]
/external/tensorflow/tensorflow/python/eager/
backprop_test.py 107 self.assertTrue(t.gradient(result, v) is not None)
131 dx, dy = t.gradient([xx, yy], [x, y])
147 t.gradient(y, [x])
155 dx, = t.gradient([loss, x], [x], output_gradients=[1.0, 2.0])
255 self.assertEqual(t.gradient(y, x).numpy(), 1.0)
262 self.assertEqual(t.gradient(y, x).numpy(), 1.0)
269 self.assertEqual(t.gradient([x, y], x).numpy(), 5.0)
276 self.assertEqual(t.gradient([y, y], x).numpy(), 2.0)
285 self.assertAllEqual(t.gradient([x, y, z], [x, y]), [1.0, 11.0])
294 grads = t.gradient(s, x)
[all …]
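The assertions above exercise the tape API, including the rule that a list of targets sums the per-target gradients (so t.gradient([x, y], x) adds d(x)/dx = 1 to d(y)/dx). A small sketch of both behaviours:

    import tensorflow as tf

    x = tf.constant(3.0)
    with tf.GradientTape(persistent=True) as t:
        t.watch(x)
        y = 4.0 * x
    print(t.gradient(y, x).numpy())       # 4.0
    print(t.gradient([x, y], x).numpy())  # 5.0: list targets are summed
    del t                                 # release the persistent tape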
function_gradients_test.py 125 self.assertAllEqual(self.evaluate(t.gradient(y, x)), 2.0)
140 self.assertAllEqual(self.evaluate(t.gradient(y, x)), 4.0)
254 gradient = grad_fn()
257 self.assertEqual(len(gradient), len(defun_gradient))
259 gradient = gradient[0][0]
261 self.assertAllEqual(gradient.values, defun_gradient.values)
262 self.assertAllEqual(gradient.indices, defun_gradient.indices)
263 self.assertAllEqual(gradient.dense_shape, defun_gradient.dense_shape)
285 g = tp.gradient(r, x)
305 grad = tp.gradient(result, x)
[all …]
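The same tape API differentiates through compiled functions; a minimal sketch assuming TF2's tf.function decorator:

    import tensorflow as tf

    @tf.function
    def square(x):
        return x * x

    x = tf.constant(2.0)
    with tf.GradientTape() as t:
        t.watch(x)
        y = square(x)
    print(t.gradient(y, x).numpy())  # 4.0, matching the eager result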
/external/swiftshader/src/Renderer/
SetupProcessor.cpp 111 state.gradient[interpolant][component].attribute = Unused; in update()
112 state.gradient[interpolant][component].flat = false; in update()
113 state.gradient[interpolant][component].wrap = false; in update()
154 state.gradient[interpolant][component].attribute = input; in update()
155 state.gradient[interpolant][component].flat = flat; in update()
173 state.gradient[interpolant][component].attribute = T0 + semantic.index; in update()
174 state.gradient[interpolant][component].flat = semantic.flat || (point && !sprite); in update()
177 state.gradient[interpolant][component].attribute = C0 + semantic.index; in update()
178 state.gradient[interpolant][component].flat = semantic.flat || flatShading; in update()
/external/skia/src/gpu/gradients/
GrTextureGradientColorizer.h 18 static std::unique_ptr<GrFragmentProcessor> Make(sk_sp<GrTextureProxy> gradient) { in Make() argument
19 return std::unique_ptr<GrFragmentProcessor>(new GrTextureGradientColorizer(gradient)); in Make()
26 GrTextureGradientColorizer(sk_sp<GrTextureProxy> gradient) in GrTextureGradientColorizer() argument
28 , fGradient(std::move(gradient), GrSamplerState::ClampBilerp()) { in GrTextureGradientColorizer()
/external/skqp/src/gpu/gradients/
GrTextureGradientColorizer.h 18 static std::unique_ptr<GrFragmentProcessor> Make(sk_sp<GrTextureProxy> gradient) { in Make() argument
19 return std::unique_ptr<GrFragmentProcessor>(new GrTextureGradientColorizer(gradient)); in Make()
26 GrTextureGradientColorizer(sk_sp<GrTextureProxy> gradient) in GrTextureGradientColorizer() argument
28 , fGradient(std::move(gradient), GrSamplerState::ClampBilerp()) { in GrTextureGradientColorizer()
/external/tensorflow/tensorflow/contrib/layers/python/layers/
optimizers.py 274 for gradient, variable in gradients:
275 if isinstance(gradient, ops.IndexedSlices):
276 grad_values = gradient.values
278 grad_values = gradient
416 for gradient in gradients:
417 if gradient is None:
420 if isinstance(gradient, ops.IndexedSlices):
421 gradient_shape = gradient.dense_shape
423 gradient_shape = gradient.get_shape()
425 noisy_gradients.append(gradient + noise)
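The snippet above is contrib's gradient-noise pass: pick a shape per gradient (dense_shape for IndexedSlices, the static shape otherwise) and add noise of that shape. A hedged TF2 re-sketch of the same pattern (the function name is mine, not contrib's):

    import tensorflow as tf

    def add_gradient_noise(grads_and_vars, stddev=1e-3):
        noisy = []
        for gradient, variable in grads_and_vars:
            if gradient is None:
                noisy.append((None, variable))
                continue
            if isinstance(gradient, tf.IndexedSlices):
                shape = gradient.dense_shape          # noise spans the full dense shape
            else:
                shape = tf.shape(gradient)
            noise = tf.random.normal(shape, stddev=stddev)
            noisy.append((gradient + noise, variable))  # IndexedSlices densify here, as in contrib
        return noisy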
/external/tensorflow/tensorflow/core/kernels/
relu_op_gpu.cu.cc 37 __global__ void ReluGradHalfKernel(const Eigen::half* gradient, in ReluGradHalfKernel() argument
47 half2 gradient_h2 = reinterpret_cast<const half2*>(gradient)[index]; in ReluGradHalfKernel()
76 Eigen::half grad_h = gradient[count - 1]; in ReluGradHalfKernel()
97 typename TTypes<Eigen::half>::ConstTensor gradient, in operator ()()
103 int32 count = gradient.size(); in operator ()()
110 d.stream()>>>(gradient.data(), feature.data(), in operator ()()
fake_quant_ops.cc 118 void Operate(OpKernelContext* context, const Tensor& gradient, in Operate() argument
120 OperateNoTemplate(context, gradient, input, output); in Operate()
123 void OperateNoTemplate(OpKernelContext* context, const Tensor& gradient, in OperateNoTemplate() argument
125 OP_REQUIRES(context, input.IsSameSize(gradient), in OperateNoTemplate()
128 functor(context->eigen_device<Device>(), gradient.flat<float>(), in OperateNoTemplate()
230 const Tensor& gradient = context->input(0); in Compute() local
232 OP_REQUIRES(context, input.IsSameSize(gradient), in Compute()
251 functor(context->eigen_device<Device>(), gradient.flat<float>(), in Compute()
367 const Tensor& gradient = context->input(0); in Compute() local
369 OP_REQUIRES(context, input.IsSameSize(gradient), in Compute()
[all …]
/external/apache-commons-math/src/main/java/org/apache/commons/math/optimization/fitting/
PolynomialFitter.java 87 public double[] gradient(double x, double[] parameters) { in gradient() method in PolynomialFitter.ParametricPolynomial
88 final double[] gradient = new double[parameters.length]; in gradient() local
91 gradient[i] = xn; in gradient()
94 return gradient; in gradient()
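The Java loop works because for p(x) = sum_i c_i * x^i, the partial derivative with respect to c_i is just x^i. A direct Python transcription of the same accumulation:

    def polynomial_gradient(x, parameters):
        # d/dc_i of sum_i c_i * x**i is x**i, so accumulate powers of x
        grad = []
        xn = 1.0
        for _ in parameters:
            grad.append(xn)
            xn *= x
        return grad

    polynomial_gradient(2.0, [0.0, 0.0, 0.0])  # [1.0, 2.0, 4.0]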
/external/tensorflow/tensorflow/cc/gradients/
README.md 10 1. Create the op gradient function in `foo_grad.cc` corresponding to the
14 2. Write the op gradient with the following naming scheme:
30 for the op's inputs and calling `RunTest` (`RunTest` uses a [gradient
32 to verify that the theoretical gradient matches the numeric gradient). For
/external/swiftshader/src/Device/
SetupProcessor.cpp 102 state.gradient[interpolant][component].attribute = Unused; in update()
103 state.gradient[interpolant][component].flat = false; in update()
104 state.gradient[interpolant][component].wrap = false; in update()
136 state.gradient[interpolant][component].attribute = input; in update()
137 state.gradient[interpolant][component].flat = flat; in update()
