Lines Matching refs:Delta
45 determine a correction :math:`\Delta x` to the vector :math:`x`. For
47 the linearization :math:`F(x+\Delta x) \approx F(x) + J(x)\Delta x`,
50 .. math:: \min_{\Delta x} \frac{1}{2}\|J(x)\Delta x + F(x)\|^2
54 updating :math:`x \leftarrow x+ \Delta x` leads to an algorithm that
56 the size of the step :math:`\Delta x`. Depending on how the size of
57 the step :math:`\Delta x` is controlled, non-linear optimization
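As a concrete illustration of the update :math:`x \leftarrow x + \Delta x` described above, here is a minimal sketch of a single undamped Gauss-Newton iteration. It is plain NumPy, not the Ceres API, and ``residual_and_jacobian`` is a hypothetical user-supplied callback returning :math:`F(x)` and :math:`J(x)`:

.. code-block:: python

    import numpy as np

    def gauss_newton_step(residual_and_jacobian, x):
        """One undamped update x <- x + dx, where dx minimizes the
        linearized objective 0.5 * ||J(x) dx + F(x)||^2."""
        F, J = residual_and_jacobian(x)              # F: (m,), J: (m, n)
        dx, *_ = np.linalg.lstsq(J, -F, rcond=None)  # least-squares solve
        return x + dx

Without any control of the step size this iteration can diverge, which is exactly the problem the trust region and line search strategies discussed below address.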
92 \arg \min_{\Delta x}& \frac{1}{2}\|J(x)\Delta x + F(x)\|^2 \\
93 \text{such that} &\|D(x)\Delta x\|^2 \le \mu\\
94 &L \le x + \Delta x \le U.
96 3. :math:`\rho = \frac{\displaystyle \|F(x + \Delta x)\|^2 -
97 \|F(x)\|^2}{\displaystyle \|J(x)\Delta x + F(x)\|^2 - \|F(x)\|^2}`
99 4. if :math:`\rho > \epsilon` then :math:`x = x + \Delta x`.
106 :math:`\rho` measures the quality of the step :math:`\Delta x`, i.e.,
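Put together, the four steps above become a loop in which :math:`\rho` decides whether the step is accepted and how the region size :math:`\mu` changes. The sketch below is a simplified NumPy rendering with :math:`D(x) = I`, no bound constraints, and a crude doubling/halving update of :math:`\mu`; it illustrates the control flow only, not Ceres' actual strategy:

.. code-block:: python

    import numpy as np

    def trust_region_loop(residual_and_jacobian, x, mu=1e4, eps=1e-3,
                          max_iter=50):
        """Skeleton trust region loop: solve a damped subproblem, compute
        the gain ratio rho, accept or reject the step, adjust mu."""
        for _ in range(max_iter):
            F, J = residual_and_jacobian(x)
            # Regularized subproblem with D(x) = I:
            # min 0.5*||J dx + F||^2 + (1/mu)*||dx||^2.
            n = J.shape[1]
            A = J.T @ J + (2.0 / mu) * np.eye(n)
            dx = np.linalg.solve(A, -J.T @ F)
            F_new, _ = residual_and_jacobian(x + dx)
            model = J @ dx + F
            rho = (F_new @ F_new - F @ F) / (model @ model - F @ F)
            if rho > eps:          # real decrease: accept and grow the region
                x = x + dx
                mu *= 2.0
            else:                  # poor model prediction: shrink the region
                mu *= 0.5
        return x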
117 \arg \min_{\Delta x}& \frac{1}{2}\|J(x)\Delta x + F(x)\|^2 \\
118 \text{such that} &\|D(x)\Delta x\|^2 \le \mu\\
119 &L \le x + \Delta x \le U.
154 .. math:: \arg\min_{\Delta x}& \frac{1}{2}\|J(x)\Delta x + F(x)\|^2 +\lambda \|D(x)\Delta x\|^2
159 .. math:: \arg\min_{\Delta x}& \frac{1}{2}\|J(x)\Delta x + F(x)\|^2 + \frac{1}{\mu} \|D(x)\Delta x\|^2
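Neither form above spells out how the unconstrained step is actually computed. Setting the gradient of the regularized objective to zero (a derivation step added here for clarity; the factor of 2 comes from differentiating the quadratic penalty and is commonly absorbed into the damping parameter) gives the augmented normal equations:

.. math:: \left(J(x)^\top J(x) + \frac{2}{\mu} D(x)^\top D(x)\right) \Delta x = -J(x)^\top F(x).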
172 .. math:: \min_{\Delta x} \frac{1}{2} \|J(x)\Delta x + f(x)\|^2 .
199 .. math:: \|H(x) \Delta x + g(x)\| \leq \eta_k \|g(x)\|.
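This forcing condition is what makes the step "inexact": an iterative linear solver is run on the Newton (or Gauss-Newton) system and stopped as soon as the relative residual drops below :math:`\eta_k`. A minimal dense sketch using unpreconditioned conjugate gradients (assumes :math:`H` is symmetric positive definite; Ceres uses preconditioned variants):

.. code-block:: python

    import numpy as np

    def inexact_newton_step(H, g, eta):
        """Approximately solve H dx = -g with conjugate gradients, stopping
        once ||H dx + g|| <= eta * ||g||."""
        n = g.shape[0]
        dx = np.zeros(n)
        r = -g - H @ dx                    # residual, so ||r|| = ||H dx + g||
        p = r.copy()
        tol = eta * np.linalg.norm(g)
        for _ in range(2 * n):             # CG needs at most n exact steps
            if np.linalg.norm(r) <= tol:
                break
            Hp = H @ p
            alpha = (r @ r) / (p @ Hp)
            dx = dx + alpha * p
            r_new = r - alpha * Hp
            p = r_new + ((r_new @ r_new) / (r @ r)) * p
            r = r_new
        return dx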
226 \Delta x^{\text{Gauss-Newton}} &= \arg \min_{\Delta x}\frac{1}{2} \|J(x)\Delta x + f(x)\|^2.\\
227 \Delta x^{\text{Cauchy}} &= -\frac{\|g(x)\|^2}{\|J(x)g(x)\|^2}g(x).
229 Note that the vector :math:`\Delta x^{\text{Gauss-Newton}}` is the
230 solution to :eq:`linearapprox` and :math:`\Delta
233 of the gradient. Dogleg methods find a vector :math:`\Delta x`
234 defined by :math:`\Delta x^{\text{Gauss-Newton}}` and :math:`\Delta
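For completeness, here is a sketch of the classical ("traditional") dogleg step assembled from these two vectors, using a spherical trust region of a given radius. :math:`g(x) = J(x)^\top f(x)` is taken to be the gradient, as in the Cauchy point formula above; this is an illustration, not the subspace variant or the exact logic Ceres implements:

.. code-block:: python

    import numpy as np

    def dogleg_step(J, f, radius):
        """Interpolate between the Cauchy point and the Gauss-Newton point,
        staying inside a spherical trust region of the given radius."""
        g = J.T @ f                                    # gradient of 0.5*||f||^2
        dx_cauchy = -(g @ g) / np.linalg.norm(J @ g) ** 2 * g
        dx_gn, *_ = np.linalg.lstsq(J, -f, rcond=None)

        if np.linalg.norm(dx_gn) <= radius:            # GN point is inside
            return dx_gn
        if np.linalg.norm(dx_cauchy) >= radius:        # even the Cauchy point
            return radius * dx_cauchy / np.linalg.norm(dx_cauchy)  # is outside
        # Walk from the Cauchy point toward the GN point until the boundary
        # is reached: solve ||dx_cauchy + t * d||^2 = radius^2 for t in (0, 1].
        d = dx_gn - dx_cauchy
        a = d @ d
        b = 2.0 * (dx_cauchy @ d)
        c = dx_cauchy @ dx_cauchy - radius ** 2
        t = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
        return dx_cauchy + t * d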
373 2. :math:`\Delta x = -H^{-1}(x) g(x)`
374 3. :math:`\arg \min_\mu \frac{1}{2} \| F(x + \mu \Delta x) \|^2`
375 4. :math:`x = x + \mu \Delta x`
381 different search directions :math:`\Delta x`.
384 :math:`\Delta x` is what gives this class of methods its name.
387 direction :math:`\Delta x` and the method used for one dimensional
388 optimization along :math:`\Delta x`. The choice of :math:`H(x)` is the
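As an illustration of steps 2-4 above, the sketch below makes the simplest possible choices: :math:`H(x) = I`, which reduces the search direction to steepest descent, and a backtracking (Armijo) rule standing in for the one dimensional optimization over :math:`\mu`. ``residual_and_jacobian`` is again a hypothetical callback, and none of this is Ceres' actual line search implementation:

.. code-block:: python

    import numpy as np

    def steepest_descent(residual_and_jacobian, x, max_iter=100, tol=1e-8):
        """Line search loop: pick a descent direction dx, choose the step
        size mu along it, then update x = x + mu * dx."""
        for _ in range(max_iter):
            F, J = residual_and_jacobian(x)
            g = J.T @ F                    # gradient of 0.5*||F(x)||^2
            if np.linalg.norm(g) < tol:
                break
            dx = -g                        # H(x) = I  ->  steepest descent
            cost, mu = 0.5 * (F @ F), 1.0
            # Backtracking stands in for the exact 1-D minimization over mu.
            while mu > 1e-12:
                F_new, _ = residual_and_jacobian(x + mu * dx)
                if 0.5 * (F_new @ F_new) <= cost + 1e-4 * mu * (g @ dx):
                    break
                mu *= 0.5
            x = x + mu * dx
        return x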
432 .. math:: \min_{\Delta x} \frac{1}{2} \|J(x)\Delta x + f(x)\|^2 .
440 .. math:: H \Delta x = g
457 .. math:: \Delta x^* = -R^{-1}Q^\top f
475 \Delta x^* = R^{-1} R^{-\top} g.
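The two expressions above compute the same step by different routes: a QR factorization of :math:`J`, or a Cholesky factorization of :math:`H = J^\top J = R^\top R` with :math:`g = -J^\top f`. A small NumPy check of this equivalence (dense and unscaled, purely illustrative):

.. code-block:: python

    import numpy as np

    rng = np.random.default_rng(0)
    J = rng.standard_normal((8, 3))        # Jacobian, m >= n, full column rank
    f = rng.standard_normal(8)             # residual vector

    # Route 1: thin QR of J, then dx = -R^{-1} Q^T f.
    Q, R = np.linalg.qr(J)
    dx_qr = np.linalg.solve(R, -Q.T @ f)

    # Route 2: normal equations H dx = g with H = J^T J and g = -J^T f,
    # solved as dx = R^{-1} R^{-T} g via two triangular solves (H = L L^T).
    L = np.linalg.cholesky(J.T @ J)        # L equals R^T up to row signs
    dx_chol = np.linalg.solve(L.T, np.linalg.solve(L, -J.T @ f))

    assert np.allclose(dx_qr, dx_chol)     # both are the Gauss-Newton step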
529 for each observation. Let us now block partition :math:`\Delta x =
530 [\Delta y,\Delta z]` and :math:`g=[v,w]` to restate :eq:`normal`
534 \left[ \begin{matrix} B & E\\ E^\top & C \end{matrix} \right]\left[ \begin{matrix} \Delta y \\ \Delta z \end{matrix} \right] = \left[ \begin{matrix} v\\ w \end{matrix} \right]
543 :math:`\Delta z` by observing that :math:`\Delta z = C^{-1}(w - E^\top
544 \Delta y)`, giving us
546 .. math:: \left[B - EC^{-1}E^\top\right] \Delta y = v - EC^{-1}w\ .
564 :math:`\Delta y`, and then back-substituting :math:`\Delta y` to
565 obtain the value of :math:`\Delta z`. Thus, the solution of what was
908 .. math:: \|\Delta x_k\|_\infty < \text{min_line_search_step_size}
911 :math:`\Delta x_k` is the step change in the parameter values at
1125 .. math:: \frac{|\Delta \text{cost}|}{\text{cost}} < \text{function_tolerance}
1127 where :math:`\Delta \text{cost}` is the change in objective
1150 .. math:: \|\Delta x\| < (\|x\| + \text{parameter_tolerance}) * \text{parameter_tolerance}
1152 where :math:`\Delta x` is the step computed by the linear solver in
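A small helper showing how these two termination tests might be evaluated together; the names mirror the options quoted above, the default values are only illustrative, and Ceres' exact bookkeeping of the costs may differ:

.. code-block:: python

    import numpy as np

    def converged(cost_old, cost_new, x, dx,
                  function_tolerance=1e-6, parameter_tolerance=1e-8):
        """Evaluate the function_tolerance and parameter_tolerance tests."""
        function_test = abs(cost_new - cost_old) / cost_old < function_tolerance
        parameter_test = (np.linalg.norm(dx) <
                          (np.linalg.norm(x) + parameter_tolerance)
                          * parameter_tolerance)
        return function_test or parameter_test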
1518 \Delta f &= \frac{f((1 + \delta) x) - f(x)}{\delta x}
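A tiny sketch of the forward difference with a relative step size that this formula describes; the function and the default step size are illustrative only:

.. code-block:: python

    def forward_difference(f, x, relative_step_size=1e-6):
        """Delta f = (f((1 + delta) x) - f(x)) / (delta * x); note that a
        purely relative step breaks down as x approaches zero."""
        h = relative_step_size * x
        return (f(x + h) - f(x)) / h

    # Example: d/dx (x^2) at x = 2.0 is approximately 4.
    print(forward_difference(lambda t: t * t, 2.0))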