Gradient descent with fractional step

\(\rule{125mm}{0.7pt}\)

Algorithm Flowchart

\(\rule{125mm}{0.7pt}\)

Flowchart of the gradient descent with fractional step algorithm.

\(\rule{125mm}{0.7pt}\)

gd_frac(function, x0, epsilon=1e-05, gamma=0.1, delta=0.1, lambda0=0.1, max_iter=500, verbose=False, keep_history=False)

Returns an n x 1 tensor with the optimal point, together with the optimization history, computed by gradient descent with fractional step.

Requirements: \(0 < \lambda_0 < 1\) is the step multiplier; \(0 < \delta < 1\) influences the step size.
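
For orientation, a common textbook formulation of the step-crushing rule is sketched below; the exact condition and the precise roles of \(\gamma\), \(\delta\), and \(\lambda_0\) in this implementation may differ.

\[
x_{k+1} = x_k - \lambda_k \nabla f(x_k),
\qquad
f\big(x_k - \lambda_k \nabla f(x_k)\big) - f(x_k) \le -\delta \lambda_k \|\nabla f(x_k)\|^2,
\]

where the trial step starts at \(\lambda_k = \lambda_0\) and is repeatedly multiplied by the crushing factor until the inequality holds.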

Parameters:
  • function (Callable[[Tensor], Tensor]) – callable that takes the optimization variable as its first positional argument

  • x0 (Tensor) – torch tensor with the initial approximation

  • epsilon (float) – optimization accuracy

  • gamma (float) – gradient step

  • delta (float) – value of the crushing parameter

  • lambda0 (float) – initial step

  • max_iter (int) – maximum number of iterations

  • verbose (bool) – if True, print iteration logs

  • keep_history (bool) – if True, return the optimization history

Returns:

Tuple with the optimal point and the optimization history.

Return type:

Tuple[Tensor, HistoryGD]
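
To make the interplay of epsilon, delta, lambda0 and max_iter concrete, here is a minimal, self-contained PyTorch sketch of the step-crushing idea. It follows the textbook formulation above and is only an illustration, not the library's actual implementation; gd_frac_sketch is a hypothetical helper, and gamma, verbose and keep_history are ignored.

import torch

def gd_frac_sketch(function, x0, epsilon=1e-5, delta=0.1, lambda0=0.1, max_iter=500):
    # Illustrative step-crushing gradient descent (hypothetical helper, not part of the library).
    x = x0.detach().clone().double()
    for _ in range(max_iter):
        # Gradient of `function` at the current point via autograd.
        xg = x.clone().requires_grad_(True)
        function(xg).backward()
        g = xg.grad
        if g.norm() < epsilon:          # stop near a stationary point
            break
        lam = lambda0
        # "Crush" the step until the sufficient-decrease condition holds.
        while function(x - lam * g) > function(x) - delta * lam * g.norm() ** 2:
            lam *= delta
            if lam < 1e-12:             # safeguard against stalling
                break
        x = x - lam * g
    return x

x_min = gd_frac_sketch(lambda x: x[0] ** 2 + x[1] ** 2, torch.tensor([1.0, 2.0]))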

Examples

>>> import torch
>>> def func(x): return x[0] ** 2 + x[1] ** 2
>>> x_0 = torch.tensor([1, 2])
>>> solution = gd_frac(func, x_0)
>>> print(solution[0])
tensor([1.9156e-06, 3.8312e-06], dtype=torch.float64)
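
To also retrieve the optimization history (the HistoryGD object that forms the second element of the returned tuple), pass keep_history=True:

>>> point, history = gd_frac(func, x_0, keep_history=True)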