Gradient descent with optimal step

Algorithm Flowchart

[Figure: gradient descent with optimal step algorithm flowchart]

gd_optimal(function, x0, epsilon=1e-05, max_iter=500, verbose=False, keep_history=False)

Returns a tuple with the optimal point (an \(n \times 1\) tensor) and the history, computed by gradient descent with an optimal step. The idea is to choose, at each iteration, a step size \(\gamma_k\) that minimizes the function along the anti-gradient direction \(-f'(x_k)\).
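Concretely, the update can be written as follows (a standard formulation of exact line search along the gradient, consistent with the description above; the library's internal one-dimensional minimizer may differ):

\[\gamma_k = \arg\min_{\gamma \ge 0} f\bigl(x_k - \gamma\, f'(x_k)\bigr), \qquad x_{k+1} = x_k - \gamma_k\, f'(x_k)\]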

Parameters:
  • function (Callable[[Tensor], Tensor]) – objective to minimize; a callable that takes the current point as its first positional argument

  • x0 (Tensor) – torch Tensor with the initial approximation (starting point)

  • epsilon (float) – optimization accuracy (tolerance for the stopping criterion)

  • max_iter (int) – maximum number of iterations

  • verbose (bool) – if True, print logs at each iteration

  • keep_history (bool) – if True, collect and return the optimization history

Returns:

tuple with the final point and the optimization history.

Return type:

Tuple[Tensor, HistoryGD]

Examples

>>> import torch
>>> def func(x): return -torch.exp(-x[0] ** 2 - x[1] ** 2)
>>> x_0 = torch.tensor([1.0, 2.0])
>>> solution = gd_optimal(func, x_0)
>>> print(solution[0])
tensor([9.2070e-08, 1.8405e-07], dtype=torch.float64)
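For intuition, here is a minimal, self-contained sketch of the method itself, not the library's implementation: exact-line-search gradient descent in which the step size is found by a golden-section search. The bracket [0, 1] for \(\gamma\) and the inner tolerance 1e-9 are illustrative assumptions, not values taken from the library.

import torch

def gd_optimal_sketch(function, x0, epsilon=1e-5, max_iter=500):
    # Illustrative sketch only, not the library's gd_optimal.
    x = x0.detach().double().requires_grad_(True)
    for _ in range(max_iter):
        grad, = torch.autograd.grad(function(x), x)
        if grad.norm() < epsilon:  # stop once the gradient is small
            break
        # Golden-section search for gamma on the (assumed) bracket [0, 1]
        phi = (5 ** 0.5 - 1) / 2
        a, b = 0.0, 1.0
        with torch.no_grad():
            while b - a > 1e-9:
                c = b - phi * (b - a)
                d = a + phi * (b - a)
                if function(x - c * grad) < function(x - d * grad):
                    b = d
                else:
                    a = c
        gamma = (a + b) / 2
        x = (x - gamma * grad).detach().requires_grad_(True)
    return x.detach()

Running gd_optimal_sketch(func, torch.tensor([1.0, 2.0])) on the example function above converges to a point near the origin, mirroring the behavior shown in the doctest.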