one_optimize#

golden_section_search(function, bounds, epsilon=1e-05, type_optimization='min', max_iter=500, verbose=False, keep_history=False)[source]#

Returns the optimal point and history using the Golden Section search [2].

\(\rule{125mm}{0.7pt} \\\) \(\textbf{Constant: } \displaystyle \varphi = \frac{(1 + \sqrt{5})}{2} \\\) \(\textbf{Input: } f(x) - \text{ function }; a, b - \text{ left and right bounds }; \varepsilon - \text{ precision } \\\) \(\rule{125mm}{0.3pt}\\\)

\(\text{while } |a - b| > \varepsilon: \\\) \(\qquad \displaystyle x_1 = b - \frac{b - a}{\varphi} \\\) \(\qquad \displaystyle x_2 = a + \frac{b - a}{\varphi} \\\) \(\qquad \text{if } f(x_1) > f(x_2): \\\) \(\qquad \qquad a = x_1 \\\) \(\qquad \text{else}: \\\) \(\qquad \qquad b = x_2 \\\) \(\rule{125mm}{0.3pt}\\\) \(\textbf{Return: } \displaystyle \frac{a+b}{2} \\\) \(\rule{125mm}{0.7pt} \\\)
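The loop above translates almost line-for-line into plain Python. The following is a minimal sketch, not the library's `golden_section_search`: the name `golden_section_sketch` is illustrative, and both interior points are re-evaluated on every pass exactly as in the pseudocode, whereas an efficient implementation reuses one of the two function evaluations.

```python
from typing import Callable, Tuple

PHI = (1 + 5 ** 0.5) / 2  # golden ratio, ~1.618

def golden_section_sketch(f: Callable[[float], float],
                          bounds: Tuple[float, float],
                          epsilon: float = 1e-5) -> float:
    """Minimize f on [a, b]; the bracket shrinks by 1/phi per iteration."""
    a, b = bounds
    while abs(b - a) > epsilon:
        x1 = b - (b - a) / PHI  # left interior point
        x2 = a + (b - a) / PHI  # right interior point
        if f(x1) > f(x2):
            a = x1  # the minimum cannot lie in [a, x1)
        else:
            b = x2  # the minimum cannot lie in (x2, b]
    return (a + b) / 2
```

Because the bracket shrinks by the constant factor 1/φ ≈ 0.618 per iteration, the number of iterations depends only on the initial bracket and ε, not on f.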

Note

If optimization fails, golden_section_search returns the last point found.

References

\(\rule{125mm}{0.2pt} \\\)

Parameters:
  • function (Callable[[float | torch.Tensor], float]) – callable that depends on its first positional argument; other arguments are passed through kwargs

  • bounds (Tuple[float, float]) – tuple of two numbers, the left and right optimization bounds [a, b]

  • epsilon (float) – optimization accuracy

  • type_optimization (Literal['min', 'max']) – type of extremum to find, 'min' or 'max'

  • max_iter (int) – maximum number of iterations

  • verbose (bool) – whether to print iteration logs

  • keep_history (bool) – whether to return the optimization history

Returns:

tuple with the found point and the search history.

Return type:

Tuple[float | torch.Tensor, HistoryGSS]

Examples

>>> def func(x): return 2.71828 ** (3 * x) + 5 * 2.71828 ** (-2 * x)
>>> point, data = golden_section_search(func, (-10, 10), type_optimization='min', keep_history=True)
successive_parabolic_interpolation(function, bounds, epsilon=1e-05, type_optimization='min', max_iter=500, verbose=False, keep_history=False)[source]#

Returns the optimal point and history using the Successive parabolic interpolation algorithm [3].

\(\rule{125mm}{0.7pt} \\\)

(1)#\[\displaystyle x_{i+1}=x_{i}+ \frac{1}{2}\left[\frac{\left(x_{i-1}-x_{i}\right)^{2} \left(f_{i}-f_{i-2}\right)+ \left(x_{i-2}-x_{i}\right)^{2}\left(f_{i-1}-f_{i}\right)}{\left(x_{i-1}-x_{i}\right) \left(f_{i}-f_{i-2}\right)+\left(x_{i-2}-x_{i}\right)\left(f_{i-1}-f_{i}\right)}\right]\\\]

\(\rule{125mm}{0.3pt}\\\)

\(\textbf{Input: } f(x) - \text{ function}; a, b - \text{ left and right bounds}; \varepsilon - \text{ precision } \\\) \(\rule{125mm}{0.3pt}\\\)

\(\displaystyle x_0 = a, \ f_0 = f(x_0); \qquad x_1 = b, \ f_1 = f(x_1); \qquad x_2 = \displaystyle \frac{a+b}{2}, \ f_2 = f(x_2)\\\) \(\text{while } |x_{i+1}-x_{i}| \geq \varepsilon\) or \(|f(x_{i+1})-f(x_{i})| \geq \varepsilon:\\\) \(\qquad \text{relabel } \displaystyle x_0, x_1, x_2\) so that \(f_2 \leq f_1 \leq f_0\\\) \(\qquad \displaystyle \text{Calculate } x_{i + 1} \text{ with formula }\) (1) \(\\\)

\(\rule{125mm}{0.3pt}\\\) \(\textbf{Return: } \displaystyle x_{i+1} \\\) \(\rule{125mm}{0.7pt} \\\)
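Formula (1) is the vertex of the parabola through the three current points. The loop can be sketched in a few lines of Python; this is an illustration only, not the library's `successive_parabolic_interpolation` (the name `spi_sketch` is made up here, and logging and history handling are omitted):

```python
from typing import Callable, Tuple

def spi_sketch(f: Callable[[float], float],
               bounds: Tuple[float, float],
               epsilon: float = 1e-5,
               max_iter: int = 500) -> float:
    """Minimize f by repeatedly jumping to the parabola vertex of formula (1)."""
    a, b = bounds
    x0, x1, x2 = a, b, (a + b) / 2
    f0, f1, f2 = f(x0), f(x1), f(x2)
    for _ in range(max_iter):
        # relabel the points so that f2 <= f1 <= f0 (x2 is the current best)
        (f0, x0), (f1, x1), (f2, x2) = sorted(
            [(f0, x0), (f1, x1), (f2, x2)], reverse=True)
        # formula (1): vertex of the parabola through the three points
        num = (x1 - x2) ** 2 * (f2 - f0) + (x0 - x2) ** 2 * (f1 - f2)
        den = (x1 - x2) * (f2 - f0) + (x0 - x2) * (f1 - f2)
        if den == 0:  # collinear points: the parabola degenerates
            break
        x_new = x2 + 0.5 * num / den
        f_new = f(x_new)
        if abs(x_new - x2) < epsilon and abs(f_new - f2) < epsilon:
            return x_new
        # drop the worst point, keep the three most recent
        x0, f0, x1, f1, x2, f2 = x1, f1, x2, f2, x_new, f_new
    return x2
```

On f(x) = x³ − x² − x over (0, 1.5) this sketch visits 0.750, 0.850, 0.961, … , agreeing with the first iterations of the trace in the example below.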

\(\rule{125mm}{0.2pt} \\\)

Parameters:
  • function (Callable[[float | torch.Tensor], float]) – callable that depends on its first positional argument; other arguments are passed through kwargs

  • bounds (Tuple[float, float]) – tuple of two numbers, the left and right optimization bounds [a, b]

  • epsilon (float) – optimization accuracy

  • type_optimization (Literal['min', 'max']) – type of extremum to find, 'min' or 'max'

  • max_iter (int) – maximum number of iterations

  • verbose (bool) – whether to print iteration logs

  • keep_history (bool) – whether to return the optimization history

Returns:

tuple with the found point and the search history.

Return type:

Tuple[float | torch.Tensor, HistorySPI]

Examples

>>> def func1(x): return x ** 3 - x ** 2 - x
>>> successive_parabolic_interpolation(func1, (0, 1.5), verbose=True)
Iteration: 0    |       x2 = 0.750      |       f(x2) = -0.891
Iteration: 1    |       x2 = 0.850      |       f(x2) = -0.958
Iteration: 2    |       x2 = 0.961      |       f(x2) = -0.997
Iteration: 3    |       x2 = 1.017      |       f(x2) = -0.999
Iteration: 4    |       x2 = 1.001      |       f(x2) = -1.000
...

>>> def func2(x): return - (x ** 3 - x ** 2 - x)
>>> successive_parabolic_interpolation(func2, (0, 1.5), type_optimization='max', verbose=True)
Iteration: 0    |       x2 = 0.750      |       f(x2) =  0.891
Iteration: 1    |       x2 = 0.850      |       f(x2) =  0.958
Iteration: 2    |       x2 = 0.961      |       f(x2) =  0.997
Iteration: 3    |       x2 = 1.017      |       f(x2) =  0.999
...
brent(function, bounds, epsilon=1e-05, type_optimization='min', max_iter=500, verbose=False, keep_history=False)[source]#

Returns the optimal point and history using Brent’s algorithm [1].

\(\rule{125mm}{0.7pt} \\\) \(\textbf{Input: } f(x) - \text{ function }; a, b - \text{ left and right bounds }; \varepsilon - \text{ precision } \\\) \(\rule{125mm}{0.3pt}\\\)

\(\displaystyle \varphi = \frac{(1 + \sqrt{5})}{2} \\\) \(\displaystyle x_{least} = a + (2 - \varphi) \cdot (b - a) \\\) \(\displaystyle x_{new} = x_{least} \\\) \(\displaystyle tolerance = \varepsilon \cdot | x_{least}| + 10^{-9} \\\)

\(\text{while }\displaystyle |x_{least} - \frac{a+b}{2}| > 2 \cdot tolerance - \frac{b-a}{2} :\\\)

\(\qquad \text{if }\displaystyle |x_{new} - x_{least}| > tolerance:\\\) \(\qquad \qquad \text{calculate parabolic } remainder \text{ by formula }\) (1) \(\\\) \(\qquad \text{if } \displaystyle remainder < previous \ remainder \ \& \ x_{least} + remainder \in (a, b):\\\)

\(\qquad \qquad \text{use ``parabolic'' } \displaystyle remainder\\\)

\(\qquad \text{else:}\\\) \(\qquad \qquad \text{make ``golden'' } \displaystyle remainder\\\) \(\qquad \qquad \text{use ``golden'' } \displaystyle remainder\\\) \(\qquad \displaystyle x_{new} = x_{least} + remainder\\\)

\(\rule{125mm}{0.3pt}\\\) \(\textbf{Return: } \displaystyle x_{least} \\\) \(\rule{125mm}{0.7pt} \\\)
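The interplay of parabolic and golden steps above can be sketched as follows. This is a compact reading of the classic algorithm (Brent, 1973), not the library's `brent`: the name `brent_sketch` is illustrative, and one boundary safeguard on accepted parabolic steps is omitted for brevity. Note that the initial point uses the golden fraction 2 − φ ≈ 0.381966, which for bounds (0, 2) gives the 0.763932 seen in the example trace.

```python
from typing import Callable, Tuple

def brent_sketch(f: Callable[[float], float],
                 bounds: Tuple[float, float],
                 epsilon: float = 1e-5,
                 max_iter: int = 500) -> float:
    """Minimize f on [a, b] by mixing parabolic and golden-section steps."""
    cg = (3 - 5 ** 0.5) / 2          # golden fraction, equal to 2 - phi
    a, b = bounds
    x = w = v = a + cg * (b - a)     # best, second-best, third-best points
    fx = fw = fv = f(x)
    d = e = 0.0                      # current step and the step before last
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        tol = epsilon * abs(x) + 1e-9
        if abs(x - m) <= 2.0 * tol - 0.5 * (b - a):
            break                    # bracket is small enough around x
        p = q = 0.0
        if abs(e) > tol:             # try a parabolic fit through v, w, x
            r = (x - w) * (fx - fv)
            q = (x - v) * (fx - fw)
            p = (x - v) * q - (x - w) * r
            q = 2.0 * (q - r)
            if q > 0.0:
                p = -p
            q = abs(q)
        if abs(p) < abs(0.5 * q * e) and q * (a - x) < p < q * (b - x):
            e, d = d, p / q          # parabolic step accepted
        else:
            e = (b - x) if x < m else (a - x)
            d = cg * e               # golden step into the larger segment
        u = x + d if abs(d) >= tol else x + (tol if d > 0 else -tol)
        fu = f(u)
        if fu <= fx:                 # u becomes the new best point
            if u < x:
                b = x
            else:
                a = x
            v, fv, w, fw, x, fx = w, fw, x, fx, u, fu
        else:                        # u only tightens the bracket
            if u < x:
                a = u
            else:
                b = u
            if fu <= fw or w == x:
                v, fv, w, fw = w, fw, u, fu
            elif fu <= fv or v == x or v == w:
                v, fv = u, fu
    return x
```

A parabolic step is accepted only when it stays strictly inside (a, b) and moves less than half the step taken two iterations earlier; otherwise the method falls back to a golden-section step into the larger half of the bracket, preserving the worst-case guarantee of golden-section search.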

References

\(\rule{125mm}{0.2pt} \\\)

Parameters:
  • function (Callable[[float | torch.Tensor], float]) – callable that depends on its first positional argument; other arguments are passed through kwargs

  • bounds (Tuple[float, float]) – tuple of two numbers, the left and right optimization bounds [a, b]

  • epsilon (float) – optimization accuracy

  • type_optimization (Literal['min', 'max']) – type of extremum to find, 'min' or 'max'

  • max_iter (int) – maximum number of iterations

  • verbose (bool) – whether to print iteration logs

  • keep_history (bool) – whether to return the optimization history

Returns:

tuple with the found point and the search history.

Return type:

Tuple[float | torch.Tensor, HistoryBrent]

Examples

>>> brent(lambda x: x ** 3 - x ** 2 - x, (0,2), verbose=True)[0]
iteration 0     x = 0.763932,   f(x) = -0.901699        type : initial
iteration 1     x = 0.763932,   f(x) = -0.901699        type : golden
iteration 2     x = 0.763932,   f(x) = -0.901699        type : golden
iteration 3     x = 0.944272,   f(x) = -0.993962        type : golden
iteration 4     x = 0.944272,   f(x) = -0.993962        type : golden
iteration 5     x = 0.999120,   f(x) = -0.999998        type : parabolic
iteration 6     x = 0.999223,   f(x) = -0.999999        type : parabolic
iteration 7     x = 0.999223,   f(x) = -0.999999        type : golden
iteration 8     x = 0.999992,   f(x) = -1.000000        type : parabolic
iteration 9     x = 1.000002,   f(x) = -1.000000        type : parabolic
iteration 10    x = 1.000002,   f(x) = -1.000000        type : golden
iteration 11    x = 1.000002,   f(x) = -1.000000        type : parabolic
Searching finished. Successfully. code 0
1.0000016327177492