ml.regression#

class BaseRegressionModel[source]#

Bases: Module

Base model for regression.

Variables:
  • w – model weights

  • _best_state – best model state observed during training

  • _best_loss – lowest loss value observed during training

__init__()[source]#

Initialization of the base model shared by the different regression variants

Return type:

None

fit(x, y, epochs=2000, lr=0.0001, l1_constant=0.0, l2_constant=0.0, show_epoch=0, print_function=<built-in function print>)[source]#

Returns the trained regression model.

Training minimizes the loss function:

(1)#\[\mathcal{L}(w) = \lambda_{1} \Vert w \Vert_{1} + \lambda_{2} \Vert w \Vert_{2} + \frac{1}{n}\sum_{i = 1}^{n} (\hat y_i - y_i)^2 \longrightarrow \min_{w}\]

\(x_{i} \in \mathbb{R}^{1\times m}, w \in \mathbb{R}^{m \times 1}, y_{i} \in \mathbb{R}^{1}\)

Parameters:
  • x (Tensor) – training set

  • y (Tensor) – target values

  • epochs (int) – maximum number of gradient-descent epochs

  • lr (float) – Adam optimizer learning rate

  • show_epoch (int) – number of epochs for which progress is printed

  • l1_constant (float) – weight of the l1 regularization term

  • l2_constant (float) – weight of the l2 regularization term

  • print_function (Callable) – function used to print training progress

Returns:

trained model

Return type:

Module
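The minimization of the penalized loss (1) can be sketched as a plain PyTorch training loop. Everything below (the `fit_sketch` name, zero initialization, best-state tracking) is an illustrative sketch, not the library's actual implementation:

```python
import torch

def fit_sketch(x, y, epochs=2000, lr=1e-4, l1_constant=0.0, l2_constant=0.0):
    """Sketch of fit: Adam minimizes the l1/l2-penalized MSE from (1)."""
    n, m = x.shape
    w = torch.zeros(m, 1, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    optimizer = torch.optim.Adam([w, b], lr=lr)
    best_loss, best_state = float("inf"), None
    for _ in range(epochs):
        optimizer.zero_grad()
        y_hat = x @ w + b
        loss = torch.mean((y_hat - y) ** 2)          # MSE term of (1)
        if l1_constant:
            loss = loss + l1_constant * w.abs().sum()    # lambda_1 * ||w||_1
        if l2_constant:
            loss = loss + l2_constant * (w ** 2).sum().sqrt()  # lambda_2 * ||w||_2
        loss.backward()
        optimizer.step()
        if loss.item() < best_loss:                  # remember the best weights
            best_loss = loss.item()
            best_state = (w.detach().clone(), b.detach().clone())
    return best_state, best_loss
```

Tracking the lowest loss and the weights that produced it mirrors the `_best_loss` and `_best_state` attributes documented above.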

forward(x, transformed=True)[source]#

Returns transform(x) @ w, where w is the weight vector and transform(x) is the transformed design matrix:

  1. Linear – identity transform, the matrix is unchanged

  2. Polynomial transform – check polynomial

  3. Exponential transform – check exponential

For the linear model this returns x @ w + b, where w is the weight vector and b is the bias

(2)#\[\hat Y_{n \times 1} = X_{n \times m} \cdot W_{m \times 1} + b \cdot I_{n \times 1} = \begin{bmatrix} w_1 x_{1, 1} + w_2 x_{1, 2} + \dots + w_m x_{1, m} + b\\ \vdots \\ w_1 x_{n, 1} + w_2 x_{n, 2} + \dots + w_m x_{n, m} + b \\ \end{bmatrix}\]

For the non-linear models:

(3)#\[\hat Y_{n \times 1} = X_{\operatorname{transformed}} \cdot W\]
Parameters:
  • x (Tensor) – input observations, tensor n x m (n is the number of observations that have m parameters)

  • transformed (bool) – whether x is already transformed; if True, x will not be transformed again

Returns:

regression value

Return type:

Tensor
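The forward pass described above can be sketched as follows; `forward_sketch` and its signature are illustrative (assuming PyTorch), not the library's code:

```python
import torch

def forward_sketch(x, w, b=None, transform=lambda t: t, transformed=True):
    """Sketch of forward: transform(x) @ w, plus bias for the linear case."""
    if not transformed:
        x = transform(x)        # apply the model's feature map first
    y_hat = x @ w               # linear combination with the weights
    if b is not None:
        y_hat = y_hat + b       # bias term, linear model only
    return y_hat
```

With the default identity transform this reduces to equation (2); passing a non-trivial transform with `transformed=False` gives equation (3).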

init_weights(x)[source]#

Initializes the model weights

Parameters:

x (Tensor) – input observations, tensor n x m (n is the number of observations that have m parameters)

Return type:

None

metrics_tab(x, y)[source]#

Returns a metrics dict with r2, mae, mse and mape

Parameters:
  • x (Tensor) – training set

  • y (Tensor) – target value of regression

Returns:

r2, mae, mse, mape

Return type:

dict
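All four metrics follow from the residuals with their standard formulas; a minimal sketch (the function name is illustrative, not the library's implementation):

```python
import torch

def metrics_tab_sketch(y_hat, y):
    """Sketch of metrics_tab: r2, mae, mse, mape from the residuals."""
    resid = y_hat - y
    mse = torch.mean(resid ** 2)                       # mean squared error
    mae = torch.mean(resid.abs())                      # mean absolute error
    mape = torch.mean((resid / y).abs()) * 100         # percent error; needs y != 0
    r2 = 1 - torch.sum(resid ** 2) / torch.sum((y - y.mean()) ** 2)
    return {"r2": r2.item(), "mae": mae.item(),
            "mse": mse.item(), "mape": mape.item()}
```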

static transform(x)[source]#

Returns the transformed x. The default implementation returns x unchanged (identity transform)

Parameters:

x (Tensor) – torch tensor

Returns:

transformed torch tensor

Return type:

Tensor

class LinearRegression[source]#

Bases: BaseRegressionModel

Model:

(4)#\[\hat y(x) = w_0 + w_1 \cdot x_1 + w_2 \cdot x_2 + \dots + w_m \cdot x_m\]
__init__(bias=True)[source]#

Initialization of the linear regression model

Parameters:

bias (bool) – whether to include a bias (intercept) term

Return type:

None

class PolynomialRegression[source]#

Bases: BaseRegressionModel

Polynomial regression model:

(5)#\[\hat y(x) = \sum_{\alpha_1 + \dots + \alpha_m \leq k} w_{\alpha} \cdot x_1 ^ {\alpha_1} \circ x_2 ^ {\alpha_2} \circ \dots \circ x_m ^ {\alpha_m}\]

\(\alpha_i \in \mathbb{Z}_+\), \(x_i\) is the \(i\)-th column of the \(x\) matrix

\(x_i, y, \hat y \in \mathbb{R}^{n \times 1}\); \(\circ\) is the Hadamard (element-wise) product, like np.array * np.array

__init__(degree)[source]#
Parameters:

degree (int) – degree of polynomial regression

Return type:

None

transform(x)[source]#

Returns the polynomially transformed data

Parameters:

x (Tensor) – torch tensor

Returns:

transformed tensor

Return type:

Tensor
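One way to realize the feature map in equation (5) is to enumerate every monomial of total degree at most `degree`. This is a sketch under that assumption; the library's actual transform may order or include features differently:

```python
import itertools
import torch

def poly_transform_sketch(x, degree):
    """Sketch of the polynomial transform: all monomials of the columns
    of x up to total degree `degree`, constant term included."""
    n, m = x.shape
    cols = [torch.ones(n, 1)]                         # degree-0 (constant) term
    for k in range(1, degree + 1):
        for combo in itertools.combinations_with_replacement(range(m), k):
            col = torch.ones(n)
            for j in combo:
                col = col * x[:, j]                   # Hadamard product of columns
            cols.append(col.unsqueeze(1))
    return torch.cat(cols, dim=1)
```

For m = 2 and degree = 2 this yields the columns \(1, x_1, x_2, x_1^2, x_1 \circ x_2, x_2^2\).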

class ExponentialRegression[source]#

Bases: BaseRegressionModel

Exponential regression model:

(6)#\[\hat y_i = \exp(w_0 + w_1 \cdot x_1 + w_2 \cdot x_2 + \dots + w_m \cdot x_m)\]
__init__()[source]#

Initialization of the exponential regression model

fit(x, y, epochs=5000, lr=0.0001, l1_constant=0.0, l2_constant=0.0, show_epoch=0, print_function=<built-in function print>)[source]#

Returns the trained exponential regression model

Parameters:
  • x (Tensor) – training set

  • y (Tensor) – target values

  • epochs (int) – maximum number of gradient-descent epochs

  • lr (float) – Adam optimizer learning rate

  • show_epoch (int) – number of epochs for which progress is printed

  • l1_constant (float) – weight of the l1 regularization term

  • l2_constant (float) – weight of the l2 regularization term

  • print_function (Callable) – function used to print training progress

Returns:

trained model

Return type:

Module

forward(x, transformed=True)[source]#

Returns the exponential regression prediction

(7)#\[\hat y = \exp (x \cdot w + b) \]
Parameters:
  • x (Tensor) – input observations, tensor n x m (n is the number of observations that have m parameters)

  • transformed (bool) – whether x is already transformed; if True, x will not be transformed again

Returns:

regression value

Return type:

Tensor
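Equation (7) is a one-liner. The sketch below (illustrative, not the library's code) also shows the log-linear structure of the model: taking the logarithm of the prediction recovers the linear part exactly:

```python
import torch

def exp_forward_sketch(x, w, b):
    """Sketch of the exponential forward pass (7): y_hat = exp(x @ w + b)."""
    return torch.exp(x @ w + b)
```

Because \(\log \hat y = x \cdot w + b\), predictions are always positive and the model is linear in the log domain.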