visualize.ml_animation#
- gen_classification_plot(x_tensor, y_true, model=None, threshold=0.5, cnt_points=1000, k=0.1, title=None, epsilon=0.0001, insert_na=False)[source]#
Returns a graph of the distribution with an optional decision-boundary line. If dim(x) = 2, you can pass a model and its decision boundary will be drawn. If dim(x) > 2, the points are first projected with TSNE from sklearn.manifold (default settings) and the resulting graph is returned. dim(x) = 1 is not supported.
Note
If the model is linear with a single layer and a simple activation function, visualization will be faster.
Warning
If the model is heavy, you should reduce cnt_points; however, the probability of missing contour points then grows, and the visualization may become inaccurate. You can widen the contour band by increasing epsilon.
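The interplay between cnt_points, k, threshold, and epsilon can be sketched as follows. This is a simplified reconstruction, not the library's implementation; `toy_model` is a hypothetical stand-in for any callable returning class-1 probabilities, and NumPy is used for brevity.

```python
import numpy as np

# Hypothetical stand-in for model(x): a logistic function of a linear score.
def toy_model(points):
    return 1.0 / (1.0 + np.exp(-(points[:, 0] + points[:, 1])))

x = np.array([[-1.0, -1.0], [1.0, 1.0]])   # toy training points
k, cnt_points = 0.1, 200
threshold, epsilon = 0.5, 0.01

# Padded drawing interval, as in the description of the k parameter
lo = x.min() - (x.max() - x.min()) * k
hi = x.max() + (x.max() - x.min()) * k

# Dense grid: cnt_points values on each of the two axes
axis = np.linspace(lo, hi, cnt_points)
xx, yy = np.meshgrid(axis, axis)
grid = np.column_stack([xx.ravel(), yy.ravel()])

# Keep only grid points whose probability falls inside the epsilon band
# around the threshold -- these approximate the decision boundary.
p = toy_model(grid)
contour = grid[np.abs(p - threshold) <= epsilon]
print(contour.shape[1])  # 2
```

With a coarser grid (smaller cnt_points) fewer grid points fall inside the band, which is why a heavy model evaluated on few points may need a larger epsilon.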
- Parameters:
x_tensor (Tensor) – training tensor
y_true (Tensor) – target tensor. array with true values of binary classification
model (Optional[Module]) – some model that returns a torch tensor with class 1 probabilities using the call: model(x)
threshold (float) – decision threshold: if model(x_i) >= threshold, then the predicted label y_i is 1
cnt_points (int) – number of points on each of the two axes when dim(x) = 2
k (float) – padding constant: the plot is drawn on the interval [x.min() - (x.max() - x.min()) * k, x.max() + (x.max() - x.min()) * k]
title (Optional[str]) – title of plots
epsilon (float) – contour line points: \(\{x\in \mathbb{R}^2 \, | \, \text{threshold} - \text{epsilon} \le \text{model}(x) \le \text{threshold} + \text{epsilon}\}\)
insert_na (bool) – if True, insert NaN between two consecutive contour points that are too far apart, so that they are not connected by a line
- Returns:
scatter plot go.Figure
- Return type:
Figure
>>> from sklearn.datasets import make_moons
>>> torch.random.manual_seed(7)
>>> x, y = make_moons(1000, noise=0.15, random_state=7)
>>> x, y = torch.tensor(x), torch.tensor(y)
>>> lr_rbf = LogisticRegressionRBF(x[:50])
>>> lr_rbf.fit(x, y, epochs=5000)
>>> lr_rbf.metrics_tab(x, y)
{'recall': 0.9980000257492065, 'precision': 0.9842209219932556, 'accuracy': 0.9909999966621399, 'f1': 0.9910625822119956, 'auc_roc': 0.9995800006320514}
>>> gen_classification_plot(x, y, lr_rbf, threshold=0.5, epsilon=0.001)
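For dim(x) > 2 the function falls back to a t-SNE projection before plotting. The fallback can be sketched like this (a conceptual sketch using sklearn's default TSNE settings, as the description above states; the scattering itself is omitted):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(7)
x_high = rng.normal(size=(100, 5))   # dim(x) = 5 > 2

# Project to the plane with default settings; the 2-d embedding
# is what gets scattered instead of x itself, so no model boundary
# can be drawn in this mode.
embedding = TSNE(n_components=2).fit_transform(x_high)
print(embedding.shape)  # (100, 2)
```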
- roc_curve_plot(y_true, y_prob, fill=False)[source]#
Returns a plotly go.Figure with the ROC curve.
- Parameters:
y_true (Tensor) – array with true values of binary classification
y_prob (Tensor) – array of predicted probabilities of belonging to class 1
fill (bool) – flag for filling the area under the curve
- Returns:
go.Figure
- Return type:
Figure
>>> yt = torch.tensor([1, 1, 0, 0, 1, 0])
>>> yp = torch.tensor([0.7, 0.6, 0.3, 0.5, 0.4, 0.4])
>>> roc_curve_plot(yt, yp)
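The (FPR, TPR) points that such a plot draws can be reconstructed by sweeping the decision threshold over the predicted probabilities. A minimal sketch in plain Python (not the library's implementation):

```python
def roc_points(y_true, y_prob):
    """Compute (FPR, TPR) pairs by sweeping the decision threshold."""
    pos = sum(y_true)
    neg = len(y_true) - pos
    points = []
    # Every distinct probability (plus sentinels 0 and 1) is a candidate threshold
    for t in sorted(set(y_prob) | {0.0, 1.0}, reverse=True):
        tp = sum(1 for yt, yp in zip(y_true, y_prob) if yp >= t and yt == 1)
        fp = sum(1 for yt, yp in zip(y_true, y_prob) if yp >= t and yt == 0)
        points.append((fp / neg, tp / pos))
    return points

# Same data as the doctest above, as plain lists
yt = [1, 1, 0, 0, 1, 0]
yp = [0.7, 0.6, 0.3, 0.5, 0.4, 0.4]
print(roc_points(yt, yp))
```

The curve always starts at (0, 0) (threshold above every probability) and ends at (1, 1) (threshold below every probability).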
- gen_regression_plot(x_tensor, y_tensor, model=None, title='<b>Scatter plot</b>')[source]#
Returns a graph with a regression and scatter of initial distribution.
Note
Supports a 1-dimensional x_tensor. If x_tensor is n-dimensional, t-SNE is applied to project the data onto the plane.
- Parameters:
x_tensor (Tensor) – training tensor
y_tensor (Tensor) – target tensor. array with true regression values
model (Optional[Module]) – some model that returns a torch tensor of predicted regression values using the call: model(x)
title (Optional[str]) – title of plots
- Returns:
go.Figure with the scatter plot and the regression line
- Return type:
Figure
>>> from sklearn.datasets import make_regression
>>> x, y = make_regression(200, 1, noise=20, random_state=21)
>>> x, y = torch.tensor(x), torch.tensor(y)
>>> regression = LinearRegression().fit(x, y)
>>> gen_regression_plot(x, y, regression)
>>> # Let's create 4-dimensional data and perform a linear regression.
>>> # After that, t-sne will show the data on the plane
>>> x, y = make_regression(200, 4, noise=20, random_state=21)
>>> x, y = torch.tensor(x), torch.tensor(y)
>>> regression = LinearRegression().fit(x, y)
>>> gen_regression_plot(x, y, regression)
>>> regression.metrics_tab(x, y)
{'r2': 0.9711183309555054,
'mae': 15.044872283935547,
'mse': 365.99530029296875,
'mape': 55.71377182006836}
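The metrics reported by metrics_tab can be sketched from their standard definitions. This is an illustrative reconstruction (the function and data below are hypothetical, not from the library), using NumPy for brevity:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Standard regression metrics; MAPE is in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    mae = float(np.mean(np.abs(err)))
    # R^2 = 1 - residual variance / total variance of the target
    r2 = 1.0 - mse / float(np.var(y_true))
    # MAPE assumes no zeros in y_true
    mape = float(np.mean(np.abs(err / y_true))) * 100.0
    return {'r2': r2, 'mae': mae, 'mse': mse, 'mape': mape}

m = regression_metrics([3.0, -0.5, 2.0, 7.0], [2.5, 0.0, 2.0, 8.0])
print(round(m['mse'], 3), round(m['mae'], 3))  # 0.375 0.5
```

Note that a large MAPE (as in the doctest output above) can simply reflect target values close to zero, not necessarily a poor fit.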