Regression metrics#
Regression metrics are used to evaluate the performance of regression models, which are machine learning models that predict continuous numeric values rather than discrete classes.
As usual, denote the dataset \(\mathcal D = \{(\boldsymbol x_i, y_i)\}_{i=1}^n\), \(y_i \in \mathbb R\), and let \(\widehat y_i\) be the predictions of some regression model. Regression metrics show how good these predictions are.
Mean Squared Error (MSE)#
MSE calculates the average squared difference between the predicted values and the actual target values:

$$
\mathrm{MSE} = \frac 1n \sum\limits_{i=1}^n (y_i - \widehat y_i)^2.
$$

MSE gives more weight to large errors than to small ones because it squares the differences between predicted and actual values. Depending on the task, this behavior can be an advantage or a drawback.
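A minimal NumPy sketch of this definition (the helper `mse` and the toy numbers are ours, for illustration only); note how a single large error dominates the metric:

import numpy as np

def mse(y_true, y_pred):
    # plain NumPy version of mean squared error (our own helper, not a library function)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean((y_true - y_pred) ** 2)

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 4.1])
print(mse(y_true, y_pred))           # 0.0175: all errors are small

y_pred_outlier = np.array([1.1, 1.9, 3.2, 14.0])
print(mse(y_true, y_pred_outlier))   # 25.015: one large error dominates the average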
Advantages of MSE
MSE is a smooth function of the predictions, which makes it suitable for gradient-based optimization

Mathematical convenience: for instance, minimizing MSE in linear regression admits a closed-form solution
Disadvantages of MSE
MSE is highly sensitive to outliers in the data
MSE is not scale-invariant: it is measured in the squared units of the target variable, which hurts interpretability

The squaring operation places more emphasis on larger errors, which is undesirable when all errors should be weighted equally
Root Mean Squared Error (RMSE)#
RMSE is the square root of MSE:

$$
\mathrm{RMSE} = \sqrt{\mathrm{MSE}} = \sqrt{\frac 1n \sum\limits_{i=1}^n (y_i - \widehat y_i)^2}.
$$

It measures the typical magnitude of the prediction errors and is expressed in the same units as the target variable. Thus, RMSE is more interpretable than MSE.
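In code, RMSE is just the square root of MSE. A small sketch with scikit-learn (toy numbers are ours; recent scikit-learn versions also ship a dedicated `root_mean_squared_error` in `sklearn.metrics`):

import numpy as np
from sklearn.metrics import mean_squared_error

y_true = [1.0, 2.0, 3.0]
y_pred = [1.5, 1.8, 2.6]

# RMSE is in the same units as the target, unlike MSE
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
print(rmse)  # ≈ 0.387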
\(R^2\)-score#
To overcome some flaws of MSE, the coefficient of determination (or \(R^2\)-score) is used:

$$
R^2 = 1 - \frac{\sum\limits_{i=1}^n (y_i - \widehat y_i)^2}{\sum\limits_{i=1}^n (y_i - \overline y)^2}, \quad \overline y = \frac 1n \sum\limits_{i=1}^n y_i.
$$

The coefficient of determination shows the proportion of the target's variance that is explained by the model. The \(R^2\)-score does not exceed \(1\) (the greater, the better); it can even be negative if the model predicts worse than the constant baseline \(\overline y\).
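A quick sketch checking this formula against scikit-learn's `r2_score` (toy numbers are ours):

import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([3.0, 2.0, 4.0, 5.0])
y_pred = np.array([2.8, 2.3, 3.9, 4.6])

# R^2 = 1 - RSS / TSS, where TSS is the squared error of the constant prediction mean(y)
rss = np.sum((y_true - y_pred) ** 2)
tss = np.sum((y_true - np.mean(y_true)) ** 2)
print(1 - rss / tss, r2_score(y_true, y_pred))  # both print 0.94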
Mean Absolute Error (MAE)#
MAE calculates the average absolute difference between the predicted values and the actual target values:

$$
\mathrm{MAE} = \frac 1n \sum\limits_{i=1}^n |y_i - \widehat y_i|.
$$

It gives an indication of how far off the predictions are on average.
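A minimal NumPy sketch of this definition (the helper `mae` and the toy numbers are ours); compare how a single bad prediction affects MAE and MSE:

import numpy as np

def mae(y_true, y_pred):
    # plain NumPy version of mean absolute error (our own helper)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(np.abs(y_true - y_pred))

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 14.0])  # the last prediction is way off
print(mae(y_true, y_pred))                # 2.6: MAE grows linearly with the outlier
print(np.mean((y_true - y_pred) ** 2))    # 25.015: MSE grows quadratically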
Advantages of MAE
MAE is straightforward to understand and interpret
MAE is less sensitive to outliers compared to some other metrics like MSE or RMSE
MAE is expressed in the same units as the target variable, which makes the error easy to relate to the quantity being predicted
Disadvantages of MAE
The absolute value is not differentiable at zero, which can cause issues for gradient-based optimization algorithms

While MAE is less sensitive to outliers than MSE or RMSE, it is not completely immune to their influence: extreme outliers can still noticeably inflate it
MAPE#
The Mean Absolute Percentage Error (MAPE) metric is commonly used to measure the accuracy of forecasts or predictions, especially in time series forecasting and demand forecasting:

$$
\mathrm{MAPE} = \frac 1n \sum\limits_{i=1}^n \left|\frac{y_i - \widehat y_i}{y_i}\right|.
$$
This value is undefined if \(y_i = 0\), which is why another version of MAPE is sometimes used: the symmetric mean absolute percentage error (sMAPE):

$$
\mathrm{sMAPE} = \frac 1n \sum\limits_{i=1}^n \frac{2|y_i - \widehat y_i|}{|y_i| + |\widehat y_i|}.
$$

(Several slightly different definitions of sMAPE circulate in the literature; this is one common variant.)
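Scikit-learn provides `mean_absolute_percentage_error` but, to our knowledge, no built-in sMAPE, so here is a sketch of the variant defined above (the helper name `smape` is ours):

import numpy as np

def smape(y_true, y_pred):
    # one common variant of sMAPE; definitions in the literature differ slightly
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(2 * np.abs(y_true - y_pred) / (np.abs(y_true) + np.abs(y_pred)))

print(smape([1.0, 2.0, 4.0], [1.5, 2.0, 3.0]))  # ≈ 0.2286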
Q. Does MAPE necessarily belong to the interval \([0, 1]\)?
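A quick experiment with scikit-learn's implementation (toy numbers are ours) hints at the answer:

from sklearn.metrics import mean_absolute_percentage_error

# a prediction that overshoots the target badly enough gives a relative error above 1
print(mean_absolute_percentage_error([1.0], [3.0]))  # 2.0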
Simulated example#
Take a line and add random noise to it:
import numpy as np
import matplotlib.pyplot as plt
%config InlineBackend.figure_format = 'svg'

# sample noisy points around the line y = a*x + b
xs = np.linspace(0, 1, num=100, endpoint=False)
a, b = -0.5, 1.7
y = a * xs + b + 0.5 * np.random.randn(100)

plt.plot(xs, a * xs + b, c="r", lw=2, label="Ground truth")
plt.scatter(xs, y, c="b", s=10, label="Data")
plt.legend()
plt.grid(ls=":");
Fit a linear regression model and check the metrics:
from sklearn.linear_model import LinearRegression
from sklearn.metrics import (
    mean_absolute_error,
    mean_squared_error,
    mean_absolute_percentage_error,
    r2_score,
)
lin_reg = LinearRegression()
lin_reg.fit(xs[:, None], y)  # xs[:, None] turns the 1D array into an n×1 feature matrix
y_hat = lin_reg.predict(xs[:, None])
print("Bias:", lin_reg.intercept_)
print("Slope:", lin_reg.coef_[0])
mae = mean_absolute_error(y, y_hat)
mse = mean_squared_error(y, y_hat)
rmse = np.sqrt(mse)
R2 = r2_score(y, y_hat)
mape = mean_absolute_percentage_error(y, y_hat)
print(f"Mean Absolute Error (MAE): {mae:.4f}")
print(f"Mean Squared Error (MSE): {mse:.4f}")
print(f"Root Mean Squared Error (RMSE): {rmse:.4f}")
print(f"R2-score: {R2:.4f}")
print(f"Mean absolute percentage error(MAPE): {mape:.4f}")
Bias: 1.6147866931424257
Slope: -0.2978171096896356
Mean Absolute Error (MAE): 0.4457
Mean Squared Error (MSE): 0.3031
Root Mean Squared Error (RMSE): 0.5505
R2-score: 0.0238
Mean absolute percentage error(MAPE): 0.3983
Now turn one of the points into an outlier:
# shift a randomly chosen point upwards by M
M = 20
y[np.random.randint(len(y))] += M
plt.plot(xs, a * xs + b, c="r", lw=2, label="Ground truth")
plt.scatter(xs, y, c="b", s=10, label="Data")
plt.legend()
plt.grid(ls=":");
Fit linear regression once again:
lin_reg = LinearRegression()
lin_reg.fit(xs[:, None], y)
y_hat = lin_reg.predict(xs[:, None])
print("Bias:", lin_reg.intercept_)
print("Slope:", lin_reg.coef_[0])
Bias: 1.4761728317562872
Slope: 0.3862512971510485
Print metrics:
mae = mean_absolute_error(y, y_hat)
mse = mean_squared_error(y, y_hat)
rmse = np.sqrt(mse)
R2 = r2_score(y, y_hat)
mape = mean_absolute_percentage_error(y, y_hat)
print(f"Mean Absolute Error (MAE): {mae:.4f}")
print(f"Mean Squared Error (MSE): {mse:.4f}")
print(f"Root Mean Squared Error (RMSE): {rmse:.4f}")
print(f"R2-score: {R2:.4f}")
print(f"Mean absolute percentage error(MAPE): {mape:.4f}")
Mean Absolute Error (MAE): 0.6931
Mean Squared Error (MSE): 4.0067
Root Mean Squared Error (RMSE): 2.0017
R2-score: 0.0031
Mean absolute percentage error(MAPE): 0.5090
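Compare the two runs: MSE jumped from 0.30 to 4.01 and RMSE from 0.55 to 2.00, while MAE only grew from 0.45 to 0.69. A single outlier is enough to inflate the squared-error metrics dramatically, whereas MAE, as discussed above, is affected much less.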