Regression Performance

- Works for a **single model** or helps compare the **two**
- Displays a variety of plots related to the **performance** and **errors**
- Helps explore areas of **under-** and **overestimation**

Summary

The **Regression Performance** report evaluates the quality of a regression model.

It can also compare the model's performance to its past performance, or to the performance of an alternative model.

Requirements

To run this report, you need to have input features, and **both target and prediction** columns available.

To generate a comparative report, you will need **two** datasets. The **reference** dataset serves as a benchmark. We analyze the change by comparing the **current** production data to the **reference** data.
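
As an illustration (the column names and cutoff date here are made up), one common way to obtain the two datasets is to split a single production log by time, using the earlier window as the benchmark:

```python
import pandas as pd

# Hypothetical production log with a timestamp, actuals and predictions.
df = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-01", periods=10, freq="D"),
    "target": range(10),
    "prediction": [x + 0.5 for x in range(10)],
})

# The earlier period serves as the reference (benchmark),
# the later period as the current production data.
cutoff = pd.Timestamp("2024-01-06")
reference_data = df[df["timestamp"] < cutoff]
current_data = df[df["timestamp"] >= cutoff]
```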

You can also run this report for a **single** `DataFrame`, with no comparison performed. In this case, pass it as `reference_data`.

How it looks

The report includes 12 components. All plots are interactive.

1. **Model Quality Summary Metrics**

We calculate a few standard model quality metrics: Mean Error (ME), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE).

For each quality metric, we also show one standard deviation of its value (in brackets) to estimate the stability of the performance.
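
These metrics follow the standard definitions. As a rough sketch (not the library's internal code), here is how they and their standard deviations can be computed:

```python
import numpy as np

def regression_quality_metrics(actual, predicted):
    """Compute the summary metrics shown in the report:
    Mean Error (ME), Mean Absolute Error (MAE) and
    Mean Absolute Percentage Error (MAPE), each paired with
    one standard deviation of the per-row values.
    Assumes `actual` contains no zeros (needed for MAPE)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)

    error = predicted - actual                     # signed error
    abs_error = np.abs(error)                      # absolute error
    abs_perc_error = 100 * np.abs(error / actual)  # absolute % error

    return {
        "mean_error": (error.mean(), error.std()),
        "mean_abs_error": (abs_error.mean(), abs_error.std()),
        "mean_abs_perc_error": (abs_perc_error.mean(), abs_perc_error.std()),
    }

metrics = regression_quality_metrics([100, 200, 300], [110, 190, 330])
```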

2. **Predicted vs Actual**

Predicted versus actual values in a scatter plot.

3. **Predicted vs Actual in Time**

Predicted and Actual values over time or by index, if no datetime is provided.

4. **Error (Predicted - Actual)**

Model error values over time or by index, if no datetime is provided.

5. **Absolute Percentage Error**

Absolute percentage error values over time or by index, if no datetime is provided.

6. **Error Distribution**

Distribution of the model error values.

7. **Error Normality**

Quantile-quantile (Q-Q) plot to estimate the normality of the error distribution.

8. **Mean Error per Group**

We show a summary of the model quality metrics for each of the two groups: Mean Error (ME), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE).

9. **Predicted vs Actual per Group**

We plot the predictions, coloring them by the group they belong to. This visualizes the regions where the model underestimates and overestimates the target function.

10. **Error Bias: Mean/Most Common Feature Value per Group**

This table helps quickly see the differences in feature values between the 3 groups:

- **OVER** (top-5% of predictions with overestimation)
- **UNDER** (top-5% of predictions with underestimation)
- **MAJORITY** (the remaining 90%)

For the numerical features, it shows the mean value per group. For the categorical features, it shows the most common value.

If you have two datasets, the table displays the values for both REF (reference) and CURR (current).

If you observe a large difference between the groups, it means that the model error is sensitive to the values of a given feature.

Here is the formula used to calculate the Range %:

$Range\% = 100 * \left|\frac{V_{over} - V_{under}}{V_{max} - V_{min}}\right|$

where $V_{over}$ and $V_{under}$ are the mean feature values in the OVER and UNDER groups, and $V_{max}$ and $V_{min}$ are the maximum and minimum values of the feature in the dataset.
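
The group split and the Range % statistic can be sketched as follows (assumed logic based on the description above, not the library's exact implementation; `error` here is predicted minus actual, so the top-5% most positive errors are overestimation):

```python
import numpy as np
import pandas as pd

def error_bias_row(df, feature, error):
    """Split rows into UNDER (top-5% most negative errors),
    OVER (top-5% most positive errors) and MAJORITY (the
    remaining 90%), then summarize a numerical feature per
    group and compute its Range %."""
    low, high = np.quantile(error, [0.05, 0.95])
    under = df[error <= low]                   # strongest underestimation
    over = df[error >= high]                   # strongest overestimation
    majority = df[(error > low) & (error < high)]

    v_under = under[feature].mean()
    v_over = over[feature].mean()
    v_min, v_max = df[feature].min(), df[feature].max()
    range_pct = 100 * abs((v_over - v_under) / (v_max - v_min))

    return {
        "majority": majority[feature].mean(),
        "under": v_under,
        "over": v_over,
        "range": range_pct,
    }
```

A feature with a large Range % differs strongly between the extreme-error groups, flagging it for a closer look.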

11. **Error Bias per Feature**

For each feature, we show a histogram to visualize the **distribution of its values in the segments with extreme errors** and in the rest of the data. You can visually explore if there is a relationship between the high error and the values of a given feature.

Here is an example where extreme errors are dependent on the "temperature" feature.

12. **Predicted vs Actual per Feature**

For each feature, we also show the Predicted vs Actual scatterplot. We use colors to show the distribution of the values of a given feature. It helps visually detect and explore underperforming segments which might be sensitive to the values of the given feature.

Report customization

You can select which components of the report to display, or choose to show the short version of the report: Select Widgets.

When to use the report

Here are our suggestions on when to use it. You can also combine it with the Data Drift and Numerical Target Drift reports to get a comprehensive picture.

JSON Profile

If you choose to generate a JSON profile, it will contain the following information:

```json
{
  "regression_performance": {
    "name": "regression_performance",
    "datetime": "datetime",
    "data": {
      "utility_columns": {
        "date": "date",
        "id": null,
        "target": "target",
        "prediction": "prediction"
      },
      "cat_feature_names": [],
      "num_feature_names": [],
      "metrics": {
        "reference": {
          "mean_error": mean_error,
          "mean_abs_error": mean_abs_error,
          "mean_abs_perc_error": mean_abs_perc_error,
          "error_std": error_std,
          "abs_error_std": abs_error_std,
          "abs_perc_error_std": abs_perc_error_std,
          "error_normality": {
            "order_statistic_medians": [],
            "slope": slope,
            "intercept": intercept,
            "r": r
          },
          "underperformance": {
            "majority": {
              "mean_error": mean_error,
              "std_error": std_error
            },
            "underestimation": {
              "mean_error": mean_error,
              "std_error": std_error
            },
            "overestimation": {
              "mean_error": mean_error,
              "std_error": std_error
            }
          }
        },
        "current": {
          "mean_error": mean_error,
          "mean_abs_error": mean_abs_error,
          "mean_abs_perc_error": mean_abs_perc_error,
          "error_std": error_std,
          "abs_error_std": abs_error_std,
          "abs_perc_error_std": abs_perc_error_std,
          "error_normality": {
            "order_statistic_medians": [],
            "slope": slope,
            "intercept": intercept,
            "r": r
          },
          "underperformance": {
            "majority": {
              "mean_error": mean_error,
              "std_error": std_error
            },
            "underestimation": {
              "mean_error": mean_error,
              "std_error": std_error
            },
            "overestimation": {
              "mean_error": mean_error,
              "std_error": std_error
            }
          }
        },
        "error_bias": {
          "feature_name": {
            "feature_type": "num",
            "ref_majority": ref_majority,
            "ref_under": ref_under,
            "ref_over": ref_over,
            "ref_range": ref_range,
            "prod_majority": prod_majority,
            "prod_under": prod_under,
            "prod_over": prod_over,
            "prod_range": prod_range
          },
          "holiday": {
            "feature_type": "cat",
            "ref_majority": 0,
            "ref_under": 0,
            "ref_over": 0,
            "ref_range": 0,
            "prod_majority": 0,
            "prod_under": 0,
            "prod_over": 1,
            "prod_range": 1
          }
        }
      }
    }
  },
  "timestamp": "timestamp"
}
```
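
As a sketch (the exact section layout may differ between library versions), once the profile is parsed you can navigate it as a plain nested dictionary. The metric values below are a minimal hand-made stand-in following the structure above, not real output:

```python
import json

# Minimal stand-in for a parsed profile, mirroring the structure shown
# above; real profiles are generated by the library itself.
profile = json.loads("""
{
  "regression_performance": {
    "data": {
      "metrics": {
        "reference": {"mean_abs_error": 12.5, "mean_abs_perc_error": 4.2},
        "current": {"mean_abs_error": 18.1, "mean_abs_perc_error": 6.9}
      }
    }
  }
}
""")

metrics = profile["regression_performance"]["data"]["metrics"]
mae_ref = metrics["reference"]["mean_abs_error"]
mae_cur = metrics["current"]["mean_abs_error"]
print(f"MAE changed from {mae_ref} to {mae_cur}")
```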

Examples

- See a tutorial "How to break a model in 20 days" where we create a demand prediction model and analyze its gradual decay.
