We also calculate the Pearson correlation between the target (prediction) and each individual feature in the two datasets to detect a change in the relationship.
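The per-feature correlation check can be sketched with plain numpy. This is not the library's implementation; the column names (`f1`, `f2`, `target`) and the dict-of-lists data layout are illustrative assumptions.

```python
# Sketch: Pearson correlation between the target and each feature,
# computed separately for a reference and a current dataset.
# Column names and data values below are made up for illustration.
import numpy as np

def feature_target_correlations(data: dict, target: str) -> dict:
    """Pearson correlation between the target column and every other column."""
    y = np.asarray(data[target], dtype=float)
    return {
        name: float(np.corrcoef(np.asarray(values, dtype=float), y)[0, 1])
        for name, values in data.items()
        if name != target
    }

reference = {"f1": [1, 2, 3, 4, 5], "f2": [5, 4, 3, 2, 1], "target": [2, 4, 6, 8, 10]}
current = {"f1": [1, 2, 3, 4, 5], "f2": [1, 3, 2, 5, 4], "target": [10, 8, 6, 4, 2]}

ref_corr = feature_target_correlations(reference, "target")
cur_corr = feature_target_correlations(current, "target")
# A large gap between the two correlation values for the same feature
# signals a change in the feature-target relationship.
drift = {name: abs(ref_corr[name] - cur_corr[name]) for name in ref_corr}
```

Comparing `ref_corr` and `cur_corr` side by side is exactly the kind of shift the report surfaces: here `f1` flips from perfectly positive to perfectly negative correlation with the target.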
You can modify the drift detection logic by selecting one of the statistical tests already available in the library, including PSI, K-L divergence, Jensen-Shannon distance, and Wasserstein distance. See the documentation for more details about the available tests. You can also set a different confidence level or implement a custom test by defining custom options.
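To make the PSI option mentioned above concrete, here is a minimal numpy-only sketch of the Population Stability Index. It is not the library's implementation; the bin count of 10 and the 0.2 alert threshold in the comment are common rules of thumb, not library defaults.

```python
# Sketch: Population Stability Index (PSI) between a reference sample
# and a current sample. PSI near 0 means similar distributions; values
# above ~0.2 are often treated as meaningful drift (rule of thumb).
import numpy as np

def psi(reference, current, bins: int = 10, eps: float = 1e-6) -> float:
    reference = np.asarray(reference, dtype=float)
    current = np.asarray(current, dtype=float)
    # Bin edges are derived from the reference distribution.
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip empty bins so the logarithm is defined.
    ref_pct = np.clip(ref_pct, eps, None)
    cur_pct = np.clip(cur_pct, eps, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

rng = np.random.default_rng(0)
same = psi(rng.normal(0, 1, 5000), rng.normal(0, 1, 5000))      # no drift
shifted = psi(rng.normal(0, 1, 5000), rng.normal(1, 1, 5000))   # mean shift
```

Deriving the bin edges from the reference data is a deliberate choice: it keeps the comparison anchored to the distribution the model was trained on, at the cost of ignoring current values that fall outside the reference range.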
How it looks
The report includes 4 components. All plots are interactive.
1. Target (Prediction) Drift
The report first compares the target (prediction) distributions in the current and reference datasets. The result of the statistical test and its p-value are displayed in the title.
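For a distribution comparison with a p-value of this kind, a two-sample Kolmogorov-Smirnov test is one common choice. The sketch below is numpy-only and uses the standard asymptotic approximation for the p-value; it is an illustration of the idea, not the library's code.

```python
# Sketch: two-sample Kolmogorov-Smirnov test for comparing target
# (prediction) distributions. Uses the asymptotic p-value approximation.
import numpy as np

def ks_2sample(ref, cur):
    """Return the KS statistic D and an asymptotic p-value."""
    ref = np.sort(np.asarray(ref, dtype=float))
    cur = np.sort(np.asarray(cur, dtype=float))
    pooled = np.concatenate([ref, cur])
    # Empirical CDFs of both samples evaluated on the pooled values.
    cdf_ref = np.searchsorted(ref, pooled, side="right") / len(ref)
    cdf_cur = np.searchsorted(cur, pooled, side="right") / len(cur)
    d = float(np.max(np.abs(cdf_ref - cdf_cur)))
    # Asymptotic Kolmogorov distribution (Numerical Recipes correction).
    n_eff = len(ref) * len(cur) / (len(ref) + len(cur))
    lam = (np.sqrt(n_eff) + 0.12 + 0.11 / np.sqrt(n_eff)) * d
    p = 2.0 * sum((-1) ** (k - 1) * np.exp(-2.0 * k * k * lam * lam)
                  for k in range(1, 101))
    return d, float(min(max(p, 0.0), 1.0))

rng = np.random.default_rng(0)
ref = rng.normal(0, 1, 1000)
d_same, p_same = ks_2sample(ref, rng.normal(0, 1, 1000))    # same distribution
d_shift, p_shift = ks_2sample(ref, rng.normal(1, 1, 1000))  # shifted mean
```

A small p-value (e.g. below the chosen confidence level) means the two target distributions are unlikely to come from the same source, which is what the report's title flags as detected drift.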
2. Target (Prediction) Correlations
The report shows the correlations between individual features and the target (prediction) in the current and reference datasets. It helps detect shifts in the relationship.
3. Target (Prediction) Values
The report visualizes the target (prediction) values by index or time (if the datetime column is available or defined in the column_mapping dictionary). This plot helps explore the target behavior and compare it between the datasets.
4. Target (Prediction) Behavior By Feature
Finally, we generate an interactive table with the visualizations of dependencies between the target and each feature.
If you click on any feature in the table, you get an overview of its behavior.
The plot shows how feature values relate to the target (prediction) values and whether there are differences between the datasets. It helps explore whether these relationships can explain the target (prediction) shift.
We recommend paying attention to the behavior of the most important features since significant changes might confuse the model and cause higher errors.
For example, in the Boston housing dataset, we can see a new segment with TAX values above 600 but a low value of the target (house price).
When to use this report
1. Before model retraining. Before feeding fresh data into the model, you might want to verify whether it even makes sense.
2. When you are debugging model decay. If you observe a drop in performance, this report can help you see what has changed.
3. When you are flying blind, and no ground truth is available. If you do not have immediate feedback, you can use this report to explore the changes in the model output and the relationship between the features and prediction. This can help anticipate data and concept drift.
If you choose to generate a JSON profile, it will contain the following information: